
Year : 2020  |  Volume : 3  |  Issue : 4  |  Page : 817-828

Basics of statistics – 4: Sample size calculation (ii): A narrative review

Department of Medical Oncology and Hemato-Oncology, Command Hospital Air Force, Bengaluru, Karnataka, India

Date of Submission: 15-Aug-2020
Date of Decision: 15-Sep-2020
Date of Acceptance: 10-Dec-2020
Date of Web Publication: 25-Dec-2020

Correspondence Address:
H S Darling
Department of Medical Oncology and Hemato-Oncology, Command Hospital Air Force, Bengaluru - 560 007, Karnataka

Source of Support: None, Conflict of Interest: None

DOI: 10.4103/CRST.CRST_279_20



Sample size calculation is one of the most crucial aspects of planning clinical trials. Through this series, we aim to delve into the basic concepts of sample size calculation and the various methodologies used for it. For the purpose of this review, a thorough search and selection of the available literature from various sources, including PubMed and Medline, was conducted. The search terms used were “sample size,” “calculation,” and “oncology,” and 1370 articles were screened. The relevant material has been presented in a simplified format along with appropriate examples mimicking real-world clinical situations. This review article provides a brief overview of the sample size calculation methods used when designing clinical studies. Extensive details about the methods have been avoided to keep the write-up short, crisp, and focused. The intricacies of complex topics will be dealt with in the subsequent issues.

Keywords: Basics of statistics, correlation, person-time data, regression, sample size

How to cite this article:
Darling H S. Basics of statistics – 4: Sample size calculation (ii): A narrative review. Cancer Res Stat Treat 2020;3:817-28

How to cite this URL:
Darling H S. Basics of statistics – 4: Sample size calculation (ii): A narrative review. Cancer Res Stat Treat [serial online] 2020 [cited 2021 Jan 20];3:817-28. Available from: https://www.crstonline.com/text.asp?2020/3/4/817/304960

  Introduction

(By continually searching, researching, and deliberating, one has realized the true essence of reality).[1] In the last issue, we studied the concepts and formulae of sample size calculation for precision methods; hypothesis testing for comparison of means; binomial proportions; and independent, dependent, and matched samples.[2] In this article, we will discuss sample size calculation for more complex methods such as regression and correlation, and in practical circumstances, namely equivalence trials, crossover trials, and stratified person-time data. Through this review, we aim to present the basic concepts of sample size calculation in a simplified, illustrated form.

  Methods

We performed a thorough search and selection of the available literature from various sources, as depicted in the flow diagram [Figure 1]. The databases used were PubMed and Medline, along with other sources (including Sri Granth, the Oxford Handbook of Medical Statistics, and Fundamentals of Biostatistics by Rosner). The search terms used were “sample size,” “calculation,” and “oncology.” A total of 1378 non-duplicate citations were screened. This being a narrative review, articles with relevant information and illustrations meeting the purpose of this review were included, and all the other articles were excluded. The extracted material has been presented in a simplified format along with appropriate examples mimicking real-world clinical situations.
Figure 1: The methodology of the study search and selection


  Definitions

Some definitions relevant to the current discussion are presented below.

Binomial distribution

It is a type of probability distribution in which there are only two possible outcomes. For example, on flipping a coin, we get either a head or a tail. Hence, if we plot the number of heads obtained in a fixed number of flips against the probability of each count, we get a symmetrical, bell-shaped graph made up of discrete vertical lines.[3]

Binomial proportions

It is defined as the number of successes divided by the number of trials. It is used in studies having binary or dichotomous outcomes. For example, if we compare lung cancer in smokers versus nonsmokers, there are two binomial proportions – first, lung cancer cases in smokers and second, lung cancer cases in nonsmokers. Similarly, if we have three exposure categories, nonsmokers, light smokers, and heavy smokers, then there will be three binomial proportions.[4]

Carry-over effect

In a cross-over study (explained further), when two arms are switched over, the residual effect of the intervention given in period 1 (treatment before cross-over) may interfere with the results in period 2 (treatment after cross-over). This residual effect is called the carry-over effect. Hence, a washout period is sandwiched after completion of period 1 and before entering period 2 to mitigate this effect.[5]

Correlation coefficient

It is a calculated value used to quantify the association between two continuous variables. It is a dimensionless quantity and its value ranges from –1 to 1. It is denoted by the symbol ρ (rho).[5]

Crossover study

It is a type of two-arm study where each arm receives two treatments in turn.[5]

Drop-in rate

It is defined as the proportion of participants in the placebo group that actually receives the active treatment outside the study protocol.[5]

Dropout rate

It is defined as the proportion of participants in the active treatment group that fails to actually receive the active treatment.[5]

Equivalence studies

These studies seek to test if a new treatment is equiefficacious to an existing treatment.[3]

Incidence density

The incidence density in a group is defined as the number of events in that group divided by the total person-time accumulated during the study in that group.[5]

Person-time data

In this type of data analysis, the unit of analysis is the number of subjects multiplied by the time spent in the study. For example, in a longitudinal study, some participants may contribute more years of follow-up than others.[5]

Superiority studies

These are the most commonly conducted clinical studies where the hypothesis is to test the superiority of one intervention over the other(s).[5]

  Sample Size Calculation in Complex Situations

Sample size estimation in a clinical trial setting[5]

In the previous issue, we estimated the sample size for planned clinical trials assuming that the compliance with the comparator interventions would be perfect. However, in reality, compliance may not be perfect. Therefore, the sample size estimates will change if noncompliance is factored in. There are two potential sources of noncompliance – drop-ins and dropouts.

Sample size estimation to compare two binomial proportions in a clinical trial setting (independent-sample case): Difference from an ideal situation.

Hypothesis testing, H0: p1 = p2 versus H1: p1 ≠ p2, with |p1 − p2| = Δ, assuming an equal number of participants in each arm, where λ1 is the dropout rate and λ2 is the drop-in rate.

If there are no drop-ins or dropouts, then:

n1 = n2 = (z1−α/2 √[2pq] + z1−β √[p1q1 + p2q2])²/Δ², where p = (p1 + p2)/2 and q = 1 − p.

But, when we take drop-ins and dropouts into account,

n1 = n2 = (z1−α/2 √[2p*q*] + z1−β √[p1*q1* + p2*q2*])²/(Δ*)², where p* = (p1* + p2*)/2,

Where p1* = (1 − λ1) p1 + λ1p2

p2* = (1 − λ2) p2 + λ2p1

Δ* = (1 − λ1 − λ2) Δ

Suppose a clinical trial is planned to test the efficacy of hydroxychloroquine (HCQ) to prevent the severe acute respiratory syndrome-coronavirus-2 infection in health-care workers exposed to a high-risk environment. Considering the previous data estimate of 10% polymerase chain reaction positivity rate over a 3-month period, how many subjects should be recruited in each arm (HCQ vs. placebo) if 25% reduction in infection risk is clinically meaningful? We expect a 5% dropout rate from the HCQ arm and a 10% drop-in rate in the placebo arm. Let us take 80% power with a two-tailed α of 5%.

p1* = (1 − 0.05) 0.075 + (0.05 × 0.1) = 0.07625

p2* = (1 − 0.1) 0.1 + (0.1 × 0.075) = 0.0975

△* = (1 − 0.05 − 0.1) 0.025 = 0.02125

n1 = n2 = (1.96 √[2 × 0.086875 × 0.913125] + 0.84 √[0.07625 × 0.92375 + 0.0975 × 0.9025])²/(0.02125)²

= 2753.4, that is, 2754 subjects in each arm.

If we assume complete compliance, then the required sample size will be:

n1 = n2 = (1.96 √[2 × 0.0875 × 0.9125] + 0.84 √[0.075 × 0.925 + 0.1 × 0.9])²/(0.025)²

= 2003.46.

Hence, we will need fewer subjects in each arm if there is no noncompliance or when it is not factored in.
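The adjustment above can be sketched in Python. This is an illustrative sketch (not from the source), using the two-proportion formula with the rounded values z1−α/2 = 1.96 and z1−β = 0.84, so the totals may differ from hand calculations by a subject or two:

```python
from math import ceil, sqrt

def n_two_proportions(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Per-arm sample size for H0: p1 = p2 (two-tailed), equal arms."""
    p_bar = (p1 + p2) / 2
    core = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
            + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2)))
    return core ** 2 / (p1 - p2) ** 2

# HCQ example: 10% infection rate on placebo, 25% relative risk reduction
p1, p2 = 0.075, 0.100            # HCQ arm, placebo arm
lam1, lam2 = 0.05, 0.10          # dropout rate, drop-in rate

# Compliance-adjusted event rates
p1_star = (1 - lam1) * p1 + lam1 * p2   # 0.07625
p2_star = (1 - lam2) * p2 + lam2 * p1   # 0.0975

print(ceil(n_two_proportions(p1, p2)))            # perfect compliance
print(ceil(n_two_proportions(p1_star, p2_star)))  # noncompliance factored in
```

The noncompliance-adjusted estimate is larger, driven by the shrinkage of the detectable difference, Δ* = (1 − λ1 − λ2) Δ.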

Sample size estimation for correlation studies – one correlation[5]

Correlation studies are used to quantify the association between two continuous variables by testing the hypothesis H0: ρ = 0 versus H1: ρ = ρ0 > 0. The required sample size is:

n = ([z1−α + z1−β]²/z0²) + 3.

Where z0 is the Fisher's z transformation of the correlation coefficient specified under the alternative hypothesis, z0 = 0.5 (ln [1 + ρ0] − ln [1 − ρ0]). z0 can also be taken from the Fisher's z transformation tables, which are easily available online.

For example, we want to know if children whose parents have a high body mass index (BMI) tend to have a higher BMI as compared to the age-matched standard population. For this, we need to estimate Pearson's population correlation coefficient, ρ, whereby a value of +1 indicates a strong positive relationship, −1 indicates a strong negative relationship, and 0 indicates no relationship at all. Suppose from the population genetics estimates, we expect the correlation coefficient to be 0.5; then the sample size required to confirm or refute this correlation with 90% power and a one-tailed (because we expect the BMI to be equal to or higher than that of the age-matched standard population) significance of 5% will be:

n = ([z0.95 + z0.90]²/0.549²) + 3

= ([1.645 + 1.29]²/0.549²) + 3

= 31.58.

Hence, 32 subjects will be required to establish this correlation.
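The same arithmetic can be sketched in Python, using math.atanh for Fisher's z transformation and the quantiles z0.95 = 1.645 and z0.90 ≈ 1.282 (a minimal illustration, not from the source):

```python
from math import atanh, ceil

def n_one_correlation(rho0, z_alpha=1.645, z_beta=1.282):
    """Sample size to detect H1: rho = rho0 > 0 (one-tailed test)."""
    z0 = atanh(rho0)   # Fisher's z: 0.5 * ln((1 + rho0) / (1 - rho0))
    return ((z_alpha + z_beta) / z0) ** 2 + 3

# BMI example: expected correlation 0.5, 90% power, one-tailed 5% alpha
print(ceil(n_one_correlation(0.5)))
```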

Sample size for two correlations – two continuous variables

To test the hypothesis H0: ρ1 = ρ2 vs. H1: ρ1 ≠ ρ2, the required sample size is[6]

n = 2 ([z1−α + z1−β]/[z1 − z2])² + 3.

Where z1 and z2 are the Fisher's z transformations of the population correlation coefficients for cohort 1 and cohort 2.

Suppose we want to establish the dependency of the BMI of children on the BMI of their parents. We take one group of children living with their biological parents, assuming the correlation coefficient ρ1 of 0.5. We compare this with the second cohort of children living with their adoptive parents, assuming a ρ2 of 0.1. Assuming equal number of participants in each group, the sample size required for an 80% power and one-tailed significance of 5% will be:

n = 2 ([1.645 + 0.84]/[0.549 − 0.1])² + 3

= 64.25.

Hence, 65 participants will be required in each group.

If there are constraints to reach the desired sample size, for example, we can only reach a sample size of 40 (n2) in the adoptive parents' group, then n1 can be calculated as:

n1 = (n n2 + 3n2 − 6n)/(2n2 − n − 3)

n1 = (65 [40] + 3 [40] − 6 [65])/(2 [40] − 65 − 3) = 194.17.

Hence, if the second group has only forty participants, then the first group will be required to have 195 participants.
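Both steps of this example can be sketched in Python (an illustration, not from the source; z values 1.645 and 0.84 are assumed):

```python
from math import atanh, ceil

Z_A, Z_B = 1.645, 0.84       # one-tailed alpha of 5%, 80% power

# Fisher's z for the two assumed correlations (0.5 and 0.1)
z1, z2 = atanh(0.5), atanh(0.1)
n = 2 * ((Z_A + Z_B) / (z1 - z2)) ** 2 + 3
print(ceil(n))                # per-group size with equal groups

# Unequal groups: only n2 = 40 subjects reachable in the second cohort
n_eq, n2 = ceil(n), 40
n1 = (n_eq * n2 + 3 * n2 - 6 * n_eq) / (2 * n2 - n_eq - 3)
print(ceil(n1))               # required size of the first cohort
```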

Sample size for simple linear regression[7]

If we want to find the correlation between an independent variable x and a dependent variable y, we can draw a regression line with the help of simple linear regression [Figure 2]. The slope (λ1) of the regression line describes a direct (λ1 > 0), inverse (λ1 < 0), or no (λ1 = 0) relationship between the two variables. The correlation coefficient (as described above) quantifies this relationship. How scattered the distribution of the variables is around the regression line is described by the “goodness of fit” test, the F-test (which will be discussed in the subsequent issues). The sample size for the hypothesis H0: λ1 = 0 versus H1: λ1 ≠ 0 is calculated as:
Figure 2: Simple linear regression, λ1 >0 (left), λ1 <0 (right)

Click here to view

n = ([z1−α/2 + z1−β] σ/[λ1 σx])².

Where σ is the standard deviation (SD) of the regression errors, calculated as σ = √(σy² − λ1² σx²),

σx is the SD of the independent variable,

ρ is the correlation coefficient, calculated as ρ = σxλ1/σy,

λ1 is the slope of the linear regression line.

For example, to derive any correlation between the age of onset of tobacco chewing and the age of onset of head-and-neck squamous cell carcinoma (HNSCC) in central or northeastern India, in subjects aged <40 years, what sample size will be required to conduct a retrospective study with 80% power and a two-tailed α of 5%? Let us presume that the review of the previous literature suggests that the age of onset of tobacco chewing has a SD of 5 years and the age of HNSCC onset has a SD of 8 years. If we expect the slope λ1 to be +0.3, then σ = √(8² − 0.3² × 5²) = 7.86, and the sample size required will be calculated as:

n = ([1.96 + 0.842] × 7.86/[0.3 × 5])² = 216.

Hence, 216 subjects will be required.
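This worked example can be reproduced in Python. The sketch below assumes a slope λ1 = 0.3 with SDs of 5 and 8 years, which reproduces the printed intermediate values (σ = 7.86, n = 216); it is an illustration, not part of the original methods:

```python
from math import ceil, sqrt

def n_simple_regression(slope, sd_x, sd_y, z_alpha=1.96, z_beta=0.842):
    """Sample size for H0: slope = 0 in simple linear regression."""
    sigma = sqrt(sd_y ** 2 - slope ** 2 * sd_x ** 2)  # SD of regression errors
    return ((z_alpha + z_beta) * sigma / (slope * sd_x)) ** 2

# HNSCC example: SD(age of tobacco onset) = 5 y, SD(age of HNSCC onset) = 8 y
print(ceil(n_simple_regression(slope=0.3, sd_x=5, sd_y=8)))
```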

Sample size for linear regression comparing two slopes[8]

Suppose we want to compare two linear regression slopes, pertaining to the presence or absence of a risk factor [Figure 3], the minimum sample size required to test the hypothesis H0: λ1 = λ2 versus H1: λ1 ≠ λ2 can be calculated as:
Figure 3: Linear regression comparing two slopes

Click here to view

nexposed = ([z1−α/2 + z1−β]² σR²)/(λ2 − λ1)²

nunexposed = k × nexposed

Where σR² is the variance of the regression errors of both the groups, calculated as σR² = σp² (1/[k σx1²] + 1/σx2²),

σp2 is the pooled variance,

λ1 and λ2 are the slopes of the first and second linear regression lines, respectively.

For example, if we want to compare the pulse rate and body weight of adult patients without (Group 1) and with cancer (Group 2), what sample size will be required for both the arms?

Let us assume that the SDs for pulse rate in Group 1 and Group 2 are 11 and 10 beats per min, respectively, and we choose to take Group 1 and Group 2 in the ratio of 2:1 (k = 2). Suppose a rough estimate suggests that the regression slopes measure 0.04 and 0.06 per beat per min per kg, respectively, and assuming a pooled SD for both groups of 0.54, the sample size required to achieve a two-tailed significance of 5% and 80% power can be calculated as:

ncancer = ([1.96 + 0.842]² × 0.0044)/(0.02)² = 86.3, hence 87 patients in Group 2.

nnoncancer = 2 × 87 = 174 patients in Group 1.

Hence, a total of 261 patients will have to be recruited in the study.

Sample size estimation for simple logistic regression[5],[9],[10]

Logistic regression methods allow us to ascertain the association between the various forms of exposure (continuous or categorical) and a binary outcome variable, while also letting us control one or more confounding (categorical or continuous) variables.

Simple logistic regression allows us to find the association between one independent, continuous, normally distributed exposure variable and a dependent, dichotomous outcome variable, when there are no covariates (or confounding variables). To test the hypothesis H0: β1 = 0 versus H1: β1 = β* ≠ 0 with a two-tailed type I error = α and power = 1 − β, the sample size can be calculated as:

ncont, no cov = (z1−α/2 + z1−β)²/(p1 [1 − p1] β*²).

Where β* = projected effect for a 1 SD increase in x1 (the exposure variable) calculated as:

β* = ln (p2 [1 − p1]/p1 [1 − p2]),

p1 = Event rate at the mean of X1,

p2 = Event rate at X1 + 1 SD,

Suppose we want to test whether, in a cohort of patients with non-small cell lung carcinoma (NSCLC), smokers with a higher number of pack-years require more frequent hospitalizations for pneumonia. If 30% get admitted at a mean pack-year consumption of 25, and 50% get admitted at 35 pack-years (+1 SD), what should be the sample size to test this dependence with 80% power and a two-tailed α of 5%? The sample size can be calculated as:

β* = ln ([0.5 × 0.7]/[0.5 × 0.3]) = 0.847.

ncont, no cov = ([1.96 + 0.84]²/[0.3 × 0.7 × 0.847²]) = 52; hence, 52 patients, or 16 events (52 × 0.3), will be required.

Logistic regression with normally distributed exposure variable along with other covariates[5]

Testing the hypothesis:

H0: β1 = 0; β2, …, βk ≠ 0 versus

H1: β1 = β*; β2, …, βk ≠ 0, β* ≠ 0

Suppose in the example quoted above, there are other independent variables as well, for example, the location of the primary tumor, coexisting chronic obstructive pulmonary disease or interstitial lung disease, or poorly controlled diabetes mellitus; then we will have to linearly regress our main study variable on the other covariates to control for the confounding factors and obtain a value R², the proportion of variance of x1 explained by x2, …, xk. (The details of regression will be dealt with in the future series.) Hence, the formula for sample size calculation becomes:

ncont, cov = ncont, no cov/(1 − R2),

Considering the same example as above, if we have R2 = 0.7, then

ncont, cov = 52/(1 − 0.7) = 173.33

Hence, 174 patients, or 52 events, will be required to test the hypothesis.

Sample size estimation for binary independent variable, with no other covariates[5]

The sample size can be calculated as:

nbin, no cov = (z1−α/2 [pq/B]½ + z1−β [p1q1 + p2q2 (1 − B)/B]½)²/([p1 − p2]² [1 − B]), where q = 1 − p.

Where p1, p2 = event rates at x1 = 0 and x1 = 1, respectively,

B = proportion of sample where x1 = 1,

p = (1 − B) p1 + Bp2.

Now, using the same example as above, suppose we want to compare the need for hospitalization for lower respiratory tract infections (LRTIs) in men (x1 = 1) versus women (x1 = 0) with NSCLC. Suppose 70% of the patients are men, and 40% of the men and 20% of the women get admitted for LRTI. We have to test whether LRTI admissions are gender dependent. The sample size required for a two-tailed type I error of 5% and 80% power can be calculated as:

nbin, no cov = (1.96 [0.34 (1 − 0.34)/0.7]½ + 0.84 [0.2 (1 − 0.2) + 0.4 (1 − 0.4) (1 − 0.7)/0.7]½)²/([0.2 − 0.4]² [1 − 0.7])

= 208.9.

We calculate p as, p = ([1 − 0.7] 0.2) + (0.7 × 0.4) = 0.34.

Hence, a minimum of 209 patients will be required to prove the hypothesis (146 men and 63 women).

Binary independent variable with covariates present[5]

If we have multiple covariates, such as smoking, location of the tumor, and comorbidities, then the above formula will be further modified to

nbin, cov = nbin, no cov/(1 − R²)

Again, using R² = 0.7 in the above example,

nbin, cov = 209/(1 − 0.7) = 696.67.

Hence, 697 patients will have to be recruited.

Sample size estimation for equivalence studies[5]

Till now, we have discussed sample size calculation for studies where the alternate hypothesis is that the effects of the two treatments are different. Contrary to this, equivalence studies try to prove that the two treatment arms are equiefficacious. The treatments are considered equivalent (in the sense that the experimental treatment [Group 2] is not substantially worse than the standard treatment [Group 1]) if the upper bound of a one-sided 100% × (1 − α) confidence interval (CI) for p1 − p2 is ≤ δ, where p1 and p2 are the treatment success rates in Groups 1 and 2, respectively, and δ is the minimum specified threshold difference to establish equivalence. The sample size for equivalence studies is usually larger than that for superiority studies.

The sample size is calculated as:

n1 = ([p1q1 + p2q2/k] [z1−α + z1−β]²)/(δ − [p1 − p2])²

Suppose we want to compare the efficacies of treatment modalities of low-risk prostate cancer treated with radical prostatectomy (Group 1) versus radiotherapy (Group 2). We expect 70% recurrence-free survival rates in Group 1 and 65% in Group 2 at 2 years. What sample size will be required to establish a threshold for equivalence of 10% based on the upper bound of a one-sided 95% CI, with 80% power and an equal number of subjects in both the groups? The sample size can be calculated as

n1 = ([0.7 × 0.3 + 0.65 × 0.35] [1.645 + 0.84]²)/(0.1 − [0.7 − 0.65])²

=1080.6 = n2.

Hence, 1081 patients will be required in each group.
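As a sketch (not from the source), the equivalence calculation in Python, with k as the allocation ratio n1/n2 and the one-sided z values 1.645 (α) and 0.84 (power):

```python
from math import ceil

def n_equivalence(p1, p2, delta, k=1.0, z_alpha=1.645, z_beta=0.84):
    """Per-group size to show Group 2 is at most delta worse than Group 1."""
    q1, q2 = 1 - p1, 1 - p2
    return ((p1 * q1 + p2 * q2 / k) * (z_alpha + z_beta) ** 2
            / (delta - (p1 - p2)) ** 2)

# Prostatectomy (70%) vs. radiotherapy (65%), 10% equivalence threshold
print(ceil(n_equivalence(0.70, 0.65, 0.10)))
```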

Sample size estimation for cross-over studies[5]

In contrast to equivalence trials, the cross-over design is helpful in completing studies with a smaller sample size than usual clinical trials. However, the absence of a carry-over effect is a prerequisite.

To test the hypothesis H0: Δ = 0 versus H1: Δ ≠ 0, with equal number of subjects in both arms and two-tailed significance, the sample size can be calculated as:

n = (σd² [z1−α/2 + z1−β]²)/(2Δ²)

Where Δ = Benefit for treatment 1 versus treatment 2,

σd² = Variance of (treatment 1 response − treatment 2 response).

For example, if we want to study the effect of an anxiolytic medicine, X, on mean blood pressure (BP), what sample size will be required to test the hypothesis with 90% power? Consider the expected fall in mean BP to be 2 mmHg and the intra-individual variance of mean BP to be 34 mmHg. The drug X and placebo will be taken for 2 weeks (period 1) by Group A and Group B, respectively, followed by a 1-week washout period for both groups. After cross-over, Group B takes drug X for 2 weeks and Group A takes the placebo. The mean BP (an average of three readings) is measured at baseline and at the end of both periods. The sample size will be calculated as:

n = (34 × [1.96 + 1.28]²)/(2 × 2²) = 44.6.

Hence, 45 subjects in each group will be required to prove the BP-lowering effect of drug X against the placebo.

If the same hypothesis is to be tested without a cross-over design, using the so-called parallel group design, then the sample size formula will be:

n = 2 (σd² [z1−α/2 + z1−β]²)/Δ²,

Hence, we will require a four times larger sample in each (drug X and placebo) group. However, there will be no possibility of confounding by a potential carry-over effect.
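The crossover calculation and its parallel-group counterpart can be sketched in Python (an illustration, not from the source; z0.975 = 1.96 and z0.90 = 1.28 are assumed):

```python
from math import ceil

def n_crossover(var_d, delta, z_alpha=1.96, z_beta=1.28):
    """Per-group size for a two-period crossover design, two-tailed test."""
    return var_d * (z_alpha + z_beta) ** 2 / (2 * delta ** 2)

# BP example: intra-individual variance 34, expected fall 2 mmHg, 90% power
n_cross = n_crossover(var_d=34, delta=2)
print(ceil(n_cross))        # 45 per group
print(ceil(4 * n_cross))    # a parallel-group design needs 4x as many
```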

Sample size estimation for comparing binomial proportions from clustered binary data[5]

In the last issue, we studied the two-sample tests for binomial proportions, considering that the observations in each sample are statistically independent. This may not always be the case. For example, if an individual contributes more than one observation, there may be intra-individual dependence. This brings in the concept of clustered binary data. To test the hypothesis H0: p1 = p2 versus H1: p1 ≠ p2, where p1 and p2 are the success rates in Groups 1 and 2, respectively, the sample size can be calculated as:

n1 = n2 = C (z1−α/2 √[2pq] + z1−β √[p1q1 + p2q2])²/(m [p1 − p2]²)

Where p = (p1 + p2)/2 and q = 1 − p,

m is the average number of observations per individual,

n1, n2 and m1, m2 are the numbers of individuals and observations in the two groups, respectively,

ρ is the intraclass correlation coefficient,

C = 1 + (m − 1) ρ is the clustering correction factor.

Suppose we want to compare the efficacy of the granulocyte colony-stimulating factor (GCSF) with pegylated (PEG)-GCSF in the prevention of episodes of febrile neutropenia (FN) in two groups of patients on chemotherapy. Both groups will be monitored during six cycles of chemotherapy. It is expected that on GCSF, 20% of the chemotherapy sessions will be followed by FN, and reduction of FN to 15% with PEG-GCSF would be clinically significant. What number of patients is needed to be enrolled equally in the two groups to have 80% power and 5% two-tailed significance? Let us assume ρ = 0.6

n1 = n2 = 4 (1.96 √[0.175 × 0.825 × 2] + 0.84 √[0.2 × 0.8 + 0.15 × 0.85])²/(6 × 0.05²)

= 602.9

Hence, 603 patients will be required in each group.
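As a sketch, the clustered calculation in Python, assuming the clustering correction enters as the design effect C = 1 + (m − 1)ρ (consistent with the factor of 4 in the worked example) and the rounded z values 1.96 and 0.84:

```python
from math import ceil, sqrt

def n_clustered_binary(p1, p2, m, rho, z_alpha=1.96, z_beta=0.84):
    """Individuals per group, each contributing m binary observations."""
    p_bar = (p1 + p2) / 2
    c = 1 + (m - 1) * rho                       # clustering correction factor
    core = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
            + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return c * core / (m * (p1 - p2) ** 2)

# FN example: 20% vs. 15% per chemotherapy cycle, 6 cycles, rho = 0.6
print(ceil(n_clustered_binary(0.20, 0.15, m=6, rho=0.6)))
```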

Sample size estimation for the comparison of two incidence rates from person-time data[5]

Suppose in a longitudinal study we are comparing two arms, one with (Group 1) and one without (Group 2) an exposure, to find out the incidence rate of an outcome variable. To test the hypothesis H0: ID1 = ID2 versus H1: ID1 ≠ ID2, where ID1 and ID2 are incidence densities in Groups 1 and 2, respectively, the number of events required to establish the alternative hypothesis is

m = (z1−α/2 √[p0q0] + z1−β √[p1q1])²/(p0 − p1)²

And the number of participants required is:

n1 = m/(ID1t1* + k ID2t2*), n2 = kn1,

Where p0= t1/(t1+ t2),

p1= t1 IRR/(t1 IRR + t2),

IRR (incidence rate ratio) = ID1/ID2,

t1, t2= Total number of person-years in Groups 1 and 2, respectively,

t1*, t2* = Average number of person-years per subject in Groups 1 and 2, respectively,

Suppose we want to test if aspirin significantly reduces the chance of colorectal carcinoma (CRC). For this, we take two groups, Group 1 with no aspirin prophylaxis and Group 2 with aspirin prophylaxis, and conduct the study for 5 years. We want 80% power and a two-tailed α = 5%, with k = 1. The expected incidence rate of CRC in the control group is 200/100,000 person-years, and we anticipate a 20% reduction in the incidence rate in the aspirin group.

p0 = 0.5.

The required number of events will be:

m = (1.96 √[0.5 × 0.5] + 0.84 √[0.556 × 0.444])²/(0.5 − 0.556)² = 633.

n1 = n2 = 633/([200/100,000] [5] + [160/100,000] [5]) = 35,166.7.

Hence, 35,167 participants will be required in each group.

Group 1 will have (5 × 35,167 × 200)/100,000 = 352 events, and Group 2 will have 281 events.
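The event and participant counts can be sketched in Python (an illustration, not from the source; rounded z values 1.96 and 0.84 are assumed, with equal person-time per subject in the two groups):

```python
from math import ceil, sqrt

def events_person_time(id1, id2, t1, t2, z_alpha=1.96, z_beta=0.84):
    """Events needed to compare two incidence densities (person-time data)."""
    irr = id1 / id2                            # incidence rate ratio
    p0 = t1 / (t1 + t2)
    p1 = t1 * irr / (t1 * irr + t2)
    return ((z_alpha * sqrt(p0 * (1 - p0)) + z_beta * sqrt(p1 * (1 - p1))) ** 2
            / (p0 - p1) ** 2)

# Aspirin/CRC example: 200 vs. 160 per 100,000 person-years, 5-year study
id1, id2 = 200 / 100_000, 160 / 100_000
m = events_person_time(id1, id2, t1=1, t2=1)   # equal total person-time
n_per_group = ceil(m) / (id1 * 5 + id2 * 5)    # 5 person-years per subject
print(ceil(m), ceil(n_per_group))
```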

Sample size estimation for the incidence rate for stratified person-time data[5]

This differs from the previous method (sample size estimation for the comparison of two incidence rates from person-time data) in only one aspect: here, we are controlling for confounding variables. For example, in the study discussed above, the age-specific incidence rates and the age distribution may be different for different age strata. To account for this confounding, we will use the stratified person-time data method. Suppose we want to test the hypothesis H0: IRR = 1 versus H1: IRR ≠ 1, where IRR = ID1i/ID2i = ratio of the incidence densities of the control compared with the intervention group subjects in the ith stratum. The IRR is assumed to be the same for all strata. For two-tailed significance, the total expected number of CRC cases in both the groups is calculated as:

m = ([z1−α/2 √C + z1−β √D]²)/(A − B)²

Where Ai = λipi0, Bi = λipi1, Ci = λipi0 (1 − pi0), Di = λipi1 (1 − pi1); A, B, C, D = sums of all the values of Ai, Bi, Ci, Di,

λi = Gi/G,

Gi = (θi [ki p2i + p1i])/(ki + 1),

pi0 = t1i/(t1i + t2i), pi1 = t1i IRR/(t1i IRR + t2i).

Total sample size required = n = m/G = m/ΣGi.

In order to calculate the sample size for planning a study according to stratified person-time data, we can have the basic calculations from a similar previous study assuming the age distribution and incidence rate ratio to be the same for the population to be tested.

Consider a study where data were collected from participants in different age strata about the intake of aspirin and the occurrence of CRC [Table 1].
Table 1: Age-wise distribution of colorectal cancer cases in users versus nonusers of aspirin

Click here to view

Now, we plan another similar study to compare the occurrence of CRC in users versus nonusers of aspirin to ascertain its protective effect with 80% power and two-tailed significance of 5%. We derive [Table 2] and [Table 3] from [Table 1].
Table 2: Parameters derived from Table 1

Click here to view
Table 3: Parameters derived from Tables 1 and 2

Click here to view

Hence, the expected number of events required will be:

m = (1.96 √[0.199] + 0.84 √[0.217])²/(0.297 − 0.35)² = 571.4; that is, 572 events will be required, to be expected from:

n = m/G = 572/(1.955 × 10⁻²) = 29,258.8, that is, 29,259 subjects.

Sample size estimation for the comparison of survival curves[5]

Under the Cox proportional hazards model, there are various methods used for sample size calculation based on survival curves. To test the hypothesis H0: IRR = 1 versus H1: IRR ≠ 1, where IRR = underlying hazard ratio for the experimental Group (E) versus the control Group (C), with a two-tailed test of significance level α, the number of participants for Group E (n1) and Group C (n2), when k = n1/n2, can be calculated as

n1 = mk/(kpE + pC),

Where m = (1/k) ([kIRR + 1]/[IRR − 1])² (z1−α/2 + z1−β)²,

And, pE, pC are the failure probabilities over time t in Groups E and C, respectively.

Suppose we want to compare the efficacy of a new drug X (Group E) for renal cell carcinoma with the standard-of-care control drug (Group C). The desired end point is progression-free survival. The expected IRR (HR of Group E vs. Group C) is 0.8. What sample size would be required?

We need to base our calculation on the data available from a similar existing study. Let us take the survival probabilities from [Table 4].
Table 4: Hypothetical survival data from a preexisting study of drug X

Click here to view

From [Table 4], we calculate our desired values as in [Table 5]: pC = D1 + D2 + … + D6, pE = E1 + E2 + … + E6, Di = λiAiCi, Ei = (IRR λi) BiCi, Ai = (1 − λ1) (1 − λ2) … (1 − λi−1), Bi = (1 − IRR λ0) (1 − IRR λ1) … (1 − IRR λi−1), Ci = (1 − δ0) (1 − δ1) … (1 − δi−1).
Table 5: The desired parameters calculated from Table 4

Click here to view

Hence, if k = 1, power = 80%, and two-tailed significance = 5%, then,

m = (1.8/0.2)² × (1.96 + 0.84)² = 635 events in both arms.

n1 = n2 = 635/(0.796 + 0.717) = 419.69.

Hence, we will require a minimum of 420 patients in each group to compare the survival curves.
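The final step can be sketched in Python, taking the failure probabilities pE = 0.796 and pC = 0.717 from [Table 5] as given (an illustration, not from the source; rounded z values 1.96 and 0.84 are assumed):

```python
from math import ceil

def n_survival(irr, p_e, p_c, k=1.0, z_alpha=1.96, z_beta=0.84):
    """Cox model: events m, then per-group size, for hazard ratio irr (E vs. C)."""
    m = (1 / k) * ((k * irr + 1) / (irr - 1)) ** 2 * (z_alpha + z_beta) ** 2
    n1 = m * k / (k * p_e + p_c)
    return m, n1

# Drug X example: HR = 0.8, equal allocation (k = 1)
m, n1 = n_survival(irr=0.8, p_e=0.796, p_c=0.717)
print(round(m), ceil(n1))   # events in both arms, patients per group
```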

  Conclusion

Sample size calculation can be done manually using formulae and tables or through various computer-based programs. The more important thing is to analyze and plan the desired clinical and statistical outcomes of the study to determine the required parameters and statistical tests to be fed into the calculation. As we go from simple to complex situations, the formulae and calculations become increasingly complicated. In the next issue, we will focus on the utilization of certain tables and computer software to arrive at a sample size.

Financial support and sponsorship


Conflicts of interest

There are no conflicts of interest.

  Let Us Test What We Have Learned

Q 1: Considering the drop-in and dropout rates in the sample size calculation leads to:

  1. Decrease in the power of study
  2. Increase in the sample size
  3. Shortens the study interval
  4. Both a and b.

Q 2: The best way to deal with carry-over effect in cross-over studies is:

  1. To reduce the drug doses
  2. Increase the length of washout period
  3. To take placebo in the comparator arm
  4. Parallel study design.

Q 3: The correlation coefficient, ρ= −0.9 represents:

  1. A strong inverse correlation
  2. A strong direct correlation
  3. A weak inverse correlation
  4. A weak direct correlation.

Q 4: Which of the following studies require the largest sample size:

  1. Equivalence study
  2. Cross-over study
  3. Superiority study
  4. Noninferiority study.

Q 5: The method used to account for the same individual contributing to multiple observations in a study, is:

  1. Logistic regression
  2. Incidence rate ratio
  3. Clustering correction
  4. Person-time data.

Q 6: Choose the false statement regarding stratified person-time data:

  1. It takes into account the number of individuals participating in the study
  2. It takes into account multiple observations by the same individual
  3. It takes into account the number of years contributed by each individual
  4. It takes into account the confounding factors in the study.

Q 7: If x is an exposure variable and y is an outcome variable, which of the following is correct about regression analysis?

  1. X is independent variable, y is dependent variable
  2. Both x and y are interdependent variables
  3. Both x and y are independent variables
  4. X is dependent variable and y is independent variable.

Q 8: In logistic regression, R² is the proportion of variance of the independent exposure variable explained by the other covariates. If R² = 0.5, what will happen to the sample size as compared to the case with no other covariates?

  1. The sample size will be halved
  2. The sample size will not change
  3. The sample size will be quadrupled
  4. The sample size will be doubled.

Q 9: In regression analysis, if the exposure variable is in kg/m2, the outcome variable:

  1. Has to be in kg/m2
  2. Is a dimensionless entity
  3. Can have any unit
  4. Cannot be in kg/m2.

Q 10: The graph plot in a binomial distribution displays:

  1. Continuous pattern
  2. Discrete pattern
  3. Irregular pattern
  4. Unpredictable pattern.

Answers: 1 (b), 2 (d), 3 (a), 4 (a), 5 (c), 6 (b), 7 (a), 8 (d), 9 (c), 10 (b).

  References

1. Sri Granth: Sri Guru Granth Sahib. Available from: http://www.srigranth.org/servlet/gurbani.gurbani?Action=Page&Param=404. [Last accessed on 2020 Aug 15].
2. Darling HS. Basics of statistics-3: Sample size calculation – (i). Cancer Res Stat Treat 2020;3:317-22.
3. Peacock JL, Peacock PJ. Oxford Handbook of Medical Statistics. New York: Oxford University Press Inc.; 2011.
4. Binomial proportion. Available from: https://www.itl.nist.gov/div898/software/dataplot/refman2/auxillar/binoprop.htm. [Last accessed on 2020 Aug 15].
5. Rosner B. Fundamentals of Biostatistics. 8th ed. Boston, MA: Cengage Learning; 2015.
6. Unistat Statistics Software: Sample Size and Power – Two Correlations. Available from: https://www.unistat.com/guide/sample-size-and-power-two-correlations/. [Last accessed on 2020 Aug 15].
7. Simple Linear Regression. Available from: https://www2.ccrb.cuhk.edu.hk/stat/epistudies/reg1.htm. [Last accessed on 2020 Aug 15].
8. Linear Regression 2 Slopes. Available from: https://www2.ccrb.cuhk.edu.hk/stat/epistudies/reg2.htm. [Last accessed on 2020 Aug 15].
9. Simple Logistic Regression – Handbook of Biological Statistics. Available from: http://www.biostathandbook.com/simplelogistic.html. [Last accessed on 2020 Aug 15].
10. Logistic Regression Sample Size – Real Statistics Using Excel. Available from: http://www.real-statistics.com/logistic-regression/logistic-regression-sample-size/. [Last accessed on 2020 Aug 15].

