Year: 2020 | Volume: 3 | Issue: 3 | Page: 545-551
Critical appraisal of a clinical research paper: What one needs to know
Jifmi Jose Manjali, Tejpal Gupta
Department of Radiation Oncology, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, Maharashtra, India
Date of Submission: 25-May-2020
Date of Decision: 11-Jun-2020
Date of Acceptance: 19-Jun-2020
Date of Web Publication: 19-Sep-2020
Correspondence Address: ACTREC, Tata Memorial Centre, Homi Bhabha National Institute, Kharghar, Navi Mumbai - 410 210, Maharashtra, India
Source of Support: None, Conflict of Interest: None
In the present era of evidence-based medicine (EBM), integrating the best research evidence into clinical practice necessitates developing skills to critically evaluate and analyze the scientific literature. Critical appraisal is the process of systematically examining research evidence to assess its validity, results, and relevance to inform clinical decision-making. All components of a clinical research article need to be appraised as per the study design and conduct. As research bias can be introduced at every step in the flow of a study, leading to erroneous conclusions, it is essential that suitable measures are adopted to mitigate bias. Several tools have been developed for the critical appraisal of the scientific literature, including the grading of evidence, to help clinicians in the pursuit of EBM in a systematic manner. In this review, we discuss the broad framework for the critical appraisal of a clinical research paper, along with some of the relevant guidelines and recommendations.
Keywords: Appraisal, bias, clinical study, evidence-based medicine, guidelines, tools
How to cite this article: Manjali JJ, Gupta T. Critical appraisal of a clinical research paper: What one needs to know. Cancer Res Stat Treat 2020;3:545-51.
Introduction
Medical research information is growing and branching day by day. Despite the vastness of the medical literature, it is necessary that, as clinicians, we offer our patients the best treatment as per current knowledge. Integrating the best research evidence with clinical expertise and patient values has led to the concept of evidence-based medicine (EBM). Although this philosophy originated in the middle of the 19th century, it first appeared in its current form in the modern medical literature in 1991. EBM is defined as the conscientious, explicit, and judicious use of the current best evidence in making decisions about the care of an individual patient. The essentials of EBM include generating a clinical question, tracking down the best available evidence, critically evaluating the evidence for validity and clinical usefulness, applying the results to clinical practice, and evaluating the performance of this process. Appropriate application of EBM can be cost-effective and can improve health-care efficiency. Without the continual accumulation of new knowledge, existing dogmas and paradigms quickly become outdated and may prove detrimental to patients. The current growth of the medical literature, with 1.8 million scientific articles published in the year 2012 alone, often makes it difficult for clinicians to keep pace with the vast amount of scientific data, making foraging (alerts to new information) and hunting (finding answers to clinical questions) essential skills for navigating the so-called “jungle” of information. Therefore, health-care professionals must read the medical literature selectively to utilize their limited time effectively and assiduously imbibe new knowledge to improve decision-making for their patients.
To practice EBM in its true sense, a clinician not only needs to devote time to develop the skill of searching the literature effectively, but also needs to learn to evaluate the significance, methodology, outcomes, and transparency of a study. Along with the evaluation and interpretation of a study, a thorough understanding of its methodology is necessary. It is common knowledge that studies with positive results are relatively easy to publish. However, it is the critical appraisal of any research study (even those with negative results) that helps us understand the science better and ask relevant questions in the future using an appropriate study design and endpoints. Therefore, this review is focused on the framework for the critical appraisal of a clinical research paper. In addition, we discuss some of the relevant guidelines and recommendations for the critical appraisal of clinical research papers.
Critical Appraisal
Critical appraisal is the process of systematically examining the research evidence to assess its validity, results, and relevance before using it to inform a decision. It entails the following:
- Balanced assessment of the benefits/strengths and flaws/weaknesses of a study
- Assessment of the research process and results
- Consideration of quantitative and qualitative aspects.
Critical appraisal is performed to assess the following aspects of a study:
- Validity – Is the methodology robust?
- Reliability – Are the results credible?
- Applicability – Do the results have the potential to change the current practice?
Contrary to common belief, critical appraisal is not the negative dismissal of any piece of research or an assessment of the results alone; it is neither based solely on statistical analysis nor a process undertaken only by experts. When performing a critical appraisal of a scientific article, it is essential that we know its basic composition and assess every section meticulously.
This involves taking a generalized look at the details of the article. The journal in which it was published holds special value: a peer-reviewed, indexed journal with a good impact factor adds robustness to the paper. The setting, timeline, and year of publication of the study also need to be noted, as they provide a better understanding of the evolution of thought in that particular subject. Declaration of conflicts of interest by the authors, the role of the funding source (if any), and any potential commercial bias should also be noted.
Components of a Clinical Research Paper
The components of any scientific article or clinical research paper remain largely the same. An article begins with a title, abstract, and keywords, followed by the main text organized as IMRAD (introduction, methods, results, and discussion), and ends with the conclusion and references.
The abstract is a brief summary of the research article that helps readers understand the purpose, methods, and results of the study. Although an abstract provides a brief overview of the study, the full text of the article needs to be read and evaluated for a thorough understanding. There are two types of abstracts, namely structured and unstructured. A structured abstract comprises different sections, typically labeled background/purpose, methods, results, and conclusion, whereas an unstructured abstract is not divided into these sections.
The introduction of a research paper familiarizes the reader with the topic. It refers to the current evidence in the particular subject and the possible lacunae which necessitate the present study. In other words, the introduction puts the study in perspective. The findings of other related studies have to be quoted and referenced, especially their central statements. The introduction also needs to justify the appropriateness of the chosen study.
The methods section highlights the procedures followed while conducting the study. It provides all the data necessary for the study's appraisal and lays out the study design, which is paramount. For clinical research articles, this section should describe the participant or patient/population/problem (P), intervention (I), comparison (C), outcome (O), and study design (S), generally referred to as the PICO(S) framework [Table 1].
Table 1: Participant, Intervention, Comparison, Outcome, and (Study design) framework
Study designs and levels of evidence
Study designs are broadly divided into descriptive and interventional studies, which can be further subdivided as shown in [Figure 1]. Each study design has its own characteristics and should be used in the appropriate setting. The various study designs form the building blocks of evidence. This in turn justifies the need for a hierarchical classification of evidence, referred to as “Levels of Evidence,” which forms the cornerstone of EBM [Table 2]. Most medical journals now mandate that submitted manuscripts comply with the clinical research reporting statements and guidelines applicable to the study design [Table 3] to maintain clarity, transparency, and reproducibility and to ensure comparability across different studies asking the same research question. As per the study design, the appropriate descriptive and inferential statistical analyses should be specified in the statistical plan. For prospective studies, a clear mention of the sample size calculation (depending on the type of study, power, alpha error, meaningful difference, and variance) is mandatory, so as to identify whether the study was adequately powered. The endpoints (primary, secondary, and exploratory, if any) should be mentioned clearly, along with the exact methods used for the measurement of the variables.
Figure 1: Classification of study designs in scientific and clinical research
Table 2: Levels of Evidence (adapted from Oxford Centre for Evidence-Based Medicine)
Table 3: Clinical Research Reporting Statements and Guidelines (according to study design)
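The sample size calculation mentioned above can be illustrated for the simple case of comparing two means. Below is a minimal sketch in Python's standard library using the conventional normal-approximation formula; the function name and the numbers chosen for the effect size, standard deviation, alpha, and power are illustrative, not taken from the article:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Per-group sample size for comparing two means (two-sided test),
    using the normal-approximation formula:
        n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2
    delta: minimum clinically meaningful difference between group means
    sigma: assumed common standard deviation
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    n = 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2
    return ceil(n)  # round up so the stated power is not undershot

# Hypothetical example: detect a 5-point difference assuming SD of 10,
# with a two-sided alpha of 0.05 and 80% power
print(sample_size_per_group(delta=5, sigma=10))  # 63 per group
```

Note how a larger meaningful difference or a smaller variance shrinks the required sample size, which is why these assumptions must be stated explicitly in the statistical plan.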
The statistical framework of any research study is commonly based on testing the null hypothesis, wherein the results are deemed significant by comparing the P values obtained from an experimental dataset to a predefined significance level (0.05 being the most popular choice). By definition, the P value is the probability, under the specified statistical model, of obtaining a statistical summary equal to or more extreme than the one computed from the data, and it can range from 0 to 1. P < 0.05 is conventionally interpreted to mean that the results are unlikely to be due to chance alone. Unfortunately, the P value does not indicate the magnitude of the observed difference, which may also be desirable. An alternative and complementary approach is the use of confidence intervals (CIs); a CI is a range of values, calculated from the observed data, that is likely to contain the true value at a specified probability. This probability is chosen by the investigator and is customarily set at 95% (1 – alpha error of 0.05). CIs can be used to test hypotheses; additionally, they provide information related to the precision, power, sample size, and effect size.
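The complementary relationship between P values and confidence intervals described above can be sketched with a simple z-based calculation (a minimal illustration using Python's standard library; the observed difference and standard error are hypothetical numbers, not data from any study):

```python
from statistics import NormalDist

def z_test_with_ci(diff, se, alpha=0.05):
    """Two-sided z-test for an observed difference `diff` with standard
    error `se`, returning the P value and the (1 - alpha) confidence
    interval around the difference."""
    z = diff / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided P value
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for a 95% CI
    ci = (diff - z_crit * se, diff + z_crit * se)
    return p_value, ci

# Hypothetical example: observed difference of 4.0 with standard error 1.5
p, (lower, upper) = z_test_with_ci(4.0, 1.5)
print(round(p, 4), (round(lower, 2), round(upper, 2)))
```

A 95% CI that excludes zero corresponds to P < 0.05, but unlike the P value alone, the interval also conveys the magnitude and precision of the estimated effect.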
The results section contains the findings of the study, presented clearly and objectively. The results obtained using the descriptive and inferential statistical analyses (as specified in the methods section) should be described. The use of tables and figures, including graphical representations [Table 4], is encouraged to improve clarity; however, duplication of these data in the text should be avoided.
Table 4: Commonly used graphical representations in clinical research papers
The discussion section presents the authors' interpretations of the obtained results. This section includes:
- A comparison of the study results with what is currently known, drawing similarities and differences
- Novel findings of the study that have added to the existing body of knowledge
- Caveats and limitations.
It is imperative that the key relevant references are cited in any research paper in the appropriate format which allows the readers to access the original source of the specified statement or evidence. A brief look at the reference list gives an overview of how well the indexed medical literature was searched for the purpose of writing the manuscript.
After a careful assessment of the various sections of a research article, it is necessary to assess the relevance of the study findings to the present scenario and weigh the potential benefits and drawbacks of applying them to the population. In this context, the integrity of the intervention needs to be noted. This can be verified by assessing factors such as adherence to the specified program, the exposure needed, the quality of delivery, participant responsiveness, and potential contamination. This relates to the feasibility of applying the intervention in the community.
Bias in Clinical Research
Research articles are the media through which science is communicated, and it is necessary that we adhere to the basic principles of transparency and accuracy when communicating our findings. Any systematic trend or deviation from the truth in data collection, analysis, interpretation, or publication is called bias. Bias may lead to erroneous conclusions; hence, all scientists and clinicians must be aware of it and employ all possible measures to mitigate it.
The extent to which a study is free from bias defines its internal validity. Internal validity is distinct from external validity and precision. The external validity of a study refers to its generalizability or applicability (which depends on the purpose of the study), while precision is the extent to which a study is free from random error (which depends on the number of participants). A study is irrelevant without internal validity, even if it is applicable and precise. Bias can be introduced at every step in the flow of a study [Figure 2].
Figure 2: Typical patient flow in a randomized controlled trial. Note the potential for introducing various types of bias during each step of the study
The various types of biases in clinical research include:
- Selection bias: This occurs while recruiting patients. It may lead to differences in the way patients are accepted or rejected for a trial and in the way interventions are assigned to individuals. We need to assess whether the study population is truly representative of the target population. Furthermore, absent or inadequate random sequence generation can result in an over-estimation of treatment effects compared to properly randomized trials. This can be mitigated by randomization, the process of assigning clinical trial participants to treatment groups such that each participant has an equal chance of being assigned to a particular group. This process should be completely random (e.g., tossing a coin, using a computer program, or throwing dice). When the process is not truly random (e.g., randomization by date of birth, odd-even numbers, alternation, registration date, etc.), there is a significant potential for selection bias
- Allocation bias: This is a bias that sets in when the person responsible for the study also allocates the treatment. It is known that inadequate or unclear concealment of allocation can lead to an overestimation of the treatment effects. Adequate allocation concealment helps in mitigating this bias. This can be done by sequentially numbering identical drug containers or through central allocation by a person not involved in study enrollment
- Confounding bias: Confounding factors affect both the dependent and independent variables and can introduce a significant bias through a spurious association. Hence, the baseline characteristics need to be similar in the groups being compared. Known confounders can be managed during the selection process by stratified randomization (in randomized trials) and matching (in observational studies), or during analysis by meta-regression. However, unknown confounders can be minimized only through randomization
- Performance bias: This bias is introduced by knowledge of the intervention allocation on the part of the patient, investigator, or outcome assessor. It results in ascertainment or recall bias (patient), reporting bias (investigator), and detection bias (outcome assessor), all of which can lead to an overestimation of the treatment effects. It can be mitigated by blinding, a process in which the treatment allocation is hidden from the patient, investigator, and/or outcome assessor. However, it has to be noted that blinding may not be practical or possible in all kinds of clinical trials
- Method bias: In clinical trials, it is necessary that the outcomes be assessed and recorded using valid and reliable tools, the lack of which can introduce a method bias
- Attrition bias: This is a bias that is introduced because of the systematic differences between the groups in the loss of participants from the study. It is necessary to describe the completeness of the outcomes including the exclusions (along with the reasons), loss to follow-up, and drop-outs from the analysis
- Other bias: This includes any important concerns about biases not covered in the other domains.
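The randomization procedures described above can be sketched in code. Below is a minimal illustration in Python (not from the article; the arm labels and block size are arbitrary choices): simple randomization gives each participant an independent, equal chance of each arm, while permuted-block randomization additionally keeps the arms balanced throughout accrual:

```python
import random

def simple_randomize(participant_ids, arms=("A", "B"), seed=None):
    """Simple (unrestricted) randomization: each participant has an
    equal, independent chance of being assigned to each arm."""
    rng = random.Random(seed)
    return {pid: rng.choice(arms) for pid in participant_ids}

def block_randomize(n, arms=("A", "B"), block_size=4, seed=None):
    """Permuted-block randomization: within every block of `block_size`
    assignments, the arms appear equally often in a shuffled order,
    keeping group sizes balanced as accrual proceeds."""
    rng = random.Random(seed)
    per_block = block_size // len(arms)
    allocation = []
    while len(allocation) < n:
        block = list(arms) * per_block  # equal representation per block
        rng.shuffle(block)              # random order within the block
        allocation.extend(block)
    return allocation[:n]

# Hypothetical example: allocate 8 participants in blocks of 4
print(block_randomize(8, seed=1))
```

In practice, the generated sequence would be concealed from the enrolling investigator (e.g., via central allocation), since a predictable or visible sequence reintroduces the selection and allocation biases discussed above.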
In recent times, it has become an ethical as well as a regulatory requirement in most countries to register clinical trials prospectively, before the enrollment of the first subject. Registration of a clinical trial is defined as the publication of an internationally agreed upon set of information about the design, conduct, and administration of the trial on a publicly accessible website managed by a registry conforming to international standards. Apart from improving the awareness and visibility of the study, registration ensures transparency in its conduct and reduces publication bias and selective reporting. Some of the common registries are ClinicalTrials.gov (https://clinicaltrials.gov), run by the National Library of Medicine of the National Institutes of Health; the Clinical Trials Registry-India (https://www.ctri.nic.in), run by the Indian Council of Medical Research; and the International Clinical Trials Registry Platform (https://www.who.int/ictrp), run by the World Health Organization.
Tools for critical appraisal
Several tools have been developed to assess the transparency of the scientific research papers and the degree of congruence of the research question with the study in the context of the various sections listed above [Table 5].
Table 5: List of tools used for critical appraisal of scientific articles (based on the type of study)
Bad ethics cannot produce good science. Therefore, all scientific research must follow the ethical principles laid out in the Declaration of Helsinki. For clinical research, it is mandatory that team members be trained in good clinical practice, familiarize themselves with clinical research methodology, and follow the prescribed standard operating procedures. Although the regulatory framework and landscape may vary to a certain extent depending upon the country where the research is conducted, it is the responsibility of the Institutional Review Boards/Institutional Ethics Committees to provide study oversight such that the safety, well-being, and rights of the participants are adequately protected.
Conclusions
Critical appraisal is the systematic examination of the research evidence reported in the scientific articles to assess their validity, reliability, and applicability before using their findings to inform decision-making. It should be considered as the first step to grade the quality of evidence.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
References
Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: What it is and what it isn't. BMJ 1996;312:71-2.
Al-Almaie SM, Al-Baghli NA. Evidence based medicine: An overview. J Family Community Med 2003;10:17-24.
Guyatt GH. Evidence-based medicine. ACP J Club 1991;14:A16.
Masic I, Miokovic M, Muhamedagic B. Evidence based medicine – New approaches and challenges. Acta Inform Med 2008;16:219-25.
Chi Y. Global trends in medical journal publishing. J Korean Med Sci 2013;28:1120-1.
Shaughnessy AF, Slawson DC. Introduction to information mastery. In: Rosser WW, Slawson DC, Shaughnessy AF, editors. Information Mastery: Evidence-Based Family Medicine. 2nd ed. London: BC Decker Hamilton; 2004. p. 1-4.
Gluud LL. Bias in clinical intervention research. Am J Epidemiol 2006;163:493-501.
Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR. Publication bias in clinical research. Lancet 1991;337:867-72.
Hill A, Spittlehouse C. What is critical appraisal? London: Hayward Medical Communications; 2001.
Macinnes A, Lamont T. Critical appraisal of a research paper. Scott Univ Med J 2014;3:10-7.
du Prel JB, Röhrig B, Blettner M. Critical appraisal of scientific articles: Part 1 of a series on evaluation of scientific publications. Dtsch Arztebl Int 2009;106:100-5.
Darling HS. Basics of statistics – 2: Types of clinical studies. Can Res Stat Treat 2020;3:100-9.
Darling HS. Basics of statistics – 3: Sample size calculation. Can Res Stat Treat 2020;3:317-22.
Darling HS. Basics of statistics – 1. Can Res Stat Treat 2019;2:163-8.
Šimundić AM. Bias in research. Biochem Med 2013;23:12-5.
Higgins JP, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, et al. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ 2011;343:d5928.
Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA 1995;273:408-12.
Pannucci CJ, Wilkins EG. Identifying and avoiding bias in research. Plast Reconstr Surg 2010;126:619-25.
Podsakoff PM, MacKenzie SB, Podsakoff NP. Sources of method bias in social science research and recommendations on how to control it. Annu Rev Psychol 2012;63:539-69.