| Year: 2021 | Volume: 4 | Issue: 2 | Page: 256-261 |
Novel artificial intelligence algorithm for automatic detection of COVID-19 abnormalities in computed tomography images
K S S Bharadwaj1, Vivek Pawar1, Vivek Punia1, M L V Apparao1, Abhishek Mahajan2
1 Endimension Technology Private Limited (Incubator Under SINE IIT Mumbai), Mumbai, Maharashtra, India
2 Department of Radiodiagnosis, Tata Memorial Hospital, Homi Bhabha National Institute, Mumbai, Maharashtra, India
Date of Submission: 04-Feb-2021
Date of Decision: 25-May-2021
Date of Acceptance: 14-Jan-2021
Date of Web Publication: 30-Jun-2021
Department of Radiodiagnosis, Tata Memorial Hospital, Mumbai - 400 012, Maharashtra
Source of Support: None, Conflict of Interest: None
Background: Chest computed tomography (CT) is a readily available diagnostic test that can aid in the detection and assessment of the severity of the coronavirus disease 2019 (COVID-19). Given the wide community spread of the disease, it can be difficult for radiologists to differentiate between COVID-19 and non-COVID-19 pneumonia, especially in the oncological setting.
Objective: This study was aimed at developing an artificial intelligence (AI) algorithm that could automatically detect COVID-19-related abnormalities from chest CT images and could serve as a diagnostic tool for COVID-19. In addition, we assessed the performance and accuracy of the algorithm in differentiating COVID-19 from non-COVID-19 lung parenchyma pathologies.
Materials and Methods: A total of 1581 chest CT images of individuals affected with COVID-19, individuals affected with non-COVID-19 pathologies, and healthy individuals were included in this study. All the digital images of COVID-19-positive cases were obtained from web databases available in the public domain. About 60% of the data were used for training and validation of the algorithm, and the remaining 40% were used as a test set. A single-stage deep learning architecture based on the RetinaNet framework was used as the AI model for image classification. The performance of the algorithm was evaluated using various publicly available datasets comprising patients with COVID-19, patients with pneumonia, other lung diseases (underlying malignancies), and healthy individuals without any abnormalities. The specificity, sensitivity, and area under the receiver operating characteristic curve (AUC) were measured to estimate the effectiveness of our method.
Results: The semantic and non-semantic features of the algorithm were analyzed. For the COVID-19 classification network, the sensitivity, specificity, accuracy, and AUC were 0.92 (95% confidence interval [CI]: 0.85–0.97), 0.995 (95% CI: 0.984–1.0), 0.972 (95% CI: 0.952–0.988), and 0.97 (95% CI: 0.945–0.986), respectively. For the non-COVID classification network, the sensitivity, specificity, and accuracy were 0.931 (95% CI: 0.88–0.975), 0.94 (95% CI: 0.90–0.974), and 0.935 (95% CI: 0.90–0.965), respectively.
Conclusion: The AI algorithm developed in our study can detect COVID-19 abnormalities from CT images with high sensitivity and specificity. Our AI algorithm can be used for the early detection and timely management of patients with COVID-19.
Keywords: AI, COVID-19, Coronavirus, CT, Deep learning, Imaging
How to cite this article:
Bharadwaj K S, Pawar V, Punia V, Apparao M L, Mahajan A. Novel artificial intelligence algorithm for automatic detection of COVID-19 abnormalities in computed tomography images. Cancer Res Stat Treat 2021;4:256-61
How to cite this URL:
Bharadwaj K S, Pawar V, Punia V, Apparao M L, Mahajan A. Novel artificial intelligence algorithm for automatic detection of COVID-19 abnormalities in computed tomography images. Cancer Res Stat Treat [serial online] 2021 [cited 2022 Jan 26];4:256-61. Available from: https://www.crstonline.com/text.asp?2021/4/2/256/320146
Introduction
The coronavirus disease 2019 (COVID-19) is a novel infectious disease that has affected more than 153 million people worldwide as of April 5, 2021. The first case of COVID-19 was reported in December 2019 in the city of Wuhan, China, and since then, it has spread to more than 180 countries. On March 11, 2020, the World Health Organization declared the COVID-19 outbreak a pandemic. At the time of writing this article, there are more than 18 million active cases and 3 million deaths due to COVID-19, with a fatality rate of 5.5%. COVID-19 has not only affected the health-care sector but has also impacted the social and economic aspects of people's lives. This emphasizes the need to adopt strategic measures to decrease the burden of COVID-19. Although the role of protective measures has been emphasized in preventing the disease and limiting its spread, a critical way to decrease the burden of COVID-19 would be to improve the detection of the infection at an earlier stage during the course of the disease.
Despite significant advances in the management of COVID-19, it is important to detect the disease at an early stage and isolate the infected individual from the healthy population. The current method of diagnosis of COVID-19 involves confirmation of infection using the real-time reverse transcription polymerase chain reaction (RT-PCR). The turnaround time for RT-PCR results is generally between a few hours and 2 days. The pivotal role of computed tomography (CT) scans in diagnosing, triaging, and managing patients with COVID-19 has been described in the literature. Several studies have reported that RT-PCR testing, though highly specific, is less sensitive. A study from China compared the chest CT and RT-PCR results of patients with COVID-19 at the initial presentation and found that the sensitivity of CT for COVID-19 detection was 98%, while that of RT-PCR was 71%. Another study comparing chest CT imaging with RT-PCR for COVID-19 detection reported an 88% rate of positive results with chest CT imaging as opposed to 59% with RT-PCR. The high sensitivity of CT imaging makes it ideal for the early detection and diagnosis of COVID-19. Although CT is highly sensitive, it is less specific, with several of its imaging features overlapping with those of other lung pathologies. Moreover, studies have shown a correlation between the radiological findings and the severity of COVID-19, further highlighting the importance of CT in detecting early changes in the lung to prevent disease progression.
Recently, artificial intelligence (AI) algorithms, especially deep learning, have demonstrated remarkable progress in medical imaging. In this study, we proposed the use of AI to automatically detect COVID-19 abnormalities from chest CT images, as it can help radiologists detect the early changes in the lungs due to COVID-19. We therefore developed an AI algorithm that can automatically detect COVID-19-related chest abnormalities from CT images and assessed its performance and accuracy in differentiating COVID-19 from non-COVID-19 lung parenchyma pathologies.
Materials and Methods
General study details
The present study was conducted on publicly available, diverse online datasets. No ethics committee approval was required for the study, and the ethical guidelines outlined in the Declaration of Helsinki, the Good Clinical Practice guidelines, and the Indian Council of Medical Research guidelines were followed.
For reproducibility, we developed the algorithm using a set of open-source datasets. We trained RetinaNet, a deep neural network model, on these publicly available datasets, all of which are labeled and open sourced, allowing the effectiveness of our model to be independently estimated.
Along with the classification of the patients as COVID-19-positive or -negative, we added an auxiliary objective to the training loss to improve the performance of our model. The model tried to position a bounding box around the COVID-19-infected region on the CT image, which helped it focus on the right areas when making a prediction, thus improving its performance. This auxiliary objective was used only in the training phase and made the model explainable during the testing phase; it was not used in any of the evaluation metrics.
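As an illustration, the combined training objective described above can be sketched as a classification loss plus an auxiliary box-regression term. The function names, the smooth-L1 form of the box loss, and the equal weighting below are illustrative assumptions, not taken from our implementation.

```python
import numpy as np

def classification_loss(p_pred, y_true):
    # Binary cross-entropy on the predicted COVID-19 probability.
    eps = 1e-7
    p = np.clip(p_pred, eps, 1 - eps)
    return float(-(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

def box_regression_loss(box_pred, box_true):
    # Smooth-L1 (Huber) loss on the predicted bounding-box coordinates.
    d = np.abs(np.asarray(box_pred, dtype=float) - np.asarray(box_true, dtype=float))
    return float(np.where(d < 1.0, 0.5 * d ** 2, d - 0.5).sum())

def total_loss(p_pred, y_true, box_pred=None, box_true=None, box_weight=1.0):
    # The auxiliary box objective is added only during training, and only
    # when a ground-truth lesion box is available for the image.
    loss = classification_loss(p_pred, y_true)
    if box_true is not None:
        loss += box_weight * box_regression_loss(box_pred, box_true)
    return loss
```

At test time only the classification branch contributes to the reported metrics; the box term serves purely as a training-time auxiliary signal.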
We followed the standard protocol used in the evaluation of AI systems. We collected openly available, labeled datasets and combined them to improve the performance of the AI model. We then split the collected data into three parts (training, validation, and test datasets). We used the training and validation sets (60%) to learn the parameters and hyperparameters of the deep neural network and the test set (40%) to validate the effectiveness of the model. We measured the sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) to assess the performance and effectiveness of our model.
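The sensitivity and specificity named above can be computed directly from the model's thresholded binary predictions; this is a minimal sketch (labels of 1 denote COVID-19-positive), not the exact evaluation code used in the study.

```python
def sensitivity_specificity(y_true, y_pred):
    # Sensitivity = TP / (TP + FN): fraction of positives correctly detected.
    # Specificity = TN / (TN + FP): fraction of negatives correctly rejected.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

The AUC, by contrast, is computed from the model's continuous probability scores rather than the thresholded predictions (e.g., with scikit-learn's `roc_auc_score`).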
We collected the data related to the COVID-19-positive cases from online publications, scientific journals, and websites in the public domain. Although we predominantly used data from the American, Italian, and Chinese populations, scattered data from other scientific publications were also used to account for a wide range of geographical and ethnic variabilities. The data were obtained from the Italian Society of Medical and Interventional Radiology and consisted of a large set of COVID-19-positive cases, including the patients' symptoms, prior medical history, and chest radiography and CT images. Similarly, we obtained the COVID-19 CT segmentation dataset, which was manually annotated by radiologists and comprised 100 images, from MedSeg. Care was taken to preserve the quality, original resolution, and size of the images. In addition, data were extracted from various research articles and websites.
Our dataset comprised cases representing various stages and severity of the COVID-19 disease. The data extraction process yielded 517 CT images, corresponding to 393 patients, which were labeled as COVID-19-positive. All the COVID-19-positive images were analyzed for semantic features such as the lesion count (single or multiple), lesion laterality (unilateral or bilateral), lesion zones (all or lower or middle or lower-middle or upper), and lesion distribution (central or peripheral or both), and abnormalities such as ground-glass opacity, consolidation, interlobular septal thickening, reticular thickening, fibrotic streaks, vacuolar sign, air bronchogram, bronchial distortion, subpleural line, pleural thickening, effusion, pleural retraction, crazy paving, mediastinal nodes, lung nodules, cavitation, vessel dilatation, and pneumothorax.
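For illustration, the semantic features listed above can be captured in a simple per-image annotation record; the class and field names below are hypothetical and are shown only to make the labeling scheme concrete.

```python
from dataclasses import dataclass, field

LESION_ZONES = {"all", "lower", "middle", "lower-middle", "upper"}
DISTRIBUTIONS = {"central", "peripheral", "both"}

@dataclass
class CovidCTAnnotation:
    # One record per COVID-19-positive image.
    multiple_lesions: bool  # lesion count: single (False) or multiple (True)
    bilateral: bool         # laterality: unilateral (False) or bilateral (True)
    zone: str               # one of LESION_ZONES
    distribution: str       # one of DISTRIBUTIONS
    abnormalities: list = field(default_factory=list)  # e.g., "ground-glass opacity"

    def __post_init__(self):
        # Reject labels outside the controlled vocabulary.
        if self.zone not in LESION_ZONES:
            raise ValueError(f"unknown lesion zone: {self.zone}")
        if self.distribution not in DISTRIBUTIONS:
            raise ValueError(f"unknown distribution: {self.distribution}")
```

Restricting the zone and distribution labels to a controlled vocabulary keeps the downstream frequency analysis (reported in the Results) consistent across annotators.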
We randomly selected the non-COVID-19 and normal CT images from the scans performed at our tertiary care cancer center. The non-COVID-19 CT images were from patients with pneumonia and other lung disorders, and the normal images were from individuals without any abnormalities. Both the non-COVID-19 and normal images served as negative data points for the AI algorithm. We used a total of 1581 images in this study, of which 517 were COVID-19-positive, 590 were non-COVID-19, and 474 were normal images.
Deep-learning model and training
We used a specific type of deep neural network called a convolutional neural network for our model. The architecture of the network was adapted from the RetinaNet framework, a state-of-the-art network for object detection that has been applied to medical imaging. Additional details of the deep-learning model are provided in the supplementary appendices [Supplementary Appendix 1: Deep Learning Model], [Supplementary Appendix 2: Model Training], [Supplementary Figure S-F1], and [Supplementary Table S-T1].
We trained our model using the training dataset (50%) to learn the model parameters. We tuned the hyperparameters of the model using the validation dataset (10%). Finally, we used the test dataset (40%) to evaluate the performance of our model. Deep-learning-based models are known to be data hungry; hence, for better learning, we used 50% of the available data for training purposes. We used 40% of the available data for testing, as we wanted to evaluate the model's performance on as much data as possible. We were able to reliably estimate the hyperparameters of the model with 10% of the available data. Details of the dataset splitting are provided in the supplementary appendix [Supplementary Table S-T1].
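A minimal sketch of the 50/10/40 split described above is given below; the function name and the fixed random seed are illustrative assumptions.

```python
import random

def split_dataset(items, train_frac=0.50, val_frac=0.10, seed=42):
    # Shuffle once, then carve out training (50%), validation (10%),
    # and test (remaining 40%) subsets.
    rng = random.Random(seed)
    items = list(items)
    rng.shuffle(items)
    n_train = int(len(items) * train_frac)
    n_val = int(len(items) * val_frac)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```

For the 1581 images used in this study, this yields subsets of 790, 158, and 633 images; the exact counts reported in Supplementary Appendix 2 (50.1%/10%/39.9%) depend on how the rounding was done.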
Results
The sensitivity, specificity, and accuracy metrics for the model predictions on the test dataset are given in [Table 1]. The 95% confidence interval (CI) has been used for all the metrics. Our model could detect COVID-19 infection with a sensitivity of 0.92 and specificity of 0.995. Of the 206 COVID-19-positive patients in the test dataset, 189 (91.7%) were classified correctly by our model. For the detection of non-COVID-19 cases, the model sensitivity was 0.93 and specificity was 0.94. Of the 236 non-COVID-19 patients in the test dataset, 220 (93.2%) were classified correctly by our model. Of the 189 normal images, 170 (89.9%) were classified correctly, with a sensitivity of 0.90 and specificity of 0.94. The receiver operating characteristic curve for COVID-19 detection for the test set is shown in [Figure 1]. The corresponding AUC for COVID-19 detection was 0.97 (95% CI: 0.945–0.986). The majority of the cases had multiple lesions (88.9%), which were bilateral (80.6%), located in the lower zone (55%), with a peripheral (48.9%) or peripheral + central (50%) distribution. The major abnormalities seen were ground-glass opacities (97.2%), followed by reticular thickening (55%), consolidation (42.8%), air bronchogram (32.8%), and interlobular septal thickening (31.1%). Our results indicate that our model was able to distinguish COVID-19 from the non-COVID-19 lung pathologies and exploit the COVID-19-specific patterns to correctly identify the COVID-19 patients, thus reducing the rate of false-positive results. In addition to the quantitative performance, we also verified the qualitative performance of the model by inspecting the bounding boxes it predicted on the test set; for the performance to be qualitatively good, the bounding boxes needed to cover the infected regions of the scans.
[Figure 2] shows the correct predictions of our model in cases where COVID-19 was present (bounding boxes over the infected regions) and absent (no bounding box predictions). [Figure 3] shows the incorrect predictions of our model, i.e., the false-positive and false-negative bounding box predictions.
Figure 1: Receiver operating characteristic curve for coronavirus disease 2019 detection using the test dataset
Figure 2: (a) Representative coronavirus disease 2019 images that were correctly classified by the model, with bounding box artificial intelligence predictions superimposed on the input images. (b) Representative noncoronavirus disease 2019 images that were correctly classified by the model, with bounding box artificial intelligence predictions superimposed on the input images. (c) Representative normal images (i, iii) that were correctly classified by the model. Artificial intelligence bounding box outputs (ii, iv) are shown alongside the input image
Figure 3: (a) Representative coronavirus disease 2019 images that were misclassified by the model, with bounding box artificial intelligence predictions superimposed on the input images. (b) Representative noncoronavirus disease 2019 images that were misclassified as suspicious by the model, with bounding box artificial intelligence predictions superimposed on the input images wherever present. (c) Representative normal images that were misclassified by the model, with bounding box artificial intelligence predictions superimposed on the input images wherever present
Discussion
Our COVID-19 detection network demonstrated a sensitivity of 0.92, specificity of 0.995, accuracy of 0.972, and an AUC of 0.97 and could distinguish COVID-19 from other pulmonary pathologies. In addition to predicting class probabilities, our deep-learning AI model could position boxes around the COVID-19-affected areas on the CT images. These regression outputs could be viewed as auxiliary outputs, indicative of the areas that the network was focusing on for the classification. [Figure 2]a shows the representative chest CT images of COVID-19-positive patients that were correctly classified by our AI model. The bounding box regression outputs (shown in red) from the model are superimposed on the input. The bounding box outputs indicate that our model could indeed focus on the abnormal regions such as the ground-glass opacities and consolidation patches. It is important to note that our model could detect both early and late-stage COVID-19. [Figure 2]b shows the representative chest CT images of non-COVID-19 patients that were correctly classified by our AI model. The bounding box regression outputs (shown in red) from the model are superimposed on the input. The bounding box outputs indicate that our model could focus on the abnormal regions. [Figure 2]c shows representative chest CT images of normal individuals that were correctly classified by our AI model. The bounding box outputs are shown alongside the input image. The bounding box outputs are blank, which indicates that the model could not identify any abnormalities and hence classified these images as normal.
[Figure 3]a shows the representative chest CT images of COVID-19-positive patients that were misclassified by our AI model. The bounding box regression outputs (shown in red) from the model are superimposed on the input. The bounding box outputs indicate that our model could identify some abnormalities but classified them as non-COVID-19. [Figure 3]b shows representative chest CT images of non-COVID-19 patients that were misclassified by our AI model. Most of the misclassified non-COVID-19 images were classified as normal. [Figure 3]c shows the representative chest CT images of normal individuals that were misclassified by our AI model. Most of the misclassified normal images were classified as non-COVID-19. There are several possible strategies to correct these errors. One involves using a second-stage model that focuses on classifying the box outputs from the RetinaNet model. Another involves including more diverse non-COVID-19 images in the training dataset so that the model can learn to distinguish the classes better. Our AI model demonstrated a high specificity of 0.995 for the classification of COVID-19-positive patients, which means that normal individuals and non-COVID-19 patients were rarely erroneously diagnosed as COVID-19 positive.
One previous study applied deep learning to differentiate COVID-19 from community-acquired pneumonia based on chest CT findings. That study reported a sensitivity of 90%, specificity of 96%, and an AUC of 0.96 for the detection of COVID-19. As the dataset used in that study was not openly available, we could not evaluate the performance of our AI model on it for comparison.
COVID-19 has been declared a pandemic, with several countries imposing a complete lockdown to curtail its spread. With the ongoing vaccination drive, there is still an enormous burden on the health-care workforce. Our AI algorithms can help health-care personnel to automatically screen patients suspected to be COVID-19 positive. In addition, our model provides output regression boxes that can be used to automatically track the progression/regression of the disease.
Our study has a few limitations. First, we had to use two-dimensional images rather than the full three-dimensional (3D) CT scans, as a public dataset of 3D images that is large enough for training an AI model was not available. Second, we used a total of 1581 images, which had to be divided among the training, validation, and testing datasets. The total number of images could have been increased to include a broad spectrum of other pulmonary pathologies or underlying pulmonary malignant lesions. Third, due to the lack of availability of data related to the severity and stage of COVID-19, we could not validate the accuracy of the model for predicting the same. To build an AI model that can predict the disease severity, a wide range of input parameters, including the demographics, clinical presentation, laboratory parameters, and radiological findings, will be required. An algorithm to predict disease severity generated using CT information alone may not be clinically applicable.
The primary aim of our study was to develop a model that was trained, tested, and validated using a demographically, geographically, and ethnically diverse dataset for the diagnosis of COVID-19 with high accuracy. Validating AI algorithms has been a challenge in developing nations such as ours due to the lack of standardized datasets. Therefore, we aim to further improve the efficiency of our model by deploying the algorithm at various medical institutions in India to measure its real-world performance, validate transfer learning, and improve patient management capabilities.
Conclusion
Our AI model can automatically detect COVID-19 from chest CT images with a high sensitivity and specificity and hence can be used for the early detection of the disease and timely management of the patients. In addition, it may serve as a diagnostic tool to differentiate COVID-19 from non-COVID-19 pathologies.
The authors would like to thank Dr. Rajat Agrawal, MBBS, Department of Radiodiagnosis, Tata Memorial Hospital, Mumbai, Maharashtra, IN 400012. E-mail: [email protected]
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
Supplementary Appendices
Supplementary Appendix 1: Deep Learning Model
Our deep-learning model used RetinaNet, with the ResNet-50 feedforward architecture as the backbone. RetinaNet is a state-of-the-art, single-stage network for object detection. When trained with focal loss, RetinaNet is able to match the speed of the older one-stage detectors while surpassing the accuracy of the two-stage detectors. Focal loss solves the problem of class imbalance in object detection. The feature pyramid network (FPN) was used on top of the ResNet-50 base network to generate a rich, multi-scale feature pyramid from the input image. FPN is useful for detecting abnormalities of varying scales and sizes; our dataset comprised patients with early-, intermediate-, and late-stage infections. As shown in [Supplementary Figure S-F1], RetinaNet has two subnetworks attached to the FPN backbone, which computes convolutional feature maps from the input image. The classification subnet predicts the class probability for each anchor box, and the box regression subnet regresses the offset from each anchor box to the ground truth object box. Our AI model is available at http://220.127.116.11/covidai/. Users can upload a computed tomography (CT) image in the jpg/png/dicom/nii formats. The image is anonymized and uploaded to the cloud, and the AI predictions are displayed to the user.
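The focal loss mentioned above, FL(p_t) = -α_t (1 - p_t)^γ log(p_t), can be sketched for a single prediction as follows. The defaults α = 0.25 and γ = 2 are the values recommended in the original RetinaNet paper, not necessarily those used in our training.

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    # p is the predicted probability of the positive class; y is 0 or 1.
    # Easy, well-classified examples get a small (1 - p_t)^gamma weight,
    # so training focuses on the hard examples.
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(max(p_t, 1e-12))
```

A confidently correct prediction (p = 0.9, y = 1) contributes about 100 times less loss than under plain alpha-weighted cross-entropy, which is what lets a single-stage detector tolerate the extreme foreground/background anchor imbalance.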
Supplementary Appendix 2: Model Training
Our deep-learning model is a ternary classifier that reads an input image and outputs the probability scores for three classes, namely coronavirus disease 2019 (COVID-19), non-COVID-19, and normal. During model prediction, the class with the highest probability is chosen. We used a total of 1581 images, of which 517 represented COVID-19-positive cases, 590 represented non-COVID-19 pathologies, and 474 represented normal CT findings. The training, validation, and testing sets comprised 50.1%, 10%, and 39.9%, respectively, of the complete dataset [Supplementary Table S-T1]. All the images were resized to 512 × 512 pixels. The model weight parameters were optimized using the Adam optimizer with an initial learning rate of 1e-5 and a mini-batch size of 2. A learning rate scheduler was used to dynamically reduce the learning rate by a factor of 0.5 when there was no improvement for 5 epochs. The model was trained for 50 epochs, and the final weights were selected based on the performance on the validation set.
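The learning rate schedule described above (halve the rate after 5 epochs without improvement) can be sketched as follows; this is a simplified stand-in for a standard reduce-on-plateau scheduler, not our exact implementation.

```python
class ReduceLROnPlateau:
    # Halve the learning rate whenever the monitored validation loss has
    # not improved for `patience` consecutive epochs.
    def __init__(self, lr=1e-5, factor=0.5, patience=5):
        self.lr = lr
        self.factor = factor
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        # Call once per epoch with the current validation loss;
        # returns the learning rate to use next.
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience:
                self.lr *= self.factor
                self.bad_epochs = 0
        return self.lr
```

Selecting the final weights from the best validation epoch (rather than the last) pairs naturally with this schedule, since the learning rate only decays once progress has stalled.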
References
Johns Hopkins University. Coronavirus COVID-19 Global Cases by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University; 2020. Available from: https://coronavirus.jhu.edu/map.htm. [Last accessed on 2021 Jun 13].
Johns Hopkins University. COVID-19 Dashboard by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University (JHU). Available from: https://coronavirus.jhu.edu/map.html. [Last accessed on 2020 Apr 07].
Mahajan A. COVID-19 and its socioeconomic impact. Cancer Res Stat Treat 2021;4:12-8. [Full text]
Kulkarni T, Sharma P, Pande P, Agrawal R, Rane S, Mahajan A. COVID-19: A review of protective measures. Cancer Res Stat Treat 2020;3:244-53. [Full text]
Qayyumi B, Sharin F, Singh A, Tuljapurkar V, Chaturvedi P. Management of COVID-19: A brief overview of the various treatment strategies. Cancer Res Stat Treat 2020;3:233-43. [Full text]
Patil N, Lad A, Rajadhyaksha A, Chadha K, Chheda P, Wadhwa V, et al. COVID-19: Experience of a tertiary reference laboratory on the cusp of accurately testing 5500 samples and planning scalability. Cancer Res Stat Treat 2020;3 Suppl S1:138-40.
Pande P, Sharma P, Goyal D, Kulkarni T, Rane S, Mahajan A. COVID-19: A review of the ongoing pandemic. Cancer Res Stat Treat 2020;3:221-32. [Full text]
Mahajan A. Recent updates on imaging in patients with COVID-19. Cancer Res Stat Treat 2020;3:351-2. [Full text]
Mahajan A, Sharma P. COVID-19 and radiologist: Image wisely. Indian J Med Paediatr Oncol 2020;41:121-6. [Full text]
Ahuja A, Mahajan A. Imaging and COVID-19: Preparing the radiologist for the pandemic. Cancer Res Stat Treat 2020;3 Suppl S1:80-5.
Fang Y, Zhang H, Xie J, Lin M, Ying L, Pang P, et al. Sensitivity of chest CT for COVID-19: Comparison to RT-PCR. Radiology 2020;296:E115-7.
Ai T, Yang Z, Hou H, Zhan C, Chen C, Lv W, et al. Correlation of chest CT and RT-PCR testing for coronavirus disease 2019 (COVID-19) in China: A report of 1014 cases. Radiology 2020;296:E32-40.
Sharma PJ, Mahajan A, Rane S, Bhattacharjee A. Assessment of COVID-19 severity using computed tomography imaging: A systematic review and meta-analysis. Cancer Res Stat Treat 2021;4:78-87. [Full text]
Lee JG, Jun S, Cho YW, Lee H, Kim GB, Seo JB, et al. Deep learning in medical imaging: General overview. Korean J Radiol 2017;18:570-84.
Qu J, Yang R, Song L, Kamel IR. Atypical lung feature on chest CT in a lung adenocarcinoma cancer patient infected with COVID-19. Ann Oncol 2020;31:825-6.
Lin C, Ding Y, Xie B, Sun Z, Li X, Chen Z, et al. Asymptomatic novel coronavirus pneumonia patient outside Wuhan: The value of CT images in the course of the disease. Clin Imaging 2020;63:7-9.
Kooraki S, Hosseiny M, Myers L, Gholamrezanezhad A. Coronavirus (COVID-19) outbreak: What the department of radiology should know. J Am Coll Radiol 2020;17:447-51.
Yang W, Cao Q, Qin L, Wang X, Cheng Z, Pan A, et al. Clinical characteristics and imaging manifestations of the 2019 novel coronavirus disease (COVID-19): A multi-center study in Wenzhou city, Zhejiang, China. J Infect 2020;80:388-93.
Xu YH, Dong JH, An WM, Lv XY, Yin XP, Zhang JZ, et al. Clinical and computed tomographic imaging features of novel coronavirus pneumonia caused by SARS-CoV-2. J Infect 2020;80:394-400.
Zhu Y, Gao ZH, Liu YL, Xu DY, Guan TM, Li ZP, et al. Clinical and CT imaging features of 2019 novel coronavirus disease (COVID-19). J Infect 2020;81:147-78.
Liu H, Liu F, Li J, Zhang T, Wang D, Lan W. Clinical and CT imaging features of the COVID-19 pneumonia: Focus on pregnant women and children. J Infect 2020;80:e7-13.
Liu D, Li L, Wu X, Zheng D, Wang J, Yang L, et al. Pregnancy and perinatal outcomes of women with coronavirus disease (COVID-19) pneumonia: A preliminary analysis. AJR Am J Roentgenol 2020;215:127-32.
Cheng Z, Lu Y, Cao Q, Qin L, Pan Z, Yan F, et al. Clinical features and chest CT manifestations of coronavirus disease 2019 (COVID-19) in a single-center study in Shanghai, China. AJR Am J Roentgenol 2020;215:121-6.
Albarello F, Pianura E, Di Stefano F, Cristofaro M, Petrone A, Marchioni L, et al. 2019-novel coronavirus severe adult respiratory distress syndrome in two cases in Italy: An uncommon radiological presentation. Int J Infect Dis 2020;93:192-7.
Hao W, Li M. Clinical diagnostic value of CT imaging in COVID-19 with multiple negative RT-PCR testing. Travel Med Infect Dis 2020;34:101627.
Ling Z, Xu X, Gan Q, Zhang L, Luo L, Tang X, et al. Asymptomatic SARS-CoV-2 infected patients with persistent negative CT findings. Eur J Radiol 2020;126:108956.
Cohen JP, Morrison P, Dao L. COVID-19 image data collection. arXiv preprint arXiv:2003.11597. 2020 [eess.IV].
Lin TY, Goyal P, Girshick R, He K, Dollar P. Focal loss for dense object detection. IEEE Trans Pattern Anal Mach Intell 2020;42:318-27.
He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2016. p. 770-8.
Lin TY, Dollár P, Girshick R, He K, Hariharan B, Belongie S. Feature pyramid networks for object detection. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017. p. 2117-25.
Kingma DP, Ba J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. 2014.
Li L, Qin L, Xu Z, Yin Y, Wang X, Kong B, et al. Using artificial intelligence to detect COVID-19 and community-acquired pneumonia based on pulmonary CT: Evaluation of the diagnostic accuracy. Radiology 2020;296:E65-71.
Mahajan A, Vaidya T, Gupta A, Rane S, Gupta S. Artificial intelligence in healthcare in developing nations: The beginning of a transformative journey. Cancer Res Stat Treat 2019;2:182-9. [Full text]