Article

Artificial Intelligence-Based Detection of Pneumonia in Chest Radiographs

1
Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Augsburg, Stenglinstraße 2, 86156 Augsburg, Germany
2
Department of Gastroenterology, University Hospital Augsburg, Stenglinstraße 2, 86156 Augsburg, Germany
3
Department of Internal Medicine IV, University Hospital Augsburg, Stenglinstraße 2, 86156 Augsburg, Germany
4
Emergency Department, University Hospital Augsburg, Stenglinstraße 2, 86156 Augsburg, Germany
*
Author to whom correspondence should be addressed.
Submission received: 28 May 2022 / Revised: 10 June 2022 / Accepted: 12 June 2022 / Published: 14 June 2022
(This article belongs to the Special Issue Artificial Intelligence in Clinical Medical Imaging Analysis)

Abstract

Artificial intelligence is gaining increasing relevance in the field of radiology. This study retrospectively evaluates how well a commercially available deep learning algorithm can detect pneumonia in chest radiographs (CRs) in emergency departments. The chest radiographs of 948 patients with dyspnea, acquired between 3 February and 8 May 2020 and between 15 October and 15 December 2020, were used. A deep learning algorithm was used to identify opacifications associated with pneumonia, and its performance was evaluated using ROC analysis, sensitivity, specificity, PPV and NPV. Two radiologists assessed all enrolled images for pulmonary infection patterns as the reference standard. If consolidations or opacifications were present, the radiologists classified the pulmonary findings with regard to a possible COVID-19 infection because of the ongoing pandemic. The deep learning algorithm reached an AUROC value of 0.923 when detecting pneumonia in chest radiographs, with a sensitivity of 95.4%, specificity of 66.0%, PPV of 80.2% and NPV of 90.8%. The detection of COVID-19 pneumonia in CRs by radiologists was achieved with a sensitivity of 50.6% and a specificity of 73%. The deep learning algorithm proved to be an excellent tool for detecting pneumonia in chest radiographs. Thus, the assessment of suspicious chest radiographs can be purposefully supported, shortening the turnaround time for reporting relevant findings and aiding early triage.

1. Introduction

The AWMF (Association of the Scientific Medical Societies in Germany) recommends chest radiographs (CRs) for patients clinically suspected of community-acquired or hospital-acquired pneumonia [1]. As a result, patients presenting to the emergency department with dyspnea or other respiratory symptoms suggestive of pneumonia usually receive a CR. CRs have the advantages of lower radiation exposure, faster acquisition and better equipment portability compared with other imaging modalities, e.g., computed tomography (CT) [2,3]. This diagnostic examination can provide supplemental and timely information regarding a patient’s cardiopulmonary condition and probable changes (acute and chronic) from COVID-19 infection [4,5]. Additionally, the ongoing COVID-19 pandemic has been challenging healthcare systems all around the world since December 2019. Accordingly, the number of CRs performed is increasing. In the context of high levels of infection and an increasing number of variants of concern, the early detection and isolation of patients is very important. This especially challenges radiology departments. Studies have shown that with faster reporting of pneumonia in CRs, the median length of hospital stays is significantly shorter, the likelihood of receiving appropriate therapy is higher, and the probability of infectious spread is lower [6,7].
However, the interpretation of CR examinations is variable and examiner-dependent [8,9]. To increase the sensitivity and specificity of detecting pneumonia patterns in CRs, deep learning (DL) algorithms are becoming more prevalent. Prior studies have shown that the use of artificial intelligence (AI) significantly improves the detection of pneumonia in CRs [10,11,12,13,14,15,16,17,18,19]. However, the number of relevant studies is comparatively low [11,16].
Given the large number of examinations, AI-assisted reporting can highlight CRs with abnormalities, helping to prioritize reporting by radiologists. Further, AI can be of assistance where CRs are initially evaluated by clinicians outside regular operating hours. In this situation, a well-functioning evaluation of CRs by AI can significantly support clinicians’ decision making.
This retrospective study evaluates the accuracy of a commercially available DL algorithm in detecting any kind of pneumonia in CR, aiming to prove its reliability and robustness in a large real-world patient collective during the COVID-19 pandemic in Germany. In addition, the assumed specific imaging patterns of COVID-19 and non-COVID-19 pneumonias are contrasted according to an evaluation performed by experienced radiologists and evaluated for their predictive value.

2. Materials and Methods

This retrospective study was approved by the institutional review board (“Beratungskommission für klinische Forschung—BKF”, Augsburg; ID: BKF2020-28). All data were fully anonymized. The ethics committee waived the requirement for informed consent.

2.1. Patient Collective

This study includes a total of 948 patients of legal age who presented with respiratory symptoms raising the suspicion of a pulmonary infection necessitating hospitalization. All patients received a CR on admission, either directly in the emergency department or after referral from external clinics and practices. This study includes data from the first and second waves of the COVID-19 pandemic. First-wave data recording began with the first patient testing positive in our hospital on 3 February 2020 and lasted until 8 May 2020 (n = 321 patients). The second wave started on 15 October 2020 and included all patients until 15 December 2020 (n = 627). Sex, age, confirmed COVID-19 infection by repeated RT-PCR and, if present, data about other pulmonary pathogens were included in the data collection. All other pulmonary pathogens were summarized as non-COVID-19 infections and are not investigated in more detail.

2.2. Imaging Evaluation

All CRs were digitally recorded and conducted in compliance with the applicable regional statutory requirements. For this study, only the first CR on admission was included. All available projections were used. The retrospective assessments of the radiographs by the radiologists and by the AI program were carried out separately and independently of each other. A senior resident and an experienced senior radiologist evaluated the images in consensus. Both assessors were blinded to further patient treatment, pathogen detection status and COVID-19 status. The presence of opacifications or consolidations consistent with pneumonia was determined. Additionally, the imaging pattern was assigned to one of the following categories: bilateral and predominantly peripherally located opacifications/consolidations were classified as typical for a COVID-19 infection; the unilateral presence of predominantly peripherally located opacifications/consolidations with no or minimal signs of infection on the contralateral side was rated as almost typical; and opacifications/consolidations limited to one pulmonary lobe, consistent with lobar pneumonia, were rated as non-typical for a COVID-19 infection. All changes that could not be clearly assigned to one of the aforementioned categories were rated as indeterminate. All images without any signs of infection were classified as none. Figure 1 shows examples of the described distribution patterns.
All included CRs were anonymized, exported as DICOM data and retrospectively analyzed by the Lunit INSIGHT CXR3 DL algorithm (https://insight.lunit.io (accessed on 17 December 2020)). Note that this program only analyzes anterior–posterior or posterior–anterior projections. The algorithm identifies ten radio-morphological pathologies: pulmonary nodules, pneumothorax, fibrosis, atelectasis, cardiomegaly, calcification, pleural effusion, pneumoperitoneum, mediastinal widening and consolidations/opacifications. For this study, we focused solely on the “consolidation/opacification” score, which was validated for detecting all types of pneumonia causing opacification and consolidation [20]. The retrospective analysis was based on abnormality scores, which range from 0 to 100 with a threshold of 15 for discriminating normal (<15) from abnormal (≥15) image patterns. The results of the AI-based assessments were compared with the radiologists’ interpretation as the reference method. The AUROC, sensitivity and specificity were evaluated.
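The score-to-label mapping described above can be sketched as follows. This is a minimal illustration of the cut-off logic only; the function name and structure are ours and not part of the vendor's API:

```python
ABNORMALITY_THRESHOLD = 15  # recommended cut-off on the 0-100 abnormality score

def classify_consolidation(score: float) -> str:
    """Map a consolidation/opacification abnormality score to a binary label.

    Scores < 15 are considered normal, scores >= 15 abnormal, as described
    in the Methods section.
    """
    if not 0 <= score <= 100:
        raise ValueError("abnormality scores range from 0 to 100")
    return "abnormal" if score >= ABNORMALITY_THRESHOLD else "normal"
```

A CR flagged as "abnormal" by this rule would then be compared against the radiologists' consensus reading as the reference standard.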

2.3. Laboratory Testing

COVID-19 infections were confirmed by repeated oronasal swabs or bronchoalveolar lavage and analyzed by RT-PCR. Other suspected pulmonary pathogens were not regularly confirmed by laboratory testing and are not further distinguished in this study.

2.4. Statistical Analysis

Statistical analysis was performed using R version 4.0.3 (https://www.r-project.org/ (accessed on 10 February 2021)). Demographic data are shown as medians and corresponding ranges. Categorical parameters are given as total numbers and percentages. The diagnostic performance of the DL algorithm was analyzed using receiver operating characteristic (ROC) analysis and the corresponding area under the ROC curve (AUROC), with the radiologists’ assessment as the reference standard. Sensitivity and specificity as well as positive (PPV) and negative predictive values (NPV) were calculated, including the corresponding 95% confidence intervals (CIs). Categorical variables were compared using the Chi-square test. A p-value ≤ 0.05 was considered statistically significant.
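The article does not state which CI method was used; a normal-approximation (Wald) interval reproduces the intervals reported in the Results. The following helper is an illustrative sketch under that assumption, not the authors' R code:

```python
import math

def wald_ci_percent(successes: int, n: int, z: float = 1.96) -> tuple:
    """Normal-approximation (Wald) 95% CI for a proportion, in percent.

    z = 1.96 corresponds to a two-sided 95% confidence level.
    """
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return round(100 * (p - half), 1), round(100 * (p + half), 1)

# e.g. for a sensitivity of 534/560 this yields (93.6, 97.1),
# matching the interval reported in Section 3.3
```

For small samples or proportions near 0 or 1, a Wilson or Clopper–Pearson interval would be preferable; with n in the hundreds, as here, the Wald interval is a reasonable approximation.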

3. Results

3.1. Patient Collective

A total of 948 patients with a median age of 73 years (range: 18–99 years) were included in this retrospective study (400 female/548 male). A total of 569 patients (60%) tested positive for COVID-19 infection by RT-PCR (237 female/332 male).

3.2. Identification of Pneumonia by Radiologists

Of the 948 CRs performed, radiologists diagnosed consolidations/opacifications consistent with pneumonia in 560 examinations (59.07%); in 388 CRs (40.93%), no radio-morphological signs of pneumonia were found. Because of the ongoing COVID-19 pandemic, consolidations/opacifications consistent with pneumonia, when present, were classified with regard to a possible COVID-19 infection. The pre-defined distribution patterns of opacifications/consolidations differed significantly between patients with and without a confirmed COVID-19 infection (p < 0.0001) (Figure 2).
Of the 569 confirmed COVID-19 cases within the patient collective, 401 (70.5%) exhibited consolidations/opacifications of varying degrees consistent with a viral pneumonia in their CRs. Among these patients, a typical distribution pattern predicted a COVID-19 infection with a sensitivity of 35.9% (CI: 31.2–40.6) and a specificity of 86.8% (CI: 81.5–92.1), with a PPV of 87.3% (CI: 82.8–92.4) and an NPV of 34.9% (CI: 30.2–39.6). If the category almost typical is also counted as predicting a COVID-19 infection, the sensitivity increases to 50.5% (CI: 45.7–55.5), while the specificity decreases to 73.0% (CI: 66.1–79.9), with a PPV of 82.5% (CI: 77.8–87.3) and an NPV of 36.9% (CI: 31.6–42.3).

3.3. AI-Based Diagnostic Performance

The diagnostic accuracy of the DL algorithm in detecting pneumonia proved to be excellent compared to the radiologists’ assessment, with an AUROC value of 0.923 (Figure 3). For the predefined consolidation threshold value of 15, the sensitivity in detecting pneumonia was 95.4% (CI: 93.6–97.1) with a specificity of 66.0% (CI: 61.3–70.7), a PPV of 80.2% (77.2–83.2) and an NPV of 90.8% (87.4–94.2).
The AI-based evaluation was able to detect 534 out of 560 cases (95.4%) with pneumonia. In 132 CRs, the DL algorithm detected opacifications/consolidations, whereas radiologists did not. During a reassessment by both radiologists, the absence of signs of infection was reconfirmed. In most cases, no abnormality could be detected (see Figure 4).
However, in some cases, dystelectasis or large pleural effusions might have been misinterpreted by the algorithm. Figure 5 shows examples of CRs misinterpreted by the DL algorithm. Shifting the recommended threshold from 15 to a higher value did not improve the evaluation. In 26 CRs, the AI detected no opacifications, whereas radiologists identified imaging patterns consistent with pneumonia.
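The reported performance metrics follow directly from the counts above (560 radiologist-positive and 388 radiologist-negative CRs). The following is a simple arithmetic check, not the study's analysis code:

```python
# Confusion-matrix counts taken from the Results section
tp = 534       # pneumonias detected by both the AI and the radiologists
fn = 26        # pneumonias identified by radiologists but missed by the AI
fp = 132       # AI-positive CRs rated as having no signs of infection
tn = 388 - fp  # remaining radiologist-negative CRs -> 256

sensitivity = tp / (tp + fn)  # 534/560, approx. 95.4%
specificity = tn / (tn + fp)  # 256/388, approx. 66.0%
ppv = tp / (tp + fp)          # 534/666, approx. 80.2%
npv = tn / (tn + fn)          # 256/282, approx. 90.8%
```

All four values match the figures reported for the predefined consolidation threshold of 15.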

4. Discussion

In the context of the current pandemic, the role of imaging in rapid and accurate diagnosis has become enormously important. It has been shown several times that different imaging modalities can make a valuable contribution in this regard, whether in the assessment of acute or chronic pulmonary changes [21,22,23]. With this study, we demonstrated that the commercially available DL algorithm used was able to identify consolidations/opacifications associated with pneumonia on plain CRs with a very high sensitivity (95.4%) and, in the context of an automated pre-evaluation, an acceptable specificity of 66%, a PPV of 80.2% and an NPV of 90.8%. The algorithm worked robustly in this large real-world patient collective, regardless of the kind of pneumonia. In addition, with respect to the ongoing COVID-19 pandemic, radiologists were able to identify radio-morphological imaging features consistent with a COVID-19 infection based on CRs, reaching a sensitivity of 50.5% and a specificity of 73%.
Despite the superior accuracy of pulmonary CT, CRs are one of the most important imaging modalities in emergency departments worldwide due to their feasibility. Accelerating the reporting of CRs with relevant abnormalities is highly relevant and can lead to faster treatment and, if necessary, isolation of the patient. In CRs, consolidations and opacifications are assumed to be mainly caused by infectious diseases [24,25]. Yet, the interpretation of CRs, especially with regard to pneumonia, is subject to high variability even among radiologists [8,9]. This issue is even more prevalent where CRs are interpreted by clinicians in the absence of radiologists: studies have shown that clinicians’ interpretations of pneumonic patterns in CRs diverge in relevant ways from those of radiologists [26,27]. Accordingly, the additional evaluation and highlighting of findings in CRs by AI is advisable. Recent studies have shown that, with the help of DL algorithms, pneumonia (including COVID-19 infections) can be detected with high sensitivity in CRs [10,14,18,28]. Tajmir et al. even showed that with the help of AI, the inter-observer variability in the interpretation of imaging can be reduced [29]. Another advantage of AI compared with human-based analysis is the absence of fatigue, which otherwise carries the risk of missing significant abnormalities in CRs [30]. In this respect, a robust DL algorithm supports rapid and reliable reporting. An integrated AI tool could highlight suspicious patterns, increase radiologists’ and clinicians’ awareness of suspicious pulmonary patterns and lead to more objective diagnoses.
In 132 patients, the AI falsely assessed the presence of pneumonia. However, in some of these patients, large pleural effusions were found, which may also require urgent treatment. In the study by Jang et al., false positive interpretations by the DL algorithm were attributed to increased vascular marking, emphysematous changes, interstitial thickening and subsegmental atelectasis [11].
All 26 cases falsely classified by the DL algorithm as negative for signs of pneumonia showed only slight radio-morphological pathologies according to the radiologists. That the radiologists classified these mild changes as pneumonia may be attributable to the high prevalence of COVID-19 infections during the pandemic and the patients’ respiratory symptoms. This represents a human bias to which the AI is not subject. The commercially available product used in this study is the Lunit INSIGHT CXR3. It covers ten radio-morphological patterns: pulmonary nodules, pneumothorax, fibrosis, atelectasis, cardiomegaly, calcification, pleural effusion, pneumoperitoneum, mediastinal widening and consolidations/opacifications. Other studies have shown that this program’s performance in detecting the aforementioned imaging patterns is excellent [11,31,32]. With the recommended cut-off value of 15 for the consolidation score, a high sensitivity (95.4%) and moderate specificity (66.0%) were achieved, which is well suited to an automated preliminary assessment. The applied cut-off value should therefore identify all CRs suspicious for consolidations/opacifications, even in a post-pandemic scenario. The high accuracy of AI-based analyses of opacifications/consolidations in CRs is important during pre-testing for the prioritization of further assessments. In our evaluation, the AI showed an excellent AUROC value of 0.923. This is similar to the AUROC value of 0.921 published by Jang et al., who evaluated the accuracy of AI on 279 COVID-19-positive individuals [11]. We can therefore conclude that the DL algorithm works robustly. Jang et al. reported comparable sensitivity (95.6%) but a significantly higher specificity of 88.7%.
This difference can be explained by the fact that they included only patients with confirmed COVID-19 infections, in contrast to the representative real-world population of all patients with respiratory symptoms used in this study. The Lunit INSIGHT CXR3-based analysis of opacifications/consolidations is therefore not limited to detecting COVID-19 infections and can be applied in real-life scenarios without any prior patient selection.
As the COVID-19 pandemic is still ongoing, we additionally assessed the radio-morphological changes with regard to COVID-19 pneumonia.
The classification of radio-morphological findings into ‘typical’, ‘non-typical’ or ‘indeterminate’ for COVID-19 infection has been widely applied during the COVID-19 pandemic. When unilateral peripheral opacifications/consolidations were added to the ‘typical’ class, COVID-19 infections were identified with a sensitivity of 50.5%, a specificity of 73.0%, a PPV of 82.5% and an NPV of 36.9%. This is important, since unilateral morphological changes have so far been included as a distribution pattern in only a few studies, although they are also described as a typical pattern for COVID-19 infection [33,34].
The sensitivity for correctly detecting COVID-19 infections in CR in our study appears low (50.5%). Cozzi et al. achieved a sensitivity of 89–92.8% in their real-world analysis during the pandemic [35]. However, their study reports a low to moderate specificity of 40.7–66.0%, while we achieved a specificity of 86.8%.
In the ongoing pandemic, the faster detection of pulmonary infections by analysis of CRs may lead to faster isolation of patients and therefore might reduce the risk of spreading the infection. Nevertheless, prospective studies need to follow with a focus on the practicability of AI use in clinical processes and integration into clinical decision making [36]. This is of special relevance as some studies showed ambivalent results when combining AI with clinical processes [28,37]. A beneficial effect on the workflow and turnaround times is not readily assured. However, the combined reporting of images by AI and radiologists is a promising model. The high sensitivity of AI can be complemented by the comparatively high specificities of radiologists. Thus, the best possible assessment of a CR could be achieved.
The main limitation of this analysis is its retrospective character. The achievable time benefit is therefore theoretical and largely dependent on how the software is integrated into the existing infrastructure. However, the AI software used is commercially available and can usually be put into operation within two weeks. By blinding the radiologists to the further treatment of the patients and the results of COVID-19 tests, a realistic scenario for interpreting CRs in the emergency department was simulated. However, in comparison to the AI, radiologists still have the advantage of including more information about the patient in their interpretation, such as blood parameters, symptoms and known comorbidities. Although some programs already have the functionality to take up this information, their use is not yet widespread [38]. This study did not investigate the possible integration of AI into clinical routines, but that integration is a very important question and should be the subject of future studies. Another factor limiting the applicability of the results to other populations is the comparatively high prevalence of pneumonic infections during the pandemic, which could have biased the results. Thus, it would be more accurate to speak of a real-pandemic collective rather than a real-world one.

5. Conclusions

Based on the currently largest-available patient collective assessed during the COVID-19 pandemic, this study demonstrates the high sensitivity (95.4%) and acceptable specificity (66%) of a DL algorithm in detecting pneumonia in CRs. The corresponding AUROC reached an excellent value of 0.923, underlining the algorithm’s robustness, which was not necessarily guaranteed in advance. In addition, based on the evaluation of specific patterns of findings by experienced radiologists, the diagnosis of COVID-19 pneumonia was achieved with a sensitivity of 50.5% and a comparatively high specificity of 73%. The combination of the algorithm’s high sensitivity with the radiologists’ high specificity seems to be of great value. The pre-selection and highlighting of suspicious CRs by AI could hasten the reporting of pulmonary infections and therefore may shorten the turnaround time, aiding faster clinical decision-making. However, the integration of AI into everyday clinical practice remains an open challenge and is still subject to further investigation.

Author Contributions

Conceptualization, J.B., J.A.D., C.R., F.S., T.K. and C.S.-M.; data curation, J.B., J.A.D., C.R., M.K., H.M., M.W., F.S., T.K. and C.S.-M.; formal analysis, J.B., J.A.D., F.S., T.K. and C.S.-M.; investigation, J.B., J.A.D., C.R., M.K., H.M., M.W., F.S., T.K. and C.S.-M.; methodology, J.B., J.A.D., F.S., T.K. and C.S.-M.; project administration, C.S.-M.; resources, J.B., J.A.D., C.R., M.K., H.M., M.W., F.S., T.K. and C.S.-M.; software, J.B., J.A.D., C.R., M.K., H.M., M.W., F.S., T.K. and C.S.-M.; supervision, C.S.-M.; validation, J.B., J.A.D., F.S., T.K. and C.S.-M.; visualization, J.B., J.A.D., F.S. and C.S.-M.; writing—original draft, J.B. and C.S.-M.; writing—review and editing, J.B., J.A.D. and C.S.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board (ID: BKF2020-28).

Informed Consent Statement

This retrospective study was approved by the institutional review board. All data were fully anonymized. The ethics committee waived the requirement for informed consent.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. AWMF: Detail. Available online: https://www.awmf.org/leitlinien/detail/ll/020-020.html (accessed on 19 February 2022).
  2. Rubin, G.D.; Ryerson, C.J.; Haramati, L.B.; Sverzellati, N.; Kanne, J.P.; Raoof, S.; Schluger, N.W.; Volpi, A.; Yim, J.-J.; Martin, I.B.K.; et al. The Role of Chest Imaging in Patient Management during the COVID-19 Pandemic: A Multinational Consensus Statement from the Fleischner Society. Radiology 2020, 296, 172–180.
  3. Sahu, A.K.; Dhar, A.; Aggarwal, B. Radiographic Features of COVID-19 Infection at Presentation and Significance of Chest X-ray: Early Experience from a Super-Specialty Hospital in India. Indian J. Radiol. Imaging 2021, 31, S128–S133.
  4. Myall, K.J.; Mukherjee, B.; Castanheira, A.M.; Lam, J.L.; Benedetti, G.; Mak, S.M.; Preston, R.; Thillai, M.; Dewar, A.; Molyneaux, P.L.; et al. Persistent Post–COVID-19 Interstitial Lung Disease. An Observational Study of Corticosteroid Treatment. Ann. Am. Thorac. Soc. 2021, 18, 799–806.
  5. Baratella, E.; Ruaro, B.; Marrocchio, C.; Starvaggi, N.; Salton, F.; Giudici, F.; Quaia, E.; Confalonieri, M.; Cova, M.A. Interstitial Lung Disease at High Resolution CT after SARS-CoV-2-Related Acute Respiratory Distress Syndrome According to Pulmonary Segmental Anatomy. J. Clin. Med. 2021, 10, 3985.
  6. Bewick, T.; Greenwood, S.; Lim, W.S. The Impact of an Early Chest Radiograph on Outcome in Patients Hospitalised with Community-Acquired Pneumonia. Clin. Med. 2010, 10, 563–567.
  7. Larremore, D.B.; Wilder, B.; Lester, E.; Shehata, S.; Burke, J.M.; Hay, J.A.; Milind, T.; Mina, M.J.; Parker, R. Test Sensitivity Is Secondary to Frequency and Turnaround Time for COVID-19 Surveillance. medRxiv 2020.
  8. Williams, G.J.; Macaskill, P.; Kerr, M.; Fitzgerald, D.A.; Isaacs, D.; Codarini, M.; McCaskill, M.; Prelog, K.; Craig, J.C. Variability and Accuracy in Interpretation of Consolidation on Chest Radiography for Diagnosing Pneumonia in Children under 5 Years of Age. Pediatr. Pulmonol. 2013, 48, 1195–1200.
  9. Hopstaken, R.M.; Witbraad, T.; van Engelshoven, J.M.A.; Dinant, G.J. Inter-Observer Variation in the Interpretation of Chest Radiographs for Pneumonia in Community-Acquired Lower Respiratory Tract Infections. Clin. Radiol. 2004, 59, 743–752.
  10. Fontanellaz, M.; Ebner, L.; Huber, A.; Peters, A.; Löbelenz, L.; Hourscht, C.; Klaus, J.; Munz, J.; Ruder, T.; Drakopoulos, D.; et al. A Deep-Learning Diagnostic Support System for the Detection of COVID-19 Using Chest Radiographs: A Multireader Validation Study. Investig. Radiol. 2020, 56, 348–356.
  11. Jang, S.B.; Lee, S.H.; Lee, D.E.; Park, S.-Y.; Kim, J.K.; Cho, J.W.; Cho, J.; Kim, K.B.; Park, B.; Park, J.; et al. Deep-Learning Algorithms for the Interpretation of Chest Radiographs to Aid in the Triage of COVID-19 Patients: A Multicenter Retrospective Study. PLoS ONE 2020, 15, e0242759.
  12. Murphy, K.; Smits, H.; Knoops, A.J.G.; Korst, M.B.J.M.; Samson, T.; Scholten, E.T.; Schalekamp, S.; Schaefer-Prokop, C.M.; Philipsen, R.H.H.M.; Meijers, A.; et al. COVID-19 on Chest Radiographs: A Multireader Evaluation of an Artificial Intelligence System. Radiology 2020, 296, E166–E172.
  13. Sharma, A.; Rani, S.; Gupta, D. Artificial Intelligence-Based Classification of Chest X-ray Images into COVID-19 and Other Infectious Diseases. Int. J. Biomed. Imaging 2020, 2020, 8889023.
  14. Wehbe, R.M.; Sheng, J.; Dutta, S.; Chai, S.; Dravid, A.; Barutcu, S.; Wu, Y.; Cantrell, D.R.; Xiao, N.; Allen, B.D.; et al. DeepCOVID-XR: An Artificial Intelligence Algorithm to Detect COVID-19 on Chest Radiographs Trained and Tested on a Large US Clinical Dataset. Radiology 2020, 299, 203511.
  15. Zhang, R.; Tie, X.; Qi, Z.; Bevins, N.B.; Zhang, C.; Griner, D.; Song, T.K.; Nadig, J.D.; Schiebler, M.L.; Garrett, J.W.; et al. Diagnosis of COVID-19 Pneumonia Using Chest Radiography: Value of Artificial Intelligence. Radiology 2020, 298, 202944.
  16. Hwang, E.J.; Kim, K.B.; Kim, J.Y.; Lim, J.-K.; Nam, J.G.; Choi, H.; Kim, H.; Yoon, S.H.; Goo, J.M.; Park, C.M. COVID-19 Pneumonia on Chest X-rays: Performance of a Deep Learning-Based Computer-Aided Detection System. PLoS ONE 2021, 16, e0252440.
  17. Rajaraman, S.; Candemir, S.; Kim, I.; Thoma, G.; Antani, S. Visualization and Interpretation of Convolutional Neural Network Predictions in Detecting Pneumonia in Pediatric Chest Radiographs. Appl. Sci. 2018, 8, 1715.
  18. Kim, J.H.; Kim, J.Y.; Kim, G.H.; Kang, D.; Kim, I.J.; Seo, J.; Andrews, J.R.; Park, C.M. Clinical Validation of a Deep Learning Algorithm for Detection of Pneumonia on Chest Radiographs in Emergency Department Patients with Acute Febrile Respiratory Illness. J. Clin. Med. 2020, 9, 1981.
  19. Castiglioni, I.; Ippolito, D.; Interlenghi, M.; Monti, C.B.; Salvatore, C.; Schiaffino, S.; Polidori, A.; Gandola, D.; Messa, C.; Sardanelli, F. Machine Learning Applied on Chest X-ray Can Aid in the Diagnosis of COVID-19: A First Experience from Lombardy, Italy. Eur. Radiol. Exp. 2021, 5, 7.
  20. Hwang, E.J.; Park, S.; Jin, K.-N.; Kim, J.I.; Choi, S.Y.; Lee, J.H.; Goo, J.M.; Aum, J.; Yim, J.-J.; Cohen, J.G.; et al. Development and Validation of a Deep Learning-Based Automated Detection Algorithm for Major Thoracic Diseases on Chest Radiographs. JAMA Netw. Open 2019, 2, e191095.
  21. Brogna, B.; Bignardi, E.; Brogna, C.; Volpe, M.; Lombardi, G.; Rosa, A.; Gagliardi, G.; Capasso, P.F.M.; Gravino, E.; Maio, F.; et al. A Pictorial Review of the Role of Imaging in the Detection, Management, Histopathological Correlations, and Complications of COVID-19 Pneumonia. Diagnostics 2021, 11, 437.
  22. Baratella, E.; Ruaro, B.; Marrocchio, C.; Poillucci, G.; Pigato, C.; Bozzato, A.M.; Salton, F.; Confalonieri, P.; Crimi, F.; Wade, B.; et al. Diagnostic Accuracy of Chest Digital Tomosynthesis in Patients Recovering after COVID-19 Pneumonia. Tomography 2022, 8, 1221–1227.
  23. Martínez Redondo, J.; Comas Rodríguez, C.; Pujol Salud, J.; Crespo Pons, M.; García Serrano, C.; Ortega Bravo, M.; Palacín Peruga, J.M. Higher Accuracy of Lung Ultrasound over Chest X-ray for Early Diagnosis of COVID-19 Pneumonia. Int. J. Environ. Res. Public Health 2021, 18, 3481.
  24. Campbell, H.; Byass, P.; Greenwood, B.M. Acute Lower Respiratory Infections in Gambian Children: Maternal Perception of Illness. Ann. Trop. Paediatr. 1990, 10, 45–51.
  25. Cherian, T.; Simoes, E.; John, T.J.; Steinhoff, M.; John, M. Evaluation of simple clinical signs for the diagnosis of acute lower respiratory tract infection. Lancet 1988, 332, 125–128.
  26. Al aseri, Z. Accuracy of Chest Radiograph Interpretation by Emergency Physicians. Emerg. Radiol. 2008, 16, 111.
  27. Gatt, M.E.; Spectre, G.; Paltiel, O.; Hiller, N.; Stalnikowicz, R. Chest Radiographs in the Emergency Department: Is the Radiologist Really Necessary? Postgrad. Med. J. 2003, 79, 214–217.
  28. Dorr, F.; Chaves, H.; Serra, M.M.; Ramirez, A.; Costa, M.E.; Seia, J.; Cejas, C.; Castro, M.; Eyheremendy, E.; Fernández Slezak, D.; et al. COVID-19 Pneumonia Accurately Detected on Chest Radiographs with Artificial Intelligence. Intell.-Based Med. 2020, 3, 100014.
  29. Tajmir, S.H.; Lee, H.; Shailam, R.; Gale, H.I.; Nguyen, J.C.; Westra, S.J.; Lim, R.; Yune, S.; Gee, M.S.; Do, S. Artificial Intelligence-Assisted Interpretation of Bone Age Radiographs Improves Accuracy and Decreases Variability. Skeletal Radiol. 2019, 48, 275–283.
  30. Taylor-Phillips, S.; Stinton, C. Fatigue in Radiology: A Fertile Area for Future Research. Br. J. Radiol. 2019, 92, 20190043.
  31. Lee, J.H.; Sun, H.Y.; Park, S.; Kim, H.; Hwang, E.J.; Goo, J.M.; Park, C.M. Performance of a Deep Learning Algorithm Compared with Radiologic Interpretation for Lung Cancer Detection on Chest Radiographs in a Health Screening Population. Radiology 2020, 297, 687–696.
  32. Hwang, E.J.; Nam, J.G.; Lim, W.H.; Park, S.J.; Jeong, Y.S.; Kang, J.H.; Hong, E.K.; Kim, T.M.; Goo, J.M.; Park, S.; et al. Deep Learning for Chest Radiograph Diagnosis in the Emergency Department. Radiology 2019, 293, 573–580.
  33. Shi, H.; Han, X.; Jiang, N.; Cao, Y.; Alwalid, O.; Gu, J.; Fan, Y.; Zheng, C. Radiological Findings from 81 Patients with COVID-19 Pneumonia in Wuhan, China: A Descriptive Study. Lancet Infect. Dis. 2020, 20, 425–434.
  34. Chen, N.; Zhou, M.; Dong, X.; Qu, J.; Gong, F.; Han, Y.; Qiu, Y.; Wang, J.; Liu, Y.; Wei, Y.; et al. Epidemiological and Clinical Characteristics of 99 Cases of 2019 Novel Coronavirus Pneumonia in Wuhan, China: A Descriptive Study. Lancet 2020, 395, 507–513.
  35. Cozzi, A.; Schiaffino, S.; Arpaia, F.; Della Pepa, G.; Tritella, S.; Bertolotti, P.; Menicagli, L.; Monaco, C.G.; Carbonaro, L.A.; Spairani, R.; et al. Chest X-ray in the COVID-19 Pandemic: Radiologists’ Real-World Reader Performance. Eur. J. Radiol. 2020, 132, 109272.
  36. Carlile, M.; Hurt, B.; Hsiao, A.; Hogarth, M.; Longhurst, C.A.; Dameff, C. Deployment of Artificial Intelligence for Radiographic Diagnosis of COVID-19 Pneumonia in the Emergency Department. J. Am. Coll. Emerg. Physicians Open 2020, 1, 1459–1464.
  37. Patel, B.N.; Rosenberg, L.; Willcox, G.; Baltaxe, D.; Lyons, M.; Irvin, J.; Rajpurkar, P.; Amrhein, T.; Gupta, R.; Halabi, S.; et al. Human–Machine Partnership with Artificial Intelligence for Chest Radiograph Diagnosis. NPJ Digit. Med. 2019, 2, 111. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  38. Mei, X.; Lee, H.-C.; Diao, K.; Huang, M.; Lin, B.; Liu, C.; Xie, Z.; Ma, Y.; Robson, P.M.; Chung, M.; et al. Artificial Intelligence–Enabled Rapid Diagnosis of Patients with COVID-19. Nat. Med. 2020, 26, 1224–1228. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Chest radiographs with the distinguished distribution patterns regarding the probability of COVID-19 infection. (a) Typical (bilateral, peripheral opacifications), (b) almost typical (unilateral, peripheral opacifications), (c) non-typical (limited to one pulmonary lobe, consistent with lobar pneumonia) and (d) indeterminate (opacifications that could not be clearly classified as typical, almost typical, or non-typical).
Figure 2. Different distribution patterns in patients with and without COVID-19. The distribution patterns of opacifications and/or consolidations in chest radiographs differed significantly between patients with confirmed and ruled-out COVID-19 infection.
Figure 3. Diagnostic accuracy of the deep learning algorithm for detecting pneumonia. Opacifications and/or consolidations associated with pneumonia were detected with an AUROC of 0.923.
Figure 4. Performance of the artificial intelligence in detecting pneumonia in chest radiographs, with the radiologists’ assessment as the reference standard.
Figure 5. False-positive assignments by the AI. Examples of chest radiographs with false-positive assignments by the artificial intelligence, possibly caused by large pleural effusions (a,b) or a small dystelectasis in the right lower field (c).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Becker, J.; Decker, J.A.; Römmele, C.; Kahn, M.; Messmann, H.; Wehler, M.; Schwarz, F.; Kroencke, T.; Scheurig-Muenkler, C. Artificial Intelligence-Based Detection of Pneumonia in Chest Radiographs. Diagnostics 2022, 12, 1465. https://doi.org/10.3390/diagnostics12061465
