Research Paper — Researchia:202602.24066 [Computer Science > Peer Reviewed]

A systematic review and meta-analysis of diagnostic performance comparison between generative AI and physicians

H. Takita

Abstract

While generative artificial intelligence (AI) has shown potential in medical diagnostics, a comprehensive evaluation of its diagnostic performance, including comparison with physicians, has not been extensively explored. We conducted a systematic review and meta-analysis of studies validating generative AI models for diagnostic tasks published between June 2018 and June 2024. Analysis of 83 studies revealed an overall diagnostic accuracy of 52.1%. No significant performance difference was found between AI models and physicians overall (p = 0.10) or non-expert physicians (p = 0.93). However, AI models performed significantly worse than expert physicians (p = 0.007). Several models demonstrated slightly higher performance than non-experts, although the differences were not significant. Generative AI demonstrates promising diagnostic capabilities, with accuracy varying by model. Although it has not yet achieved expert-level reliability, these findings suggest potential for enhancing healthcare delivery and medical education when implemented with an appropriate understanding of its limitations.


Source: Semantic Scholar — npj Digital Medicine (66 citations)
DOI: https://doi.org/10.1038/s41746-025-01543-z
Original link: https://www.semanticscholar.org/paper/b93c3f0763e3e1768b4448aea9f0bb80492eeda9

Submission: 2/24/2026
Subjects: Peer Reviewed; Computer Science

