Journal of Endodontics, vol. 51, no. 11, pp. 1675-1684, 2025 (SCI-Expanded, Scopus)
Introduction: This study aims to evaluate and compare the performance of three advanced chatbots, namely ChatGPT-4 Omni (ChatGPT-4o), DeepSeek, and Gemini Advanced, in answering questions related to pulp therapies for immature permanent teeth. The primary outcomes were accuracy, completeness, and readability; secondary outcomes were response time and potential correlations among these parameters.

Methods: A total of 21 questions were developed from clinical resources provided by the American Association of Endodontists, including position statements, clinical considerations, and treatment-options guides, and the chatbots' responses were assessed by three experienced pediatric dentists and three endodontists. Accuracy and completeness scores, as well as response times, were recorded, and readability was evaluated using the Flesch-Kincaid Reading Ease Score, Flesch-Kincaid Grade Level, Gunning Fog Score, SMOG Index, and Coleman-Liau Index.

Results: Significant differences in accuracy (P < .05) and completeness (P < .05) scores were found among the chatbots, with ChatGPT-4o and DeepSeek outperforming Gemini Advanced in both categories. Response times also differed significantly, with Gemini Advanced providing the quickest responses (P < .001). In addition, accuracy and completeness scores were strongly correlated (ρ = .719, P < .001), and response time showed a weak positive correlation with completeness (ρ = .144, P < .05). No significant correlation was found between accuracy and readability (P > .05).

Conclusions: ChatGPT-4o and DeepSeek demonstrated superior accuracy and completeness compared with Gemini Advanced. Regarding readability, DeepSeek scored the highest, while ChatGPT-4o scored the lowest. These findings highlight the importance of considering both the quality and readability of artificial intelligence-driven responses, in addition to response time, in clinical applications.
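For reference, the five readability indices named in Methods have standard published definitions; the formulas below are a sketch assuming the study used the conventional formulations rather than tool-specific variants:

\begin{align}
\text{FRES} &= 206.835 - 1.015\left(\frac{\text{words}}{\text{sentences}}\right) - 84.6\left(\frac{\text{syllables}}{\text{words}}\right) \\
\text{FKGL} &= 0.39\left(\frac{\text{words}}{\text{sentences}}\right) + 11.8\left(\frac{\text{syllables}}{\text{words}}\right) - 15.59 \\
\text{Gunning Fog} &= 0.4\left[\frac{\text{words}}{\text{sentences}} + 100\left(\frac{\text{complex words}}{\text{words}}\right)\right] \\
\text{SMOG} &= 1.0430\sqrt{\text{polysyllables} \times \frac{30}{\text{sentences}}} + 3.1291 \\
\text{CLI} &= 0.0588L - 0.296S - 15.8
\end{align}

where L is the average number of letters per 100 words and S is the average number of sentences per 100 words. Note that a higher Flesch-Kincaid Reading Ease Score indicates easier text, whereas the other four indices approximate a US school grade level, so lower values indicate easier text.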