Introduction: Patients may have high expectations regarding dental implants based on the source
of their information, which can lead to challenges in clinical communication. This study aims to
evaluate the quality of responses provided by Chat Generative Pre-trained Transformer
(ChatGPT, OpenAI, USA), an artificial intelligence program, to patient questions in the field of
dental implantology. Materials and Methods: This study was designed as a prospective, cross-sectional
study. Frequently asked patient questions about general information on dental
implantology (Part 1) and dental implant brands (Part 2) were posed to the ChatGPT program.
Responses were independently assessed by oral and maxillofacial surgeons (Group 1: n=10),
periodontologists (Group 2: n=10), prosthodontists (Group 3: n=10), and general dentists (Group
4: n=10) using the Global Quality Scale (GQS). Results: A total of 60 questions were evaluated, with
30 questions in each part. Participants in the study were evenly distributed by gender (50%
female, 50% male) with a mean age of 32.6±4.07 years. The mean years of experience were
8.2±3.12 years. There were no significant differences among the groups in mean age, gender
distribution, or years of experience (p>0.05). The overall mean GQS score was 3.87±0.29. Part 1 had a
mean score of 3.9±0.35 and Part 2 a mean score of 3.85±0.29, with no statistically
significant difference between them (p>0.05). Conclusion: The artificial intelligence platform may contribute to
supplementary patient education in the field of dental implantology and aid in the understanding of
treatment procedures. However, the possibility that ChatGPT may exhibit bias regarding dental
implant brands is concerning, as this could affect patient guidance.