© 2024 MJH Life Sciences™ and Dental Products Report. All rights reserved.
The dental industry needs to find solutions for transparency, standardization, and ethical data collection for AI to reach its full potential.
It’s no secret that artificial intelligence (AI) permeates all aspects of the dental practice, from diagnostics and clinical approaches to marketing and practice management software and support. Although the concept isn’t foreign, many individuals don’t fully understand how AI works—or the science and technology that make it possible.
Many practitioners question why they need to understand the inner workings of AI. After all, you don’t have to be a mechanic or know what’s under the hood to know a car starts when you turn the key in the ignition. So why is understanding the ins and outs of AI any different?
When it comes to AI, understanding what makes it work matters. Data make it work. AI functions by combining algorithms, iterative processing, and large volumes of data, allowing the software to detect patterns and learn from those data. Ultimately, your AI is only as good as the data it is analyzing.
“AI is very good at doing repetitive tasks, but when it gets hit with something that’s out of the ordinary, it doesn’t know what to do,” says Margaret Scarlett, DMD, chief science and technology officer for Digital Transformation Partners. “If the AI sees data that it hasn’t seen before, it can get confused and will not be able to perform with the outlier data.”
Research backs this up. A 2019 study reported that although many algorithms show incredible diagnostic accuracy with a particular data set, they have worse performance than clinicians on unrelated data sets.1 Because the AI is only as good as the data input, clinicians need to understand what those data are and whether they are relevant to the task at hand or their patient base. And that’s where it becomes complicated.
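The performance drop the study describes can be illustrated with a toy simulation (all numbers here are hypothetical, not drawn from the cited research): a decision threshold tuned on one patient population loses accuracy when the score distribution shifts, as it might with different imaging hardware or a different demographic.

```python
import random

random.seed(0)

def simulate(mean_pos, mean_neg, n=1000):
    # Each sample is (score, true_label); scores drawn from Gaussians.
    data = [(random.gauss(mean_pos, 1.0), 1) for _ in range(n)]
    data += [(random.gauss(mean_neg, 1.0), 0) for _ in range(n)]
    return data

def accuracy(data, threshold):
    # Fraction of samples where "score above threshold" matches the label.
    correct = sum((score > threshold) == bool(label) for score, label in data)
    return correct / len(data)

# "Development" population: diseased and healthy scores are well separated.
dev = simulate(mean_pos=2.0, mean_neg=-2.0)
threshold = 0.0  # chosen on the development set

# "External" population: same threshold, but the score distributions have
# shifted and overlap more, so the fixed threshold misclassifies more often.
external = simulate(mean_pos=1.0, mean_neg=-0.5)

print(accuracy(dev, threshold))       # high on familiar data
print(accuracy(external, threshold))  # noticeably lower on unfamiliar data
```

The point of the sketch is the one the study makes: nothing about the model changed, only the population it was asked to judge.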
How Data Are Measured
The first step to evaluating data is understanding how they are measured, and currently, measurement is not standardized across the board. Language itself is the first barrier the industry is running up against. Some companies publish efficacy statistics for their caries-detection AI—but detection at what level?
For example, a tooth comprises 3 layers: enamel, dentin, and pulp. Dentists talk a lot about dentin but not so much about pulp or enamel. So when data are released about caries detection, they often focus on dentin. If the data being fed into the AI program are focused on cavitation in dentin, the technology isn’t going to properly detect lesions in enamel. This can present problems if dentists don’t understand exactly what is being measured and evaluated by the AI.
“We have to decide what system of measurement we are using to define things,” Dr Scarlett says. “Take the metric system as an analogy. Are we going to use metric or measure in inches? Which system are we going to use? I’m already seeing some problems with nomenclature when it comes to AI capabilities.”
But who decides how the definitions are determined? Currently, no one decides. This leaves a hole in AI efficacy and makes it impossible to compare software accurately. You may be comparing apples to bananas and never know it.
“AI is only programmed to do what it’s programmed to do, and it can’t take into account other factors,” Dr Scarlett says. “It doesn’t take into account real-world data, and unless we have a standard data set that everyone can use, maintained by an independent body or group, it won’t be uniform. That’s what we need, and we don’t have that.”
This standard data set needs to be accompanied by greater transparency about what it entails. In the context of AI-assisted medical devices, the US Food and Drug Administration (FDA) recently defined transparency as the degree to which appropriate information about the device—intended use, development, performance, and logic when available—is clearly communicated to stakeholders.2 This raises the important question of transparency: How is this important information being conveyed, and is it readily available to dentists at all?
Transparency
Without an independent body monitoring the data, samples for AI input are often skewed, unbeknownst to the clinician. Many companies pull images and data from a small group of dentists or practitioners outside the United States or use insurance claims data. This limits the pool of information for the AI to process to a specific demographic, which may not be transparent to a potential user.
“It’s easier and cheaper to get claims data,” Dr Scarlett says. “But claims data [are] skewed data sets, because you’re talking about a certain demographic, [patients who] have a job and access to dental care regularly. If we have only claims data, then we only have a certain demographic, and we’re going to be repeating mistakes of the current data: errors or judgements we’ve made because we’ve seen those patients again and again. And we’re not going to have algorithms that include data from those other populations.”
This makes it critical to maintain human oversight and to include other demographics for data sets. If an AI data set doesn’t include certain populations, dentists need to know. This leaves it to the dentist to determine whether this software will work for them. For example, if there aren’t any Native American patients in a data set but a dentist’s patient base is primarily Native American, it might not be the best technology for that particular practice. But dentists need to have this information, and it needs to be provided transparently.
“As clinicians, we need to be constantly mindful of the information presented to us to determine [whether], when, and which AI application suits our needs best,” says Lea Al Matny, DDS, MS, clinical education specialist at Carestream Dental and oral maxillofacial radiologist for SeeThru Reports. “Dentists can use the published findings to decide the appropriate treatment and help their patients achieve better outcomes. Increased transparency regarding AI data and training should be widely available for clinicians to evaluate processes.”
Currently, transparency could use a lot of work. Although many manufacturers provide resources to determine where data came from, there’s no easy way to dissect and digest them.
“Dentists are busy, and we don’t have time to dive into data all the time,” Dr Scarlett says. “We want things that work for our patients and provide good preventive care, but sometimes scientific things seem like a research project, and understanding and translating this science can be a complicated thing. So we need transparency from the companies.”
How could companies be more transparent? Dr Scarlett would love to see a simple labeling system similar to the one used on food. When you pick up a can of soup, you can easily see the amount of carbohydrates, sodium, or protein, taking the guesswork out of the process.
“I’d like to see this labeling applied to AI,” Dr Scarlett says. “If you had a label like this, something simple to read, you could compare one product [with] another. And labels could have a link to the data to determine where the data came from and when [they were] last updated to give you the full picture of what you’re getting.” Although labels seem like a simple step toward greater transparency, the dental industry still seems unsure of how to measure and regulate transparency.
The Regulatory Take
As it stands, the FDA is stymied by the transparency issue as well. “We need transparency, but we aren’t sure how that can be federally regulated,” Dr Scarlett says. “Even the FDA regulators are grappling with this issue. The guidelines that [are] set up for medical devices aren’t there. So they’re still working on the guardrails and the guidance that we usually have and even what the accepted standards are.”
To determine what needs to be done to develop these guidelines and ensure transparency, the FDA hosted a virtual workshop in October 2021 on the transparency of AI and machine learning–enabled medical devices and its role in enhancing safe and effective use. The purpose of the workshop was to determine methods of achieving transparency for users of AI-enabled medical devices, establish how transparency could improve the efficacy of these devices, and identify the types of information that manufacturers should include on labels or other public-facing information-sharing mechanisms.
“This FDA transparency workshop brought up many questions, [such as] how you notify [individuals] when the software is updated,” Dr Scarlett says. “That should be part of transparency. When you have your Norton program for antivirus, it tells you when it’s updated. But there’s no requirement right now for an update of an AI program, which muddies where the data [are] coming from or how [they’re] refreshed.”
The FDA doesn’t have answers to this yet, but some information about AI-assisted devices seeking FDA approval is available on its website for dentists to read, as with other medical devices. Although not all companies are required to share their data or where they came from, companies that want approval and clearance for a Class II or Class III device must identify exactly where those data came from, which is something the FDA posts publicly on its website.3 Clinicians can also check this FDA database to validate a company’s claims about clearance of its AI software as a medical device.
“The FDA releases publicly available information on approved devices in the form of a summary document that contains the performance data of the evaluation study,” Dr Al Matny says. “However, companies can also receive FDA clearance by providing information that their device is substantially equivalent to another FDA-cleared product.”
The World Health Organization (WHO) also has a say in transparency. It states that transparency should start before the design or deployment of AI technology with publication or documentation of sufficient information to facilitate meaningful conversation on the design of the AI and how it should be implemented. According to the WHO, such information “should continue to be published and documented regularly and in a timely manner after an AI technology is approved for use.”4
Dr Al Matny agrees. “I believe this information should be publicly shared with users so they can make an informed decision on what AI application suits their needs,” she says. “This is also why postmarket surveillance and monitoring of algorithmic performance in multiple sites [are] important to ensure that the algorithms perform well and are helpful to understand unintended outcomes and biases that may go undetected in trials.”
Ethics
It goes without saying that transparency goes hand in hand with ethics. Although the positives of AI are promising, they can’t outweigh medical ethical decision-making. The WHO puts it succinctly, stating that although “new technologies that use artificial intelligence hold great promise to improve diagnosis, treatment, health research, and drug development, and to support governments carrying out public health functions, including surveillance and outbreak response, such technologies must put ethics and human rights at the heart of its design, deployment, and use.”4
“The primary issue is balancing the ethics and human side of the technology with the speed and efficiency of AI,” Dr Scarlett says. “We have to balance the tension between those pushes and pulls. In addition, transparency is critical so that the [individuals] who are making decisions in dental practices on whether to buy AI know how the company collected the data, where they collected the data, and what populations are in [those] data. And if I can’t easily access [those data] and compare one product [with] another, it becomes very difficult to make the right decisions. And I do think it’s an ethical concern.”
Although approximately 100 sets of ethical guidelines exist for the use of AI across differing industries, none exists specifically for dentistry, and little investigation has been done into AI issues that could arise in a practice setting. Although ethical concerns have been acknowledged in the industry, a recent study that evaluated the proportion of publications addressing AI-related ethical issues found no increasing interest in this topic in dentistry. The study authors wrote this “confirm[s] the growing presence of AI in dentistry and highlights a current lack of information on the ethical challenges surrounding its use. In addition, the scarcity of studies sharing their code could prevent future replications.”5
This opens the door for an unknown number of risks regarding not only lack of transparency but also ethical issues about patient confidentiality and how data are collected. “There are ethical issues around informed consent. Companies should get that before they use data from a [patient],” Dr Scarlett says. “There should be some way to deidentify that [patient] with that particular x-ray, or the patient should have some way of saying, ‘Yes, I’d like to be included in the study’ or ‘No, I wouldn’t want to be in the study.’ And I have some concerns about that and how that’s being managed.”
Other ethical concerns arise around the sensitivity and specificity of AI for diagnosing dental caries, because both metrics are low compared with those accepted in other scientific fields. Is AI accurate enough to be relied upon for quality dental patient care, or is AI positioning itself to be a crutch that creates a reliance on the technology and reduces clinician investment or attention?
“[Results from] studies show that dentists make mistakes [approximately] half the time,” Dr Scarlett says. “The machines are maybe at 67% or 70% [accuracy], but is that good enough? In other parts of science, such as HIV testing, we say it needs to be 90%, 95% or even 99% accurate for an HIV test [result] in sensitivity and specificity. We should be using the same data points we use in other scientific publications, [such as] sensitivity, specificity, accuracy, and the number of [patients] in your data set. That’s important.”
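The metrics Dr Scarlett cites have standard definitions, and a minimal sketch makes them concrete. The confusion-matrix counts below are hypothetical (the 67% figure echoes her quote; the rest are invented for illustration):

```python
# Hypothetical counts from a caries-detection evaluation:
# tp/fn split the truly carious teeth; tn/fp split the healthy teeth.
tp, fn = 67, 33   # of 100 carious teeth, the AI flagged 67 and missed 33
tn, fp = 85, 15   # of 100 healthy teeth, the AI cleared 85 and flagged 15

sensitivity = tp / (tp + fn)                 # true-positive rate: 0.67
specificity = tn / (tn + fp)                 # true-negative rate: 0.85
accuracy = (tp + tn) / (tp + fn + tn + fp)   # overall: 0.76

print(f"sensitivity={sensitivity:.2f} "
      f"specificity={specificity:.2f} accuracy={accuracy:.2f}")
```

Reporting all three, along with the number of patients behind the counts, is exactly the kind of disclosure Dr Scarlett argues dental AI should borrow from other scientific publications.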
To the point about accuracy and what constitutes an AI success, dentistry can take a lesson from medicine, especially the Epic Sepsis Model (ESM) and the importance of external validation. ESM was an AI-based prediction tool developed to generate automated alerts when a patient was developing sepsis.
Although study results were positive, once implemented in the real world, ESM failed to detect sepsis, a life-threatening condition, 67% of the time. According to an editorial written by authors affiliated with the University of California, Kaiser Permanente, and JAMA Internal Medicine, this monumental failure was due in part to lack of research into real-world scenarios. “The increase and growth in deployment of proprietary models has led to an underbelly of confidential, non–peer-reviewed model performance documents that may not accurately reflect real-world model performance,” the authors said, emphasizing the need for external validation to ensure high-level performance of potentially life-saving AI.6
Solutions for the Future
As AI technology continues to develop, the dental industry needs to consider transparency, standardization, and ethical data collection for AI to be successful. “It’d be great and beneficial for care if all companies came together and shared data, but it might be best to develop some sort of independent group that has the key themselves and has data that would allow different programs to compare [with] one another,” Dr Scarlett says. “Because unless we get the transparency issues correct and notifications correct, we’re going to continue to have problems down the line.”
Solutions such as labeling and an independent body to collect data sets are a great step. The industry also needs to establish standards for areas such as data collection and sharing. When it comes to data collection in particular, the industry needs to evaluate AI across a number of demographics to get real-world data that accurately encompass all populations. These data should span not only race and location but also conditions such as underlying medical issues. These changes could allow AI to become a diagnostic tool with endless benefits for patient care.
“I heard some AI companies and dentists saying that AI is going to help dental public health,” Dr Scarlett says. “But what needs to happen is [that] oral health needs to be made a public health priority in itself. And if AI helps us to reach this goal and deliver more and better care to more [patients], then it’s done its job.”