Everyone from your kids to your vet is using AI. But how should you use it in your dental practice? In this episode, Kirk Behrendt brings back Travis Wentworth, cybersecurity expert from Intelligence Quest, to share some best practices that you and your dental team should know when using AI. To learn about the precautions to take to keep your patients and practice safe, listen to Episode 986 of The Best Practices Show!
Learn More About Travis:
Learn More About ACT Dental:
More Helpful Links for a Better Practice & a Better Life:
Main Takeaways:
Quotes:
“Number one is, don't put any HIPAA-compliant information into these [AI] models. They are utilizing your data, and any data that you put into them is that company's in perpetuity. They can use it to train the model to make it better to answer to you, but they can also use it to train the model to make it better to answer anybody else. That data then lives in their infrastructure to some extent. So, that's important to know.” (8:48—9:11) -Travis
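To put that first rule into practice, one simple habit is to scrub anything that looks like protected health information before a draft ever reaches a public chatbot. The Python sketch below is only an illustration of that habit, not a complete PHI filter; the patterns and the sample note are hypothetical, and a real safeguard would also cover names, chart numbers, insurance IDs, and more, ideally with human review.

```python
import re

# Hypothetical, deliberately simple patterns; a real PHI filter needs far more
# (names, MRNs, addresses, insurance IDs, etc.) and ideally human review.
PHI_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                 # Social Security numbers
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),                # dates such as birthdates
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),   # phone numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),         # email addresses
]

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before the text leaves the office."""
    for pattern, placeholder in PHI_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    draft = "Please reword this recall note for Jane, DOB 04/12/1986, phone 785-555-0199."
    print(redact(draft))
    # -> "Please reword this recall note for Jane, DOB [DATE], phone [PHONE]."
```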
“Number two is the security wall. So, one of the things that you may not have noticed when I was describing what I was putting into ChatGPT, for example, to give it context — that's what it's called, giving context to these models — is I put in publicly available information: who the practitioner is at the office, where the office is located. Honestly, our core values are on our website. Those are publicly available. So, what I'm doing is now I'm giving it context as to how I want to use them and what my personal preference is in terms of warmth and things like this. But you'll notice that almost everything that I put into these models and that I'm talking to you about putting in, especially for these chat models and response models like this, is all publicly available information in the context in which I need.” (13:22—14:01) -Travis
“You need to be aware of what you're using [AI] for. So, there are products that do this, and then you have to know which products apply to your situation. If you are working in a medical office, then these have to be HIPAA-compliant models. And they will typically tell you that. Then, you're like, ‘Well, Travis, how do these models become HIPAA-compliant? They're run on hardware and they're keeping your data.’ Well, these models are built. So, what they do is they use Llama, or they use some of these other open-source models. They train them independently on their own stuff and they self-host all of their own hardware. So, they say, ‘This is our hardware. These are our tokens. We don't store this personal information, and we have provable audits to show that this is a HIPAA-compliant infrastructure.’ So, you have to be aware and look for that auditing information. Or maybe they don't publish the auditing information. But look for the notation on their software.” (14:26—15:23) -Travis
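For practices weighing the self-hosted approach Travis describes, the sketch below shows roughly what keeping inference on hardware you control can look like. It assumes an Ollama server running locally on its default port with a Llama model already pulled; the model name, port, and endpoint are assumptions for illustration, and none of this substitutes for the vendor audits and HIPAA notations he says to look for.

```python
import json
import urllib.request

# Assumptions: an Ollama server is running locally on its default port (11434)
# and a Llama model tagged "llama3" has already been pulled. Because the request
# goes to localhost, the prompt and response stay on hardware you control.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the locally hosted model and return its text response."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")
    request = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

if __name__ == "__main__":
    print(ask_local_model("Draft a friendly two-sentence reminder about six-month cleanings."))
```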
“How much are our physical devices listening to your conversation? Then, number two is, what is AI doing with it on the back end? Your Alexa, these are machine learning algorithms. The difference between machine learning and AI is not that much. AI is basically an evolution of this stuff. Now, I'm sure machine learning and neural network PhDs would tell you there's a huge difference. But realistically, these things are learning over time. So, the first part is, how much are these things listening to you? I don't have a great answer because even as cybersecurity experts, what a lot of us will do is — and I have done this experiment as well — you'll take a network sniffing utility, basically, and you'll listen to see, what is this Alexa-enabled device sending when nothing is happening on it? And I will tell you — it's sending stuff. I don't know what it is because this is all data back to Amazon or whatever. But things are being sent and recorded. Cybersecurity professionals have already proven this. So, even though folks will say, ‘Oh, your stuff isn't supposed to be listening to you,’ we all know, colloquially, that when my wife talks about buying new UGG boots or something that three or five minutes later, on my Instagram feed, is UGG boots.” (17:27—18:49) -Travis
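The experiment Travis mentions is reproducible with a basic packet-capture script. The sketch below uses the third-party scapy library to tally how many packets, and how many bytes, a single device sends while it is supposedly idle. The MAC address is a made-up placeholder for your own device, capturing usually requires administrator or root privileges, and the counts only show that traffic exists, not what it contains.

```python
from collections import Counter

from scapy.all import Ether, sniff  # third-party: pip install scapy

# Hypothetical placeholder; replace with the MAC address of the device you want to watch.
DEVICE_MAC = "aa:bb:cc:dd:ee:ff"

stats = Counter()

def tally(packet):
    """Count packets and bytes the target device sends while it is supposedly idle."""
    if packet.haslayer(Ether) and packet[Ether].src.lower() == DEVICE_MAC:
        stats["packets"] += 1
        stats["bytes"] += len(packet)

if __name__ == "__main__":
    # Capture for five minutes on the default interface; usually requires root/admin rights.
    sniff(prn=tally, store=False, timeout=300)
    print(f"Idle traffic from {DEVICE_MAC}: {stats['packets']} packets, {stats['bytes']} bytes.")
```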
“You asked a question that I haven't thought a lot about, which is not common. So, I'll give you credit for this, which is, utilizing Alexa or some of these enabled utilities in operatories, for example. So, it's not sending that request to an AI model. Especially with these simple requests, those are a little bit more like machine learning. Basically, they're taking a request type and saying, ‘Oh, HDMI input. Change HDMI input.’ It’s a request and response, as opposed to, ‘What do you think of this synthesizing information?’ Those are two different things. So, it sounds like that's an acceptable application to say, ‘Hey, change an input in an operatory.’ One, I think it's more sanitary. You're not touching anything or doing anything. It's nice, and it improves the workflow. I don't think of it as, that utility is listening to and uploading personal information or HIPAA-compliant info. Now, I could be wrong on this. I don't have a lot of data to support my thoughts on exactly what is being recorded. But I will tell you, those larger companies are concerned with data breaches themselves. So, they are probably not aggregating PII data and HIPAA-compliant data, if it's sitting in an operatory somewhere, as a function of their own security protocols. What that's called is scrubbing data. So, they will scrub that data and filter that data for the most important things to them, which is like advertising info or cues at the start, ‘Alexa, start,’ and then you go do a thing.” (18:54—20:29) -Travis
“You talked about your ENT using [AI]. Our veterinary office uses it. Our dog had a bunch of vet appointments over 2025, and they were always using it in there. So, the one thing I want to emphasize — and you've noticed I haven't said anything about not using AI-enabled products. It is a wonderful tool, and it is continuing to improve. So, you asked where [AI is] going. It is going to be used. We are going to continue to use this. To be frank, your ENT — this is a benefit to you. You're like, ‘Oh, that was a great summary. Even I didn't remember what you talked about and what questions I asked.’ Do you ever get out of a dental appointment or an ENT, and you're like, ‘I know I asked about that, and it is important to me, but I don't remember the response. There was so much going on’? So, this is really valuable.” (21:25—22:08) -Travis
“Go out to ChatGPT and say, ‘Show me the adoption curves for different technologies,’ and list the phone, the TV, the cell phone, the internet, and then AI. What happens is as this technology becomes easier and easier to adopt, the curves become more and more exponential. So, people adopt them immediately. Like, imagine how many people are using ChatGPT right now, and they didn't know what it was 24 months ago. Imagine your grandfather — me and you, Kirk, who have older grandparents. Imagine them adopting the smartphone 24 months after the smartphone became publicly available. It would have been insane. So, be aware that the adoption curve is very steep in terms of who is adopting it. Everyone is adopting it. Which means that if you don't, you're going to be behind.” (22:59—23:46) -Travis
“You're like, ‘I don't know if I need [AI]. Does it really help with productivity?’ Number one, it does. It takes the mental burden for the mundane. That's one of the best things that AI is doing right now for us. It takes the mental burden of the mundane off of our plate. If you are a high producer and a high achiever — the dentists that are in the ACT program fall into this category. You and I fall into this category — if I can take the burden from those mundane mental tasks off of my plate, that leaves more time for me to be productive in the spaces that I am most valuable in.” (23:48—24:20) -Travis
“You should have . . . policies in place for use [of AI]. So, how people are utilizing that infrastructure. And you're like, ‘Well, we could have some tech policies that say you can't go out to certain websites,’ and this and that. But people are going to people. They're going to go around these things. They're going to use them on their phone. They're going to do X, Y, and Z. So, my first thing is, beware of HIPAA compliance. Don't put passwords in this and that. Okay, that's the people in your infrastructure . . . Make sure that we have policies in place that relate to security and use so that everyone in your infrastructure, specifically your front office team and the people who are utilizing your PCs for more than just tooth numbers and stuff — you should have policies in place as to how to use this, and you should let them know what the acceptable use cases are and what aren't. And know that this is going to change over time. So, you should set them now and say, ‘Hey, you can use ChatGPT to help you format emails, to give you ideas about giveaways at our office, and to do X, Y, and Z. These are the things that are acceptable, and I listed a couple here,’ and make sure that everyone is on the same page and make sure that they're adhering to it. Policies are only as good as the amount of people that adhere to them. And they need to be clear.” (28:33—29:53) -Travis
“Know that you should start utilizing this if you're not. So, before I go “security” and make you scared, know that this is a useful tool, and it is an enabling tool, and it is going to continue to be adopted in the space, and it's going to make you better. So, if you want to be better every day, just like we want to be better every day here at ACT, then utilizing [AI] is something that is important.” (30:15—30:37) -Travis
“You're going to give more control over your infrastructure to AI over time. You are just going to do it, and you have to continue to be comfortable with that and be careful with it. So, your AI chatbots are going to be able to schedule and put people on your schedule, physically in your chair. It's going to happen eventually. Maybe you don't want it in your office, and maybe you're going to retire in 10 years before that happens. But for all of us who are going to be here for a decade or more, it's going to happen. It's going to process credit cards for you, it's going to accept or decline, it's going to offer payment plans, and going to check credit, and do all these other things. And maybe they will be independent apps, but it's going to start to get more integrated. So, just know that this is coming, and we need to set the baseline.” (31:21—32:02) -Travis
Snippets:
0:00 Introduction.
1:47 Travis’s background.
4:18 The current state of AI.
6:01 AI security basics and best practices.
10:46 Uses for ChatGPT in a dental practice.
12:24 Security practices when using AI.
14:02 Other things to be aware of.
16:35 Are our devices always spying on us?
21:02 The future of AI.
28:16 Have policies in place for AI use.
29:58 Last thoughts.
32:17 More about Intelligence Quest and how to get in touch.
Travis Wentworth Bio:
Dr. Travis Wentworth has been training students in engineering, networking, and cybersecurity for over a decade. He received his PhD in engineering from the University of Kansas in 2015 and completed a postdoctoral research fellowship at Chalmers University of Technology in Gothenburg, Sweden. While there, he was part of the world-renowned research group led by Dr. Louise Olsson and had the privilege of working with the European Union, the Swedish Research Council, Volvo, and Chalmers University.
As a researcher, instructor, and consultant, Travis has presented his technical content in far-reaching corners of the globe, including China, Germany, and Sweden. After returning to the United States in 2017, he narrowed his emphasis to cybersecurity and networking training.
Travis has a diverse background with a proclivity for the acquisition and analysis of public and proprietary data. He is a published author in numerous peer-reviewed journals on computer modeling and catalysis and is well-versed in programming, networking, data acquisition, and cybersecurity.