Medical offices across the Triangle are hearing the same thing everyone else is hearing: AI can save you time. And it can. But healthcare has a constraint that most industries do not — patient privacy is not optional. It is federal law.
This post is not a HIPAA compliance guide. It is practical advice for the office managers, front desk staff, billing teams, and practice administrators who are wondering whether they can actually use tools like ChatGPT or Claude at work without putting the practice at risk.
The short answer: yes, but only if you know where the line is.
The real opportunity is admin work, not clinical decisions
Most of the hype around AI in healthcare focuses on diagnostics, imaging analysis, and clinical decision support. That is not what this post is about. Those applications require FDA-cleared software, clinical validation, and integration with your EHR. They are years away from being practical for most independent practices.
The immediate opportunity is much simpler: reducing the admin burden that is burning out your staff. The average medical office employee spends hours every week writing emails, drafting policy documents, creating patient communication templates, summarizing meeting notes, and answering the same questions over and over.
That is where AI helps right now. Not in the exam room — in the back office.
What never goes into a public AI tool
Before we talk about what you can do, let us be direct about what you cannot. Do not enter any of the following into ChatGPT, Claude, Google Gemini, or any other public AI tool:
Patient names. Dates of birth. Social Security numbers. Medical record numbers. Phone numbers or email addresses tied to a specific patient. Insurance ID numbers. Diagnosis codes tied to an identifiable person. Clinical notes. Lab results. Any combination of information that could identify a specific patient.
This is not about being overly cautious. This is protected health information (PHI) under HIPAA. Entering it into a public AI tool is a potential breach, regardless of what the tool's privacy policy says. Even if the company says they do not train on your data, you have still transmitted PHI to a third party without a Business Associate Agreement (BAA) in place.
If it can be traced back to a patient, it does not go in. Full stop.
The de-identification test
Here is a simple mental check to run before you type anything into an AI tool: if someone read what you just entered, could they figure out which patient you are talking about?
If the answer is yes — or even maybe — do not enter it.
This applies to combinations of information, not just obvious identifiers. A 67-year-old male in Cary with a rare autoimmune condition seen on Tuesday — that might be enough to identify someone, even without a name attached. The more specific the details, the higher the risk.
The safe approach: strip everything down to the generic version before it touches an AI tool. Instead of real details, use placeholders. Instead of actual dates, use "[DATE]." Instead of specific conditions, use general categories like "chronic condition" or "post-surgical follow-up."
Safe use cases your team can start with this week
Here are things that medical office staff are already using AI for without any PHI risk, because no patient information is involved:
Drafting appointment reminder templates. You are not writing a reminder for a specific patient. You are creating a general template: "Dear [Patient Name], this is a reminder of your upcoming appointment on [Date] at [Time]. Please arrive 15 minutes early and bring your insurance card." AI is excellent at generating these in different tones — formal, friendly, brief, detailed.
Writing FAQ responses for your website or patient portal. "What should I bring to my first appointment?" "Do you accept my insurance?" "What is your cancellation policy?" These questions have nothing to do with specific patients. AI can draft clear, professional answers in minutes.
Creating policy and procedure document drafts. Need to update your office's no-show policy? Your new patient intake process? Your employee handbook section on phone etiquette? Give AI the key points you want to cover and let it produce a first draft you can edit.
Summarizing de-identified meeting notes. After a staff meeting, you can summarize the key decisions and action items using AI — as long as no patient names or cases are referenced. "We discussed updating the referral process for cardiology" is fine. "We discussed Mrs. Johnson's referral to Dr. Smith" is not.
Drafting job postings and HR communications. Hiring a new medical assistant? AI can write the job description, interview questions, and onboarding checklist.
Unsafe use cases that seem tempting
Some uses of AI feel productive but cross the line. Here are the ones I see most often when training medical office teams:
Entering patient records or chart notes to get a summary. Even if you are just trying to save time, pasting a patient's chart notes into ChatGPT is transmitting PHI to a third party. Do not do this, even if you think the note is "not that sensitive."
Using AI for diagnosis or clinical decision support. Public AI tools are not FDA-cleared medical devices. They are not trained on your patient population. They hallucinate. A front desk employee should never be using ChatGPT to figure out what a patient's symptoms might mean, and a provider should not be relying on it for clinical decisions without purpose-built, validated tools.
Sharing clinical notes to get documentation help. If your provider needs help with note templates or documentation workflows, the right tool is one that integrates with your EHR and has a BAA in place — not a browser tab with ChatGPT open.
Pasting insurance EOBs (explanations of benefits) or claim details. These contain patient identifiers. Even if you are just trying to understand a denial reason, the patient information on that document makes it PHI.
A practical example: templates vs. specific communications
This is the distinction that clicks for most people in my training sessions. Here is the difference between safe and unsafe in one example:
Safe: "Write a follow-up email template for patients who had a routine wellness visit. The tone should be warm and professional. Include a reminder to schedule their next annual visit and a note about accessing their patient portal for lab results."
Unsafe: "Write a follow-up email for John Smith who came in on March 3rd for his diabetes check-up. His A1C was 7.2 and Dr. Patel wants him to follow up in three months."
The first version produces a reusable template with no PHI. The second version contains a patient name, visit date, diagnosis, lab result, and provider name — all protected information.
The workflow that works: use AI to create the template, then fill in the patient-specific details yourself (or through your EHR's mail merge). AI builds the structure. You add the specifics. Patient data never leaves your system.
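For anyone on your team who is comfortable with a short script, or for your IT support, here is a minimal sketch of that pattern in Python. It is an illustration, not a recommended tool: the template text stands in for something an AI tool drafted, the field names and patient details are fictional, and in most offices this step happens through your EHR's mail merge or a Word merge rather than code. The point it makes is that the AI only ever sees the placeholder version, and the real details are combined with it on your own computer.

```python
from string import Template

# Template drafted with an AI tool. It contains placeholders only, no PHI.
reminder_template = Template(
    "Dear $patient_name,\n\n"
    "This is a reminder of your upcoming appointment on $date at $time "
    "with $provider. Please arrive 15 minutes early and bring your "
    "insurance card and a photo ID.\n\n"
    "If you need to reschedule, please call us at $office_phone.\n"
)

# Patient-specific details come from your own schedule or EHR export and are
# combined with the template locally. They are never sent to the AI tool.
# All values below are fictional, for illustration only.
appointment = {
    "patient_name": "Jane Example",
    "date": "Tuesday, June 10",
    "time": "9:30 AM",
    "provider": "Dr. Example",
    "office_phone": "(919) 555-0100",
}

print(reminder_template.substitute(appointment))
```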
What about enterprise and business tiers?
Tools like ChatGPT, Claude, and Google Gemini all offer business or enterprise tiers with stronger privacy protections. Some key differences from the free or personal versions:
Business tiers typically do not use your inputs to train the model. They offer better data handling policies, admin controls, and audit logs. Some providers will also sign a BAA, which HIPAA requires before any outside vendor can handle PHI on your behalf.
However — and this is important — having a business tier does not automatically make it safe to enter PHI. You need a signed BAA specific to your practice. You need to verify that the tool's data handling meets HIPAA requirements. And you need staff training so people understand what the tool is and is not approved for.
If your practice is considering an enterprise AI tool for clinical documentation or anything involving patient data, that is a conversation for your compliance officer and IT team, not something to set up over lunch.
For the admin use cases I described above — templates, FAQs, policies, de-identified summaries — the standard paid tiers of ChatGPT or Claude are fine because you are not entering PHI in the first place.
Getting your team on the same page
The biggest risk in most practices is not that someone intentionally misuses AI. It is that well-meaning staff members start using these tools on their own without any guidance about what is and is not okay.
A billing coordinator discovers that ChatGPT can help explain denial codes and starts pasting in EOBs. A medical assistant uses it to draft a patient letter and includes the patient's name and diagnosis. An office manager uploads a spreadsheet of patient contact information to help draft a recall campaign.
None of these people are trying to cause a breach. They are trying to be more efficient. The problem is that nobody told them where the line is.
The fix is simple: train your team. A one-hour session covering what AI can and cannot be used for, with real examples from medical office workflows, prevents these mistakes before they happen. It is dramatically cheaper than a breach investigation.
A note for Triangle-area practices
If you are running a medical practice in Raleigh, Durham, Chapel Hill, Cary, or anywhere in the Triangle, you are in an area where AI adoption is moving fast. Your staff are probably already using these tools at home. Some may already be using them at work without telling you.
I work with medical and dental practices across the Triangle to train their admin teams on AI — what is safe, what is not, and how to actually save time without creating risk. These are in-person sessions with your actual team, using examples from your actual workflows. Not a webinar. Not a generic slide deck.
The goal is not to scare people away from AI. It is to give them clear, practical boundaries so they can use it confidently.
Where to start today
If you want to start using AI in your medical office this week, here is a simple plan:
First, pick one admin task that involves no patient information. Appointment reminder templates, FAQ answers for your website, or a policy document draft are all good starting points.
Second, try it with your team. Have two or three staff members use ChatGPT or Claude to draft the same document. Compare the results. Talk about what worked and what did not.
Third, write down your practice's AI policy. It does not have to be long. "We use AI tools for [these tasks]. We never enter patient information into AI tools. All AI-generated content is reviewed by a human before use." That covers most of what you need to get started.
Fourth, get your team trained. Whether you do it yourself or bring someone in, make sure everyone who touches a keyboard understands the rules. One hour of training now prevents months of problems later.
AI is not going away, and the practices that figure out how to use it safely and well are going to have a real advantage — not in clinical care, but in the operational efficiency that makes clinical care possible.