Is AI Safe for My Office?
Plain-English rules for using AI in professional settings. What to share, what never to share, and how to stay safe.
AI tools like ChatGPT and Claude can genuinely save your team hours of work each week. But there are real privacy and security considerations you need to understand first.
This guide gives you clear, practical rules your entire team can follow. No legal jargon, no 50-page policies. Just what you need to know.
The One Rule That Covers 90% of Situations
If you would not want it on a billboard, do not put it in an AI tool.
When you type something into ChatGPT, Claude, or Gemini on a free or personal account, that information leaves your building. Even with privacy policies in place, treat public AI tools like a public conversation. Would you say this out loud in a coffee shop? If not, do not type it into AI.
What NEVER goes into a public AI tool
- Patient names, dates of birth, Social Security numbers, or any protected health information (PHI)
- Client financial data: account numbers, tax returns, investment details
- Employee personal information: SSNs, salary data, performance reviews
- Proprietary business information: trade secrets, unreleased product details, internal strategies
- Passwords, API keys, access credentials of any kind
- Legal documents under NDA or attorney-client privilege
- Customer payment information: credit card numbers, bank details
What IS safe to use AI for
These tasks are generally safe as long as you de-identify the information first:
- Drafting email templates: generic customer response templates, FAQ answers, newsletter drafts. Tip: remove all identifying information before pasting.
- Meeting summaries: paste de-identified notes to get a clean summary with action items. Tip: replace names with roles, e.g. "the client" instead of "Jane Smith".
- Writing and editing: blog posts, marketing copy, proposals (using generic details). Tip: add the specific client details yourself after AI generates the draft.
- Research and learning: explaining regulations, industry trends, best practices. Tip: always verify facts independently; AI can be wrong.
- Process documentation: writing SOPs, checklists, training materials. Tip: review the output yourself before distributing it.
- Brainstorming and planning: marketing ideas, event planning, project breakdowns. Tip: great for generating options, but you make the decisions.
The De-Identification Test
Before you paste anything into an AI tool, ask yourself: could someone trace this back to a specific person, client, or patient?
If yes, remove or replace the identifying details before you paste. For example, "Jane Smith missed her March appointment and owes $450" becomes "a client missed a recent appointment and has an outstanding balance."
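If someone on your team is comfortable with a short script, the mechanical part of de-identification can be automated before anything is pasted into an AI tool. Here is a minimal sketch using Python's standard re module; the patterns are illustrative assumptions, not a complete scrubber for PHI, and names still need a human pass:

```python
import re

def deidentify(text):
    """Replace common identifiers with generic placeholders.

    Illustrative only: a real PHI/PII scrubber needs a vetted tool.
    Personal names are NOT caught here; replace those by hand
    (e.g. "the client" instead of "Jane Smith").
    """
    # Social Security numbers, e.g. 123-45-6789
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)
    # Email addresses
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)
    # US-style phone numbers, e.g. 555-123-4567
    text = re.sub(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b", "[PHONE]", text)
    return text

note = "Call Jane at 555-123-4567 re: SSN 123-45-6789, jane@acme.com"
print(deidentify(note))
# Jane's name survives the script -- that part stays manual.
```

Even with a script like this, apply the billboard rule to whatever remains: if the de-identified text could still be traced back to a person, do not paste it.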
What about paid / enterprise AI plans?
Most major AI providers offer business or enterprise tiers with stronger privacy protections:
- ChatGPT Team / Enterprise: OpenAI states your data is not used for training. SOC 2 compliant.
- Claude Team / Enterprise: Anthropic states your data is not used for training. Better suited to sensitive industries.
- Gemini for Google Workspace: Integrated with your existing Google tools. Enterprise data protections.
These are significantly better for businesses that handle sensitive data. But the rules above still apply. Even with an enterprise plan, you should have clear guidelines about what your team puts into AI.
Quick Reference for Your Team
Print this or share it in your team chat:
AI Use Guidelines | Quick Reference
SAFE
- Generic email templates
- De-identified meeting notes
- Marketing and content drafts
- Research and learning
- Process documentation
- Brainstorming ideas
NOT SAFE
- Names + personal details
- Financial account data
- Health information
- Passwords or credentials
- Confidential business data
- Anything under NDA
When in doubt, de-identify first. If you cannot de-identify it, do not use AI for it.
Want custom training for your team? Get in touch →