Every school needs an AI policy right now. That much is clear. But writing one that actually works (one that teachers understand, students follow, and administrators can enforce) is harder than it looks. Most schools either rush out something vague ("use AI responsibly") or go the other direction and ban it entirely. Neither approach holds up.
I have worked with schools and districts across North Carolina on this exact problem. What follows is a practical, section-by-section guide to building an AI policy that fits your school. It is not a theoretical exercise; it is the same framework I walk administrators through in workshops.
Why most school AI policies fail
The majority of AI policies I see from schools fall into one of two traps. The first is the vague policy. It says something like "students should use AI ethically and responsibly" without defining what that means. When a teacher catches a student submitting ChatGPT output as their own essay, what does "responsibly" mean? Nobody knows, and the policy offers no guidance.
The second trap is the blanket ban. The policy says AI tools are prohibited, full stop. This feels safe until you realize that students are using AI anyway: on their phones, at home, and through tools embedded in the Google and Microsoft products they already use for school. A ban you cannot enforce is worse than no policy at all because it teaches students that rules are performative.
The goal is something in between: a policy that is specific enough to be actionable, flexible enough to survive the next six months of AI development, and realistic enough that people actually follow it.
Start with the 4D Framework
Before getting into the specific sections your policy needs, it helps to have a mental model for how students should think about AI use. The 4D Framework, developed through work at Anthropic and Ringling College, gives both teachers and students a shared language for evaluating when and how to use AI tools.
The four Ds are Delegation, Description, Discernment, and Diligence. Delegation asks: what am I handing off to the AI, and should I be doing this myself? Description asks: can I clearly articulate what I want the AI to do? Discernment asks: can I evaluate whether the AI output is good, accurate, and appropriate? And Diligence asks: am I putting in the effort to verify, revise, and improve what the AI gives me?
This framework is useful because it shifts the conversation from "is AI use cheating?" to "is this a thoughtful use of AI?" That distinction matters. A student who uses ChatGPT to brainstorm essay topics and then writes the essay themselves has used AI thoughtfully. A student who pastes the prompt into ChatGPT and submits whatever comes back has not. The 4D Framework helps students and teachers draw that line without needing a 40-page rulebook.
Section 1: Purpose statement
Your policy should open with a clear statement about why the school is addressing AI. This is not boilerplate; it sets the tone for everything that follows. A good purpose statement acknowledges that AI tools are already part of how people work and learn, that the school wants to prepare students to use them well, and that the policy exists to support learning rather than to punish.
Here is an example: "AI tools are increasingly part of professional and academic life. This policy exists to help our students develop the skills to use AI thoughtfully and honestly, to support our teachers in integrating AI where it enhances instruction, and to maintain the academic integrity that our community values."
What you want to avoid is a purpose statement that sounds defensive or fearful. If the opening paragraph reads like a warning label, students tune out before they reach the parts that matter.
Section 2: What is allowed
This is the section most policies skip or handle badly. Teachers and students need specific examples of acceptable AI use, not abstract principles. Be concrete.
Allowed uses might include: using AI to brainstorm ideas or generate topic lists before starting an assignment; using AI to explain a concept in simpler terms when studying; using AI to get feedback on a draft you have already written; using AI to help debug code in a computer science class; and using AI to generate practice problems for test preparation.
The key principle in each of these examples is that the student is still doing the thinking. AI is a tool in the process, not a replacement for it. Make this explicit in your policy so that students understand the boundary.
Consider organizing allowed uses by subject area or assignment type if that makes sense for your school. A blanket "AI is allowed for brainstorming" does not help a math teacher figure out whether students can use Wolfram Alpha on homework sets. The more specific you can be, the fewer gray areas you leave for teachers to interpret on their own.
Section 3: What is not allowed
Just as important as what is allowed is being clear about what is not. Again, specificity matters more than length.
Prohibited uses typically include: submitting AI-generated text, code, or other work as your own without disclosure; using AI tools during assessments, exams, or quizzes unless the teacher has explicitly permitted it; using AI to complete assignments that are designed to assess skills the student should be developing personally; and copying AI output into collaborative work without informing group members.
Notice the pattern: the prohibited uses are situations where AI replaces the student's own learning rather than supporting it. This connects back to the 4D Framework: if a student cannot pass the Delegation test ("should I be doing this myself?"), the use is probably not allowed.
One important nuance: give teachers the authority to set their own AI permissions for specific assignments. A history teacher might allow AI-assisted research for a term paper but prohibit it for an in-class document analysis. The school-wide policy should establish the baseline, and individual teachers should be able to adjust from there.
Section 4: Disclosure requirements
This is the section that makes or breaks a policy. Even when AI use is allowed, students need to be transparent about how they used it. Define what disclosure looks like at your school so that students do not have to guess.
A good disclosure requirement might look like this: "When you use AI tools in completing an assignment, include a brief note at the end of your work describing which tool you used, what you used it for, and how you modified or built on the AI output." Some schools use a simple checkbox form. Others ask for a short paragraph. Either works; what matters is consistency.
Make the disclosure process low-friction. If it takes longer to disclose AI use than it took to use the AI, students will skip it. A single sentence at the bottom of an assignment is usually sufficient: "I used ChatGPT to generate an initial outline for this essay, which I then revised and expanded."
The goal of disclosure is not surveillance. It is building a habit of transparency that students will need throughout their careers. Frame it that way in your policy.
Section 5: Teacher guidance
A school AI policy is not just about students. Teachers need to know what they can do with AI too, and many are unsure. Your policy should explicitly address teacher use.
Common teacher uses that your policy might support include: using AI to draft lesson plans or unit outlines (to be reviewed and customized); using AI to generate differentiated versions of reading materials for different student levels; using AI to create quiz and test questions; using AI to draft feedback comments on student writing, which the teacher then personalizes; and using AI to summarize research or professional development materials.
Your policy should also address what teachers should not do with AI. Entering student names, grades, or other personally identifiable information into AI tools is a privacy concern and likely violates FERPA (the Family Educational Rights and Privacy Act). Using AI-generated content without reviewing it first is a quality concern. Relying on AI detection tools for disciplinary decisions is unreliable and can disproportionately flag non-native English speakers.
Give teachers permission to experiment. Many educators feel anxious about AI because they think they need to be experts before they use it. Your policy can help by making clear that teachers are encouraged to explore these tools and that the school supports their learning.
Section 6: Consequences
This is where schools often overcorrect. The instinct is to treat undisclosed AI use the same as traditional plagiarism: a zero on the assignment, an academic integrity hearing, a notation on the transcript. That approach might make sense for egregious cases, but it fails for the much more common situation where a student used AI in a way they genuinely did not realize was off-limits.
A better approach is tiered consequences that prioritize education over punishment. A first offense might result in a conversation with the teacher and a chance to redo the assignment. A second offense might involve a meeting with the teacher and a parent or guardian. Repeated or egregious violations, like using AI during a proctored exam, would escalate to the school's existing academic integrity process.
The reasoning here is practical: AI norms are still forming. Students are figuring out the boundaries in real time, and so are adults. A policy that treats every misstep as a major infraction will create resentment and drive AI use underground rather than into the open where it can be guided.
Frame consequences around learning. The goal is not to catch students; it is to teach them how to work with AI honestly. A student who gets caught and then learns to disclose properly has actually learned something valuable.
Section 7: Review cycle
This is the section that most policies leave out entirely, and it is one of the most important. AI capabilities are changing every few months. A policy written in September may be outdated by January. Build a review cycle into the document itself.
I recommend reviewing and updating the policy every semester. That does not mean rewriting it from scratch; it means checking whether the examples still make sense, whether teachers have encountered situations the policy does not address, and whether new tools or capabilities require new guidance.
Assign someone to own this process. It could be a curriculum coordinator, a technology director, or a small committee of teachers. The point is that someone is responsible for making sure the policy stays current. A policy that was last updated 18 months ago signals to everyone that leadership is not paying attention.
Include a version date on the document. When students and teachers see "Last updated: [this semester]," it communicates that the school is actively engaged with this issue.
Common mistakes to avoid
Beyond the two traps I mentioned at the start (too vague or too restrictive), there are several other mistakes I see schools make repeatedly.
Banning AI entirely deserves a second mention because it is the most common. It does not work. Students will use AI on their personal devices regardless of what the policy says, and a ban prevents teachers from helping students learn to use these tools well. It also puts your school behind: students who graduate without AI literacy are at a disadvantage.
Making the policy too long is another frequent problem. Students will not read a 15-page AI policy, and most teachers will skim it at best. Aim for two to three pages. If it cannot fit in that space, you are probably trying to anticipate every possible scenario instead of establishing clear principles.
Not involving teachers in writing the policy is a mistake I see at the district level especially. Administrators draft the policy, distribute it to teachers, and expect compliance. The problem is that teachers are the ones who have to interpret and enforce it daily. They know which scenarios come up most often, which gray areas cause confusion, and which rules are realistic for their classrooms. Include them in the drafting process.
Relying on AI detection software is the last mistake worth mentioning here. These tools have high false-positive rates and can penalize students whose writing style happens to resemble AI output. Design better assignments instead: assignments that require personal reflection, in-class components, or iterative drafts are much harder to complete with AI alone.
The North Carolina situation
If you are reading this from a North Carolina school, you are navigating this in a specific context. Many NC districts have no AI policy at all. Others have informal guidelines that vary from school to school within the same district. A few have adopted policies, but they were written a year ago and have not been revisited.
The NC Department of Public Instruction has provided some general guidance, but the work of creating a usable policy still falls to individual schools and districts. That is both a challenge and an opportunity: you can build something that fits your community rather than adopting a one-size-fits-all state template.
Schools across the Triangle, the Triad, and the Charlotte area are working through this right now. If your school does not have a policy yet, you are not behind, but you should start. Every week without clear guidelines is a week where teachers and students are making it up as they go.
Get started with a free template
I have put together a free AI Classroom Policy Template that you can download and adapt for your school. It follows the structure outlined in this post and includes example language for each section. You can find it on my free resources page.
The template is a starting point, not a finished product. Take it, edit it for your school's context, run it past your teachers, and make it yours. The best AI policy is one that your community helped build and actually uses, not one that sits in a binder.