
The Questions Schools Should Be Asking About AI (But Often Aren’t)

Conversations about AI in schools often feel stuck between urgency and uncertainty. Leaders know the topic matters, but many are unsure where to begin, who should own the conversation, or how to move forward responsibly. Rather than offering quick answers, this article focuses on the questions schools should be asking to create clarity, consistency, and thoughtful leadership around AI use.


Why AI Conversations Often Stall in Schools

In many schools, conversations about AI do not stall because leaders are uninterested or resistant. More often, they slow down because the stakes feel high and the path forward feels unclear. When new tools intersect with student services, compliance, and professional judgment, hesitation is often a sign of responsibility, not avoidance.

One common barrier is the fear of getting it wrong. Leaders worry about privacy, ethics, and unintended consequences, especially in environments where mistakes can impact students and families. Without clear examples or shared guidance, it can feel safer to pause the conversation rather than risk moving too quickly.

Another challenge is the lack of clarity around responsibility. When AI use is informal or emerging organically, it is not always clear who should be setting expectations or monitoring use. Is it a technology issue, a compliance issue, or an instructional one? When ownership is ambiguous, conversations tend to stall because no one wants to make decisions in isolation.

Finally, mixed messaging across departments can create confusion. Teachers, clinicians, and administrators may hear different perspectives about AI, ranging from encouragement to caution to silence. Without alignment, staff are left to interpret expectations on their own, which can lead to inconsistency and uncertainty.

Together, these factors make it difficult for schools to move forward with confidence. Recognizing why these conversations stall is the first step toward creating clearer, more productive dialogue around AI use.

 

Who Is Responsible for AI Oversight in Schools

Schools are already navigating technology policies, data privacy requirements, and compliance expectations. As AI tools enter everyday workflows, the question is less about whether oversight exists and more about how clearly it is defined and applied.

In practice, AI oversight typically lives at the leadership and systems level, where decisions about instructional practice, student data, and compliance are already made. This often includes district administrators, special education leadership, and teams responsible for technology, compliance, or instructional guidance. The key is not creating something entirely new, but clearly connecting AI use to existing policies and assigning responsibility for how those policies are interpreted and applied.

Oversight cannot be informal or assumed, even when policies exist. Without clear ownership, expectations can be applied inconsistently across departments or roles. Educators may receive different messages depending on who they ask, or they may be left to interpret policy language on their own. Clear oversight helps ensure that guidance is consistent, current, and aligned with how AI is actually being used in schools.

It is also important to distinguish between guidance and enforcement. Guidance explains how existing policies apply to AI use, outlines appropriate boundaries, and supports professional judgment. Enforcement exists to address clear violations, not to monitor everyday decision-making. When schools are clear about this distinction, oversight feels supportive rather than punitive, and staff are more likely to engage openly and responsibly.

 

What AI Is Being Used for in Schools Right Now

In many schools, AI use is already happening in small, practical ways. These uses tend to focus on supporting preparation, communication, and organization, rather than replacing instructional or clinical decision-making. Understanding how AI is currently being used helps leaders ground the conversation in reality and respond with guidance that reflects actual practice.

Planning and brainstorming

One of the most common uses of AI in schools is planning and brainstorming. Educators and clinicians may use AI to generate lesson ideas, activity suggestions, or different ways to approach a topic when time is limited. In these cases, AI functions as a starting point, helping staff think through options or organize initial ideas before applying their own expertise.

This type of use supports creativity and efficiency without shifting responsibility. Planning decisions, instructional alignment, and goal-setting remain firmly in human hands, with AI simply helping reduce the time it takes to get ideas on the page.

Drafting and organizing communication

AI is also being used to draft and organize communication. This often includes emails to families, internal updates, or explanations of routines and expectations. By generating a first draft, AI can help educators focus on clarity and structure, especially for messages that are routine or repetitive.

Importantly, these drafts are reviewed, edited, and personalized before being shared. Tone, accuracy, and context are still guided by professional judgment, ensuring communication remains thoughtful and appropriate.

Supporting workflow efficiency

Beyond planning and communication, AI is sometimes used to support workflow efficiency. This might involve organizing notes, summarizing information for internal use, or turning informal lists into clearer outlines or checklists. These uses help streamline administrative tasks without introducing new content or decisions.

When used this way, AI supports organization rather than outcomes. It helps educators manage time and cognitive load while keeping responsibility for decisions, documentation, and student services exactly where it belongs.


What Data Should Never Be Entered Into AI Tools

When using AI in school settings, clear data boundaries are essential. Certain types of information should always remain off limits to protect student privacy and maintain compliance.

  • Student names and identifying information
    This includes full names, initials linked to identifiable details, student ID numbers, dates of birth, or any combination of information that could reasonably identify a student. Even partial details can become identifying when combined.

  • IEP content and evaluation data
    Individualized plans, assessment results, and evaluation reports contain sensitive information about student needs and services. AI tools should not be used to draft, summarize, analyze, or interpret this material.

  • Session notes tied to individual students
    Notes or observations connected to specific students should remain within secure, approved systems. While AI may support general organization or writing clarity, student-specific documentation should never be entered.

Keeping these boundaries clear allows schools to use AI for planning and organization without compromising privacy, trust, or compliance.
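For districts whose technology or compliance teams want a lightweight, programmatic reminder of these boundaries, a minimal sketch in Python is shown below. It is hypothetical: the roster, the patterns, and the function name flag_possible_student_data are illustrative assumptions, not an approved tool or a complete safeguard. The idea is simply to flag obvious patterns (ID-like numbers, date formats, roster names) so a person pauses and reviews before anything is sent to an AI tool.

    import re

    # Hypothetical example only: a small pre-submission screen that flags text which
    # may contain student-identifiable details before it is pasted into an AI tool.
    # The roster, patterns, and function name are illustrative assumptions, not a
    # complete or recommended safeguard.

    # Assumption: a locally maintained roster of student names that never leaves
    # district systems. Placeholder names are used here.
    STUDENT_ROSTER = {"Jordan Example", "Avery Sample"}

    ID_PATTERN = re.compile(r"\b\d{6,10}\b")                  # student-ID-like numbers
    DOB_PATTERN = re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b")  # common date formats

    def flag_possible_student_data(text: str) -> list[str]:
        """Return warnings for patterns that may identify a student."""
        warnings = []
        if ID_PATTERN.search(text):
            warnings.append("Contains a number that looks like a student ID.")
        if DOB_PATTERN.search(text):
            warnings.append("Contains a date that could be a date of birth.")
        for name in STUDENT_ROSTER:
            if name.lower() in text.lower():
                warnings.append(f"Contains a name from the student roster: {name}")
        return warnings

    if __name__ == "__main__":
        draft = "Progress meeting for Jordan Example (ID 20481736) on 3/14/2025."
        for warning in flag_possible_student_data(draft):
            print("Review before sending:", warning)

Even with a screen like this in place, the boundaries above still depend on staff judgment, because identifying details often emerge from combinations of information that a simple pattern check cannot catch.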


How Professional Judgment Fits Into AI Use

Professional judgment remains essential whenever AI is used in schools. No matter how advanced a tool may seem, human review is a non-negotiable part of responsible use. AI can generate suggestions, organize information, or help draft language, but it does not understand students, context, or nuance. Every output must be reviewed, revised, and approved by a qualified professional before it is used in any educational or clinical setting.

This review is not a formality. Educators and clinicians bring training, experience, and contextual understanding that AI cannot replicate. They understand individual student needs, classroom dynamics, and the broader systems in which decisions are made. Human review ensures that AI-supported work aligns with instructional goals, ethical standards, and the realities of each learning environment.

AI also cannot make instructional or clinical decisions. It cannot determine services, interpret progress, adjust goals, or respond to complex situations that require professional judgment. These decisions depend on observation, relationship-building, and expertise developed over time. Relying on AI for decision-making would remove critical context and introduce unnecessary risk.

When AI is used as a support rather than a substitute, professional judgment remains at the center of the work. Clear expectations around human review and decision-making help ensure that AI strengthens practice instead of undermining it.

How AI Use Intersects With Special Education Compliance

AI can be a great support in special education when it is used thoughtfully and within clear boundaries. The volume of documentation, communication, and planning required in special education is significant, and tools that help organize thinking or streamline drafting can ease some of that burden. At the same time, special education operates within a highly regulated framework, which means AI use must always align with existing compliance expectations.

Documentation and service delivery are central to this conversation. Special education records, progress reporting, and service decisions are governed by specific requirements designed to protect students and ensure appropriate support. AI can assist with organizing notes or improving clarity in drafts, but it cannot replace the processes used to determine services, monitor progress, or document delivery. All records must accurately reflect what occurred, who provided services, and how decisions were made. Human review and professional judgment remain essential.

Clear guidance also matters because inconsistency creates risk. Without shared expectations, AI use can vary widely across teams or roles. One educator may avoid it entirely out of caution, while another may use it more freely without realizing where boundaries should exist. This uneven use can lead to confusion, gaps in documentation, or misalignment with established procedures.

When schools provide clear guidance around appropriate AI use in special education, they reduce uncertainty for staff and protect students at the same time. Thoughtful, consistent practices help ensure that AI supports compliance rather than complicating it.

 

How Schools Can Avoid Widening Inequities With AI

As AI tools become more visible in schools, equity needs to be part of the conversation from the start. Without intentional planning, differences in access, training, and comfort can create uneven experiences for both staff and students. Schools that address these issues early are better positioned to use AI in ways that support, rather than divide, their communities.

Uneven access to tools and training is one of the most common challenges. Some staff may have access to approved tools, training opportunities, or time to explore new resources, while others do not. When access varies, so do confidence and consistency. Schools can reduce this gap by clearly identifying which tools are appropriate, ensuring access is equitable across roles, and providing shared training opportunities that reflect how AI is actually being used in practice.

Differences in staff comfort and confidence also play a role. Not every educator or clinician approaches new technology in the same way. Some may feel eager to experiment, while others may feel hesitant or concerned about making mistakes. Supportive training, clear examples, and open conversation can help normalize learning curves and reduce anxiety. When staff feel supported rather than judged, they are more likely to engage thoughtfully.

Avoiding inequities does not mean requiring uniform use. It means creating conditions where all staff have access to information, guidance, and support. When expectations are clear and resources are shared, AI can be used responsibly without reinforcing existing gaps.

 

What AI Training Should Look Like (and What It Shouldn’t)

As schools think about AI, training plays a critical role in shaping how tools are actually used. The most effective training supports educators and clinicians in making informed decisions, rather than making them feel monitored or constrained. When training is framed the wrong way, it can discourage honest questions and push AI use out of sight instead of guiding it responsibly.

Supportive training focuses on understanding, not policing. It creates space for staff to learn what AI can and cannot do, where boundaries exist, and why those boundaries matter. This type of training acknowledges that educators are professionals who want to do the right thing, and it equips them with the information they need to make thoughtful choices. Policing, on the other hand, tends to emphasize surveillance or consequences, which can shut down conversation and increase fear rather than clarity.

Clear examples are just as important as clear rules. Vague guidance often leaves staff guessing how policies apply to real situations. Concrete examples of appropriate and inappropriate use help bridge that gap. Seeing how AI can be used for planning or drafting, and where it should not be used at all, makes expectations easier to understand and apply consistently.

When training combines supportive messaging with practical examples, it builds confidence and trust. Staff are more likely to engage openly, ask questions, and use AI in ways that align with school values and compliance expectations.

 

What Responsible AI Use Looks Like in Practice

Responsible AI use in schools is less about specific tools and more about how expectations are set and supported. In practice, it tends to share a few common features that help protect students, support staff, and reduce risk.

  • Clear boundaries around appropriate use
    Staff understand what AI can be used for and what is off limits. Planning, brainstorming, and drafting may be appropriate, while entering student-identifiable data or relying on AI for decisions is not. These boundaries are stated plainly and reinforced consistently.

  • Shared expectations across roles and teams
    Teachers, clinicians, and administrators operate from the same understanding of responsible use. Expectations are not left to individual interpretation or passed informally between teams. This consistency reduces confusion and supports collaboration.

  • Human review built into every use case
    AI-generated content is always reviewed and approved by a professional before it is used. Human judgment remains central, ensuring that outputs align with instructional goals, ethical standards, and student needs.

  • Oversight without surveillance
    Oversight focuses on guidance, support, and clarity rather than monitoring individual behavior. Schools set expectations and provide support without creating a culture of surveillance or fear. When issues arise, they are addressed thoughtfully and constructively.

  • Ongoing conversation, not one-time decisions
    Responsible AI use is revisited as tools evolve and practices change. Schools create space for continued dialogue, reflection, and adjustment rather than treating AI guidance as static.

Together, these practices create an environment where AI supports educators and clinicians without undermining trust or professional judgment.

 

A Thoughtful Approach to AI in Schools

At Lighthouse, our approach to AI conversations with schools is rooted in thoughtfulness and care. We see AI as a tool that can support educators and clinicians when used intentionally, but never as a substitute for professional judgment. Our role is to help schools think through AI use in ways that remain student-centered, aligned with existing expectations, and mindful of compliance. That means focusing on clarity, boundaries, and partnership rather than pushing quick solutions or one-size-fits-all answers.

Strong leadership around AI does not require having everything figured out. It requires asking the right questions, creating space for thoughtful discussion, and building systems that support safe, consistent practice. When schools focus on questions rather than rushing toward decisions, they create clearer guidance, reduce risk, and support educators in using tools responsibly.
