The increased adoption of artificial intelligence in K-12 schools is directly correlated with heightened risks to students, including data breaches, tech-enabled harassment, AI system failures, and troubling student-technology interactions, despite the technology's potential educational benefits.
Objective
The primary goal of this study is to examine the current status of artificial intelligence (AI) use in K-12 schools during the 2024-25 school year and identify the emerging risks associated with increased AI adoption. The research aims to provide education leaders, policymakers, and communities with concrete evidence about how AI use correlates with specific harms to students, including data breaches and ransomware attacks, tech-enabled sexual harassment and bullying, AI systems that malfunction, and concerning interactions between students and AI technology. By documenting these risks, the study seeks to enable stakeholders to develop prevention and response strategies that allow schools to leverage AI's benefits while protecting students from potential harms.
Methods
The research employed comprehensive online surveys conducted between June and August 2025 with three nationally representative samples: 1,030 students in grades 9-12, 806 teachers in grades 6-12, and 1,018 parents of students in grades 6-12. Quotas were established to ensure demographic representativeness across all three audiences nationwide, with data weighted as necessary to align with key demographics including gender, race, ethnicity, and sexual orientation.
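The report does not describe its exact weighting procedure, but as a rough illustration of what weighting survey data to key demographics typically involves, the following is a minimal sketch of simple post-stratification cell weighting; the demographic categories, population targets, and function name are hypothetical.

```python
# Hypothetical sketch of post-stratification cell weighting; the report does not
# specify its procedure, and the categories and targets below are invented.
from collections import Counter

def cell_weights(sample_cells: list[str], population_shares: dict[str, float]) -> dict[str, float]:
    """Weight for each demographic cell = population share / sample share."""
    counts = Counter(sample_cells)
    n = len(sample_cells)
    return {cell: population_shares[cell] / (counts[cell] / n) for cell in counts}

# Example: a sample that over-represents suburban respondents relative to
# (hypothetical) national shares.
sample = ["urban"] * 300 + ["suburban"] * 500 + ["rural"] * 200
targets = {"urban": 0.31, "suburban": 0.52, "rural": 0.17}
print(cell_weights(sample, targets))  # {'urban': 1.03..., 'suburban': 1.04, 'rural': 0.85}
```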
The surveys measured and tracked changes in perceptions, experiences, training, engagement, and concerns about various aspects of educational technology, including AI use in teaching and learning, student data privacy, student activity monitoring, content filtering, AI literacy, non-consensual intimate imagery (NCII), deepfakes, and related topics. This study represents the Center for Democracy & Technology's eighth poll among teachers (conducted since 2020), seventh among parents, and fifth among students, allowing for year-over-year comparisons on many metrics.
The methodology categorized respondents based on the intensity of AI use in their schools. Teachers were grouped by how many ways they used AI for school-related purposes (0-2 ways as "few to none," 3-6 ways as "some," and 7-10 ways as "many"). Students were similarly categorized based on how many ways they reported their school using AI (0-1 ways, 2-3 ways, or 4-6 ways). Parents were grouped by how frequently they have back-and-forth conversations with AI (never; occasionally, meaning up to a few times per month; or frequently, meaning at least once per week).
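As a concrete reading of these cut points, a minimal bucketing sketch follows; the function names are illustrative, and the student bucket labels are assumed to mirror the teacher labels used in the report.

```python
# Minimal sketch of the AI-use intensity buckets described above; function names
# and the student bucket labels are assumptions, not taken verbatim from the report.
def teacher_ai_intensity(ways_used: int) -> str:
    """Bucket a teacher by how many of 10 school-related AI uses they report."""
    if ways_used <= 2:
        return "few to none"   # 0-2 ways
    if ways_used <= 6:
        return "some"          # 3-6 ways
    return "many"              # 7-10 ways

def student_school_ai_intensity(ways_reported: int) -> str:
    """Bucket a student by how many of 6 school AI uses they report."""
    if ways_reported <= 1:
        return "few to none"   # 0-1 ways
    if ways_reported <= 3:
        return "some"          # 2-3 ways
    return "many"              # 4-6 ways

assert teacher_ai_intensity(5) == "some"
assert student_school_ai_intensity(4) == "many"
```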
The survey included specific definitions for technical terms to ensure consistent understanding among respondents, including definitions for "AI for back-and-forth conversations," "deepfakes," "authentic NCII," "deepfake NCII," and "student activity monitoring." Sample sizes for subgroup analyses are noted throughout the report, with particular caution advised for interpretations based on fewer than 50 respondents.
Key Findings
The study reveals several critical findings about AI adoption and associated risks in K-12 schools:
Widespread AI Adoption with Variable Intensity: Large majorities of teachers (85%), students (86%), and parents (75%) report having used AI, though personal uses remain more common than work or school uses. Half of teachers (50%) report students using AI in school, while 73% of students report personal AI use. The depth of AI integration varies considerably, however: teachers and students are roughly evenly distributed across the "few to none," "some," and "many" categories of AI use in school settings (approximately one-third in each category).
AI Divide by Demographics: Parents with higher incomes or those living in urban and suburban areas report significantly higher rates of AI use for themselves and their children compared to rural or lower-income families. For example, 81% of parents earning $100K or more report their child has used AI, compared to only 58% of those earning under $50K, suggesting an emerging AI access divide.
Correlation Between AI Use and Risk Exposure: The research demonstrates a clear pattern: the more ways a school uses AI, the more likely teachers and students are to report experiencing various risks. Teachers using AI for many school-related reasons are significantly more likely to report their school experienced a large-scale data breach (28% versus 18% for those using AI in few to no ways). Similarly, 27% of teachers using AI extensively report having heard of deepfakes at their school, compared to only 13% of those using AI minimally.
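The report describes such gaps as statistically significant, though this summary does not detail the tests used. As a rough illustration of how a gap like 28% versus 18% can be checked, the sketch below runs a standard two-proportion z-test; the subgroup sizes (roughly one-third of the 806 teachers each) are hypothetical, and this is not necessarily the procedure the authors applied.

```python
# Illustrative two-proportion z-test; subgroup sizes are hypothetical and this is
# not necessarily the significance test used in the report itself.
from math import sqrt, erfc

def two_proportion_z(p1: float, n1: int, p2: float, n2: int) -> tuple[float, float]:
    """Return (z statistic, two-sided p-value) for the difference p1 - p2."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, erfc(abs(z) / sqrt(2))  # P(|Z| >= |z|) under the null hypothesis

# 28% vs. 18% of teachers reporting a large-scale breach, assuming ~270 per group:
z, p = two_proportion_z(0.28, 270, 0.18, 270)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")  # roughly z = 2.76, p = 0.006
```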
Deepfakes and NCII Remain Significant Issues: Over one-quarter of teachers (27%) and students (31%) report hearing about deepfakes at their school during the 2024-25 school year, with deepfake NCII affecting 8% of teachers' schools and 12% of students' schools. The prevalence increases dramatically with AI use—among students whose schools use AI for many reasons, 45% have heard of deepfakes and 14% have heard of deepfake NCII. However, only 21% of teachers report their school has shared policies about addressing deepfake NCII, and fewer than 10% received guidance on specific response protocols.
Problematic Student-AI Interactions: A substantial portion of students report using AI chatbots for non-academic purposes that raise concerns about their development and well-being. Among all students, 42% report they or a friend used AI as a friend or companion, 42% to escape from real life, 42% for mental health support, and 19% for romantic relationships. These percentages increase significantly when schools use AI more extensively—59% of students whose schools use AI for many reasons report using it as a friend or companion, compared to 25% in schools with minimal AI use. Notably, 31% of students having personal conversations with AI do so using school-provided devices, tools, or software.
AI System Failures and Trust Issues: Teachers using AI extensively are substantially more likely to report negative consequences from AI systems. Among teachers using AI for many school-related reasons, 23% report an AI system failed to work as described, 13% report an AI system did not treat students fairly, and 10% report AI use damaged the school's trust with the community—all significantly higher than among teachers using AI minimally (8%, 4%, and 4% respectively).
Academic Integrity Challenges Without Proportionate Consequences: While 71% of teachers report that student AI use has created additional burdens for determining whether work is authentic, reported consequences for students using AI inappropriately have not significantly increased year-over-year. Approximately 35% of teachers report students received some form of negative consequence for proven AI cheating, but this represents only a modest change from previous years despite growing concerns.
Gaps in AI Literacy Training: Although 48% of teachers and 48% of students report receiving some AI training or guidance from their schools (with 86-87% finding it helpful), significant gaps remain in coverage of critical topics. Fewer than one-quarter of teachers received training on general AI risks (22%), and only 14% received guidance on what to do when encountering AI system issues. For students, only 17% received information about general AI risks and 14% about handling AI system problems. Moreover, teachers, students, and parents have misaligned priorities about what AI training should cover, with parents prioritizing privacy protections, students emphasizing school policies, and teachers focusing on effective AI use and detecting AI-generated work.
Concerns Versus Use Patterns: Lower levels of AI use correspond with higher levels of concern among teachers and parents. For example, 78% of teachers using AI minimally worry that AI creates additional burdens for verifying student work authenticity, compared to 48% of those using AI extensively. However, the pattern reverses for students—those whose schools use AI for many reasons express higher levels of concern (56% worry AI will treat them unfairly, versus 23% in schools with minimal AI use).
Student Activity Monitoring Remains Pervasive: Nearly 90% of teachers and 87% of students report their schools conduct student activity monitoring, with 29% of teachers reporting monitoring occurs on students' personal devices and 39% reporting it occurs outside school hours. However, only 50% of parents are aware their child's school conducts such monitoring. Common consequences persist, with 49% of students reporting they or someone they know got in trouble for something seen through monitoring, and 24% reporting a student was contacted by law enforcement due to monitoring findings.
Privacy Concerns Remain High: Parents (69%) and students (50%) remain more concerned than teachers (35%) about student data privacy and security. These concerns are elevated among those whose schools use AI more extensively and among parents of students with IEPs or 504 plans (73% versus 67% for other parents). Approximately 23% of teachers report their school experienced a large-scale data breach during the 2024-25 school year, consistent with 2023-24 levels.
Special Education AI Use Expanding Rapidly: More than half (57%) of licensed special education teachers report using AI to help develop IEPs and/or 504 plans, a dramatic increase from 39% in 2023-24. The use of AI to fully write IEPs or 504 plans increased from 23% to 30% year-over-year. Students with IEPs or 504 plans are more likely to use AI for back-and-forth conversations (73% versus 63% of other students) and to use it frequently (50% versus 30%), while simultaneously expressing heightened privacy concerns (60% versus 42% of other students).
Immigration Enforcement Data Collection: Fifty percent of teachers report their schools collect information about students' immigration status, with 23% indicating their school collects whether a student is undocumented, a practice that may violate Plyler v. Doe protections. Seventeen percent of teachers report student information was shared with immigration enforcement, consistent with 2023-24 levels. Concerningly, 13% of teachers report staff members independently reported community members to immigration enforcement without being asked for information.
Gender Identity Data and Notification Policies: Only 29% of teachers report their schools collect "non-binary" as a gender category in official student records, with even fewer collecting "transgender" (21%) or "intersex" (7%). Regarding name and pronoun changes, schools vary widely in notification policies: 29% of teachers report their schools require notifying parents when students request different names or pronouns, 27% allow teacher discretion, and 23% prohibit notification without student permission. LGBTQ+ students and their parents view mandatory notification policies significantly more negatively than other families.
Implications
The findings contribute substantially to understanding AI's impact on K-12 education and inform critical policy and practice decisions:
Evidence-Based Risk Management: The study provides concrete, quantifiable evidence that increased AI adoption in schools is directly correlated with increased exposure to specific, measurable risks. This correlation enables education leaders to make informed decisions about AI implementation, develop targeted risk mitigation strategies, and allocate resources appropriately for prevention and response efforts. The clear relationship between AI use intensity and risk exposure suggests that schools cannot simply adopt AI tools without simultaneously investing in comprehensive safety, security, and oversight measures.
Need for Comprehensive AI Governance Frameworks: The disconnect between rapid AI adoption (85% of teachers using AI) and limited training coverage on risks (only 22% receiving general risk training) reveals a critical gap in school preparedness. Schools are implementing AI without adequate frameworks for governance, oversight, and accountability. The finding that 24% of teachers report AI was automatically added to tools they already use underscores the need for proactive procurement and vetting processes rather than reactive responses to vendor-driven AI integration.
Student Well-Being and Development Concerns: The widespread use of AI chatbots by students for emotional support, companionship, and even romantic relationships (42% for mental health support, 42% as friends, 19% for romantic relationships) raises fundamental questions about healthy adolescent development. The fact that 31% of these personal interactions occur through school-provided technology suggests schools may be inadvertently facilitating potentially harmful relationships between students and AI systems without adequate safeguards, monitoring, or mental health support structures.
Academic Integrity Evolution: The study reveals a complex landscape where 71% of teachers report AI creates burdens for verifying work authenticity, yet consequences for students remain relatively stable. This suggests schools are still developing effective approaches to academic integrity in the AI era. The emergence of lawsuits over AI-related discipline and the finding that 49% of students believe teachers using AI "aren't really doing their job" indicate that fundamental questions about teaching, learning, and assessment remain unresolved.
Privacy and Civil Rights Intersection: The research demonstrates how emerging AI risks compound existing privacy and civil rights concerns, particularly for vulnerable student populations. Students with disabilities, LGBTQ+ students, and immigrant students face heightened risks as schools collect more sensitive data (50% collect immigration status information, including documentation status) and implement AI systems without adequate training on potential biases or failures. The 60% privacy concern rate among students with IEPs/504 plans compared to 42% of other students signals justified apprehension about how AI systems may handle sensitive disability information.
Stakeholder Alignment Challenges: The significant misalignment between what teachers, students, and parents prioritize for AI training creates implementation challenges. Parents prioritize privacy protections (their top concern), students emphasize knowing school policies, and teachers focus on effective use and plagiarism detection. This misalignment, combined with low parental awareness (only 50% know about student activity monitoring) and minimal parental engagement (only 20% of schools asked for parent input on AI use), sets the stage for community backlash and erosion of trust.
Technology Company Accountability: The finding that AI was automatically added to 24% of teachers' existing tools, combined with widespread concerns about companies accessing student data (37% of teachers, 54% of parents concerned), highlights the need for stronger vendor accountability and transparency requirements. The correlation between increased AI use and increased data breaches (28% of high-AI-use schools versus 18% of low-use schools) suggests that rapid commercial AI integration may be outpacing security and privacy protections.
Limitations
The study acknowledges several important limitations:
Self-Reported Data and Perception Bias: All findings rely on self-reported survey data from teachers, students, and parents, which may be subject to recall bias, social desirability bias, or misunderstanding of technical terms despite provided definitions. Respondents' perceptions of whether their school uses AI or has experienced a data breach may not align with objective reality. Some respondents may be unaware of AI uses at their school, while others may mistake non-AI systems for AI.
Correlation Versus Causation: While the study demonstrates clear correlations between increased AI use and increased risk exposure, it cannot definitively establish causation. Schools that adopt AI extensively may differ from schools that use AI minimally in ways not measured by the survey—for example, they may have larger technology budgets, more complex IT systems, or different student populations—any of which could independently contribute to risk exposure. The relationship may be bidirectional, with some risks potentially driving AI adoption (such as using AI for student monitoring) rather than resulting from it.
Snapshot in Time: The surveys were conducted between June and August 2025, capturing experiences primarily from the 2024-25 school year. Given the rapid pace of AI development and deployment in education, findings may quickly become outdated. Some AI-related risks and uses may have emerged after the survey period, and the regulatory landscape continues to evolve with new federal administration policies on issues like immigration enforcement access to school data and gender identity protections.
Sample Size Constraints for Subgroups: While the overall samples are nationally representative, some subgroup analyses involve relatively small sample sizes, particularly for specialized populations. For example, only 62 parents of LGBTQ+ students were surveyed, and some teacher subgroups (such as the 37 teachers who heard of deepfake NCII) fall below the threshold for robust statistical inference. The report appropriately notes these limitations, but readers should exercise caution when interpreting findings for smaller subgroups.
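To make the small-sample caution concrete, the sketch below computes the approximate 95% margin of error for a reported proportion at several sample sizes; it assumes a simple random sample, so the survey's weighted estimates would carry somewhat wider margins.

```python
# Approximate 95% margin of error for an estimated proportion; assumes a simple
# random sample, so the survey's actual (weighted) margins would be a bit wider.
from math import sqrt

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of the normal-approximation confidence interval for a proportion."""
    return z * sqrt(p * (1 - p) / n)

for n in (1030, 806, 100, 62, 37):  # full samples vs. small subgroups
    print(f"n = {n:>4}: +/- {margin_of_error(0.5, n):.1%}")
# n = 1030 -> about +/- 3.1%; n = 62 -> about +/- 12.4%; n = 37 -> about +/- 16.1%
```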
Limited Scope on Certain Issues: The study focuses on specific types of AI uses and risks but may not capture the full spectrum of AI applications in schools or all potential harms. For example, the survey measures whether schools use AI for IEP development but doesn't assess the quality of AI-generated IEPs or their impact on student outcomes. Similarly, while the study documents that students use AI chatbots for mental health support, it doesn't measure whether this use is beneficial, neutral, or harmful to their mental health.
Geographic and Demographic Generalizability: While the samples are nationally representative at the aggregate level, the study identifies an "AI divide" with lower-income and rural families reporting less AI use. This means the experiences and risks documented may not equally represent all communities. Schools in well-resourced urban and suburban areas may face different AI-related challenges than under-resourced rural schools, and the national averages may obscure these important differences.
Year-Over-Year Comparison Limitations: Some year-over-year comparisons are complicated by changes in question wording or methodology. For example, the 2024 survey asked about ChatGPT or other generative AI platforms specifically, while the 2025 survey asked more broadly about AI. These differences may affect the comparability of results across years and could account for some observed changes.
Missing Perspectives: The study does not include perspectives from school administrators, IT directors, school board members, or AI vendors, all of whom play important roles in AI adoption decisions and risk management. Additionally, the study does not capture the experiences of elementary school students (grades K-5) or their teachers, despite evidence that AI adoption is occurring at these grade levels as well.
Future Directions
The authors' findings and their broader implications suggest several critical areas for future research:
Longitudinal Impact Studies: Future research should conduct longitudinal studies tracking the same students, teachers, and schools over multiple years to better understand the long-term effects of AI use on academic outcomes, student development, mental health, and critical thinking skills. Such studies could establish whether the correlations identified in this research reflect causal relationships and could identify which specific AI implementations and governance approaches lead to better or worse outcomes for students.
Intervention and Mitigation Research: Given the clear correlation between AI use intensity and risk exposure, research should evaluate the effectiveness of specific interventions designed to mitigate these risks. Studies could compare different approaches to AI governance, training programs, vendor vetting processes, privacy protections, and incident response protocols to identify evidence-based best practices. Research should also examine whether certain AI training topics or formats are more effective at preventing harms than others.
Student Development and Well-Being Studies: The finding that substantial percentages of students use AI chatbots for emotional support, companionship, and even romantic relationships warrants dedicated developmental psychology research. Studies should examine how regular interactions with AI systems affect adolescent social-emotional development, relationship formation skills, emotional regulation, identity development, and mental health. Research should also explore whether and under what conditions AI-based support might complement or undermine human relationships and professional mental health services.
Academic Integrity in the AI Era: Future research should explore effective approaches to maintaining academic integrity while acknowledging that AI tools are increasingly ubiquitous. Studies could examine different school policies regarding AI use on assignments, the effectiveness and accuracy of AI detection tools, the educational impact of various consequences for inappropriate AI use, and alternative assessment methods that remain meaningful in an AI-rich environment. Research should also investigate how to help students develop appropriate judgment about when AI use enhances versus diminishes learning.
Equity and Access Research: The identified "AI divide" based on income and geographic location requires deeper investigation. Research should examine what drives these disparities (access to technology, teacher training, school resources, community attitudes), how they affect educational and career opportunities, and what interventions might promote more equitable AI access and literacy. Studies should also investigate whether AI adoption is exacerbating or ameliorating existing educational inequities.
Special Education and AI Research: The rapid increase in AI use for IEP development (from 39% to 57% of special education teachers in one year, with 30% now using AI to fully write IEPs) demands focused research on quality, appropriateness, and outcomes. Studies should examine whether AI-assisted IEPs are as individualized and effective as human-developed plans, whether AI systems exhibit bias in recommendations for students with disabilities, and how to ensure AI augments rather than replaces the expertise and relationship-building that are central to effective special education.
Deepfake and NCII Prevention and Response Research: With over one-quarter of teachers and nearly one-third of students reporting awareness of deepfakes at their schools, research should evaluate prevention programs, school response protocols, legal remedies, and support services for victims. Studies should examine which consequences most effectively deter deepfake creation and sharing while maintaining educational and restorative approaches, and should investigate the long-term psychological effects on victims and effective therapeutic interventions.
Privacy and Civil Rights in AI Systems: Future research should investigate how AI systems in schools handle sensitive information about vulnerable student populations, including students with disabilities, LGBTQ+ students, and immigrant students. Studies should audit AI systems for bias, examine data governance practices, evaluate the effectiveness of privacy training for school staff, and document cases where AI systems or data practices have violated student privacy or civil rights.
Student Activity Monitoring Effectiveness and Harms: Despite nearly ubiquitous adoption (89% of teachers report their schools use it), student activity monitoring remains controversial with documented harms (24% of students report a student was contacted by law enforcement). Research should rigorously evaluate whether student activity monitoring actually prevents the harms it aims to address (self-harm, violence, etc.) or whether it primarily generates false positives and chills legitimate student expression. Studies should also examine alternatives to surveillance-based approaches to student safety.
Stakeholder Engagement and Governance Models: The low rates of schools seeking parent input on AI use (only 20% of parents report being asked) and the misalignment of stakeholder priorities for AI training suggest the need for research on effective community engagement models. Studies should examine governance structures that include diverse stakeholders in AI decision-making, communication strategies that improve transparency about AI use, and processes for addressing conflicts between different stakeholder groups' priorities and values.
Teacher Preparation and Professional Development: Future research should evaluate different models for preparing preservice and in-service teachers to use AI responsibly and effectively. Studies should compare various professional development approaches, identify the knowledge and skills teachers most need, examine how to prevent over-reliance on AI tools that may deskill teachers or distance them from students, and investigate how to help teachers maintain professional judgment and expertise in an AI-augmented environment.
Title and Authors: "Hand in Hand: Schools' Embrace of AI Connected to Increased Risks to Students" by Elizabeth Laird, Maddy Dwyer, and Hannah Quay-de la Vallee, Center for Democracy & Technology.
Published: October 2025
Published by: Center for Democracy & Technology (CDT)