An 18-hour AIPACK training model significantly improves AI knowledge, pedagogical skills, and content integration capabilities among both inservice and preservice non-STEM elementary teachers, with no significant performance differences between the two groups.
Objective
The main goal of this study was to develop and evaluate a nationwide teacher training model designed to enhance AI Pedagogical and Content Knowledge (AIPACK) among inservice elementary teachers (ISETs) and preservice elementary teachers (PSETs) with non-STEM backgrounds. Specifically, the researchers aimed to: (1) assess the initial AIPACK levels of both teacher groups before training, (2) evaluate the effectiveness of the AIPACK training model in improving AI knowledge (AIK), AI content knowledge (AICK), AI pedagogical knowledge (AIPK), and integrated AIPACK, and (3) compare the learning improvements between ISETs and PSETs following the training intervention.
Given the rapid integration of artificial intelligence technologies in education and the identification of significant gaps in teacher preparedness—particularly among elementary teachers who typically lack formal AI training—this research addressed an urgent need for structured professional development. The study was grounded in UNESCO's AI Competency Framework for Teachers and Celik's intelligent TPACK framework, extending the well-established Technological Pedagogical and Content Knowledge (TPACK) model to specifically address AI integration in non-STEM elementary education contexts.
Methods
The research employed a quasi-experimental design with a nonequivalent groups pretest-posttest approach. The study included 59 participants: 31 ISETs from 22 elementary schools across northern, central, and southern Taiwan (experimental group) and 28 PSETs enrolled in a teacher preparation program (control group). All participants had non-STEM educational backgrounds.
The AIPACK training program consisted of 18 hours of instruction delivered on different schedules depending on the group: ISETs completed the training intensively over 2.5 days during summer vacation, while PSETs completed the same content over 6 weeks as part of their regular semester curriculum. Both groups received identical instructional methods and materials; instruction was delivered primarily face-to-face, with participants using tablets or laptops, and was supplemented by online video content.
The training followed a five-stage progression based on the AIPACK framework: (1) Introduction to TPACK and AIPACK concepts; (2) AIK stage focusing on AI technologies, machine learning, natural language processing, and ethical considerations through online videos (1 hour); (3) AIPK stage introducing general AI-driven tools, platforms, and resources for instruction and learning through expert modeling (6 hours); (4) AICK stage demonstrating subject-specific AI integration in Chinese, mathematics, and natural science through hands-on activities with expert teachers (9 hours); and (5) AIPACK stage featuring six expert teachers providing 10-minute instructional videos on AI integration across six elementary subjects (1 hour online).
Data collection utilized a 26-item AIPACK questionnaire developed by the authors, with items rated on a 5-point Likert scale. The questionnaire demonstrated strong reliability (overall scale α=0.98; subscales ranging from α=0.88 to α=0.95) and was validated through three rounds of expert reviews. Participants completed the questionnaire as both pretest and posttest. Data analysis included descriptive statistics, one-sample t-tests, independent-sample t-tests, dependent-sample t-tests, and one-way analysis of covariance with effect size calculations.
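The dependent-sample (paired) t-test and its effect size follow standard formulas and can be sketched in a few lines of standard-library Python. This is an illustration of the method only; the pre/post scores below are hypothetical, not the study's data, and the study's own analyses were presumably run in a statistics package.

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Dependent-sample t-test on matched pre/post scores.

    Returns (t, d): the t statistic with n-1 degrees of freedom,
    and Cohen's d for paired designs (mean gain / SD of gains).
    """
    gains = [b - a for a, b in zip(pre, post)]
    n = len(gains)
    m, s = mean(gains), stdev(gains)   # stdev uses the n-1 denominator
    t = m / (s / math.sqrt(n))
    d = m / s
    return t, d

# Hypothetical 5-point Likert scores for six teachers (illustrative only).
pre  = [3.2, 3.5, 3.0, 3.8, 3.4, 3.1]
post = [4.0, 4.2, 3.9, 4.3, 4.1, 3.8]
t, d = paired_t(pre, post)   # t ≈ 13.21, d ≈ 5.39 for this toy sample
```

A significant positive t indicates that posttest scores exceed pretest scores beyond chance; Cohen's d expresses the average gain in standard-deviation units, which is how the effect sizes in the findings below can be compared across groups.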
The training incorporated Taiwan's Ministry of Education AI learning companion (TALPer) and various AI-driven tools including ChatGPT, Napkin, MagicSchool, Suno, Gamma, Midjourney, Copilot, and Claude for generating lesson plans, instructional materials, assessments, and multimedia content.
Key Findings
The study produced several notable findings demonstrating the effectiveness of the AIPACK training model:
Initial AIPACK Levels: Before training, both ISETs and PSETs demonstrated sufficient prior knowledge in most AIK areas, with mean scores significantly above 3.5 on most items. However, both groups showed insufficient understanding in three specific areas: basic programming principles underlying AI tools, selecting appropriate AI tools for classroom management, and taking leadership roles in facilitating AI-integrated lesson planning. Importantly, no significant differences existed between ISETs and PSETs on any of the four AIPACK dimensions at pretest, indicating comparable baseline knowledge levels.
Effectiveness of Training: The AIPACK training model produced statistically significant improvements across all four knowledge dimensions for both teacher groups. For ISETs, dependent t-tests revealed significant gains in AIK (t=4.09, p<.01), AICK (t=4.17, p<.01), AIPK (t=4.49, p<.01), and AIPACK (t=4.58, p<.01). PSETs demonstrated even stronger improvements: AIK (t=5.42, p<.01), AICK (t=5.79, p<.01), AIPK (t=5.80, p<.01), and AIPACK (t=5.95, p<.01). Effect sizes were larger for PSETs than for ISETs on all dimensions, suggesting that the preservice teachers may have gained somewhat more from the training.
Areas of Non-Improvement: Item-level analysis revealed that certain knowledge areas did not show significant improvement. For both ISETs and PSETs, understanding of potential ethical concerns and risks associated with AI applications (AIK #5) did not significantly improve. ISETs also showed no significant improvement in using AI-driven tools to prepare instructional materials (AICK #1), while PSETs demonstrated no significant change in their perceived sufficiency of knowledge to use AI-driven tools (AIK #4).
Group Comparisons: One-way analysis of covariance, controlling for pretest scores, revealed no significant differences between ISETs and PSETs in posttest scores across any AIPACK dimension (AIK: F=1.05, p=.31; AICK: F=0.61, p=.44; AIPK: F=0.91, p=.34; AIPACK: F=0.53, p=.47). This finding indicates that both groups achieved similar knowledge levels after training regardless of their teaching experience status.
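A one-covariate ANCOVA of this kind is equivalent to comparing two nested regressions of posttest on pretest, with and without a group dummy: the F statistic tests whether group membership explains posttest variance beyond what the pretest already explains. A minimal pure-Python sketch of that equivalence, using synthetic numbers rather than the study's data:

```python
def ols_sse(X, y):
    """Least-squares fit via the normal equations; returns residual SSE."""
    n, k = len(y), len(X[0])
    xtx = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)] for a in range(k)]
    xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    # Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (xty[r] - sum(xtx[r][c] * beta[c] for c in range(r + 1, k))) / xtx[r][r]
    return sum((y[i] - sum(X[i][a] * beta[a] for a in range(k))) ** 2 for i in range(n))

def ancova_f(pre, post, group):
    """F-test for a two-level group effect on post, adjusting for pre."""
    n = len(post)
    sse_full = ols_sse([[1.0, pre[i], group[i]] for i in range(n)], post)
    sse_reduced = ols_sse([[1.0, pre[i]] for i in range(n)], post)
    return (sse_reduced - sse_full) / (sse_full / (n - 3))  # df = (1, n-3)

# Synthetic example: pretest balanced across two groups, no group effect.
pre = [1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 3.0, 4.0]
group = [0.0] * 4 + [1.0] * 4
post = [1.1, 1.9, 3.2, 3.8, 1.0, 2.1, 2.9, 4.1]
F = ancova_f(pre, post, group)  # small F: group adds little beyond pretest
```

A small F (large p), as reported for all four AIPACK dimensions, means the group dummy adds essentially nothing once pretest scores are controlled for, which is the basis of the no-difference conclusion above.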
Implications
This research makes several important contributions to the field of AI in education:
Framework Validation: The study validates the AIPACK framework as an effective extension of TPACK specifically for AI integration in education. By demonstrating that non-STEM elementary teachers can successfully develop AI competencies through structured training, the research supports the framework's applicability beyond computer science and STEM contexts where most previous AI education research has focused.
Professional Development Model: The findings provide empirical evidence for a scalable, practical professional development model that can be implemented both as intensive summer training for inservice teachers and as integrated coursework for preservice teachers. The model's effectiveness across both delivery formats suggests flexibility in implementation to accommodate different institutional and scheduling constraints.
Bridging the AI Readiness Gap: By addressing the identified gap in teachers' preparedness to integrate AI into instruction—particularly among elementary teachers with non-STEM backgrounds—this research offers a concrete solution to a pressing educational challenge. The training model equips teachers with foundational AI knowledge before progressing to pedagogical and content-specific applications, following a scaffolded approach that supports learning progression.
Subject-Specific Integration: The incorporation of expert teacher modeling in subject-specific contexts (Chinese, mathematics, natural science) demonstrates how AI tools can be meaningfully integrated into traditional elementary curriculum areas, not just STEM subjects. This broadens the potential impact of AI in education beyond computational thinking and coding to encompass the full elementary curriculum.
Equal Effectiveness Across Experience Levels: The finding that ISETs and PSETs achieved similar outcomes despite their different levels of teaching experience suggests that AI integration competency is relatively independent of general teaching experience. This has implications for how teacher education programs and professional development initiatives are designed, indicating that AI training can be equally effective for teachers at different career stages.
Limitations
The authors acknowledge several important limitations that should be considered when interpreting the findings:
Sample Size and Scope: The study included only 59 participants from a single geographic region (Taiwan), limiting the generalizability of findings to other educational contexts, cultures, and teacher populations. The small sample size also constrains statistical power for detecting more nuanced differences between groups.
Delivery Format Differences: A significant methodological limitation stems from the different delivery formats for the two groups—ISETs received intensive 2.5-day training while PSETs experienced the same content spread over 6 weeks. This difference may have introduced confounding effects related to cognitive load, knowledge retention, and learning consolidation that were not controlled for in the analysis. The researchers acknowledge this could have impacted the validity of direct comparisons between groups.
Limited Practical Application Opportunities: Both groups had limited opportunities to implement their newly acquired AIPACK in authentic teaching contexts. ISETs participated during summer vacation when schools were not in session, while PSETs were student teachers with restricted teaching responsibilities. The study therefore primarily measured knowledge acquisition rather than practical implementation effectiveness.
Quantitative-Only Approach: The research relied exclusively on quantitative self-report questionnaire data, lacking qualitative insights into participants' experiences, perceived challenges, or the mechanisms underlying the training's effectiveness. Interviews, classroom observations, or artifact analysis could have provided richer understanding of how teachers actually applied their AIPACK.
Ethics Training Gap: The finding that ethical awareness did not significantly improve for either group suggests the 10-minute ethics component was insufficient. The training did not comprehensively address fairness, transparency, accountability, inclusiveness, or provide relevant case studies and regulatory frameworks, representing a content validity concern.
Assessment Timing: The posttest was administered immediately following training completion, providing no information about knowledge retention over time or long-term impact on teaching practices. The durability of the observed improvements remains unknown.
Future Directions
The authors suggest several promising directions for future research:
Longitudinal Studies: Future research should follow participants over extended periods to examine the long-term retention of AIPACK and whether the training produces sustained changes in classroom practices. Studies should track whether teachers actually implement AI-integrated instruction and how their competencies evolve with practical experience.
Authentic Implementation Research: Studies should provide both ISETs and PSETs with structured opportunities to implement AI-integrated instruction in real classroom contexts and examine the relationship between AIPACK knowledge and actual teaching effectiveness. Research could investigate how AIPACK translates into student learning outcomes.
Expanded and Diverse Samples: Future work should include teachers from multiple universities, regions, and countries to establish the model's effectiveness across diverse educational contexts. Larger sample sizes would enable more sophisticated analyses, including examination of moderating variables such as prior technology experience, subject specialization, and demographic factors.
Standardized Delivery Formats: To enable valid comparisons between inservice and preservice teachers, future studies should standardize delivery schedules or explicitly investigate how different delivery formats (intensive vs. distributed) affect learning outcomes, retention, and application.
Enhanced Ethics Training: Given the lack of improvement in ethical awareness, future training models should substantially expand ethics content to comprehensively address fairness, transparency, accountability, bias, privacy, and inclusiveness. Training should incorporate case studies, group discussions on ethical dilemmas (such as using AI for automated essay scoring), and examination of relevant regulations and policies.
Mixed Methods Approaches: Future research should employ qualitative methods including interviews, focus groups, classroom observations, and analysis of lesson plans or instructional artifacts to understand the mechanisms underlying AIPACK development and the contextual factors influencing implementation. This would provide insights into the types of support teachers need to successfully integrate AI.
Teacher Attitudes and Beliefs: Research should examine how teachers' attitudes, beliefs, self-efficacy, and concerns about AI influence their AIPACK development and their willingness to adopt AI-integrated instruction. Studies could explore factors that facilitate or inhibit AI adoption in elementary classrooms.
Curriculum Expansion: Future training models could expand to include additional subjects beyond Chinese, mathematics, and natural science, such as social studies, arts, and physical education, to demonstrate AI's potential across the full elementary curriculum. Research could also investigate specialized applications such as AI integration in special education contexts.
Comparative Effectiveness Studies: Research could compare the AIPACK model to alternative professional development approaches to identify the most effective and efficient training strategies. Studies could also investigate optimal training duration, ideal sequencing of content, and the relative importance of different model components.
Title and Authors
Title: "A model for developing AI pedagogical and content knowledge in inservice and preservice non-STEM elementary teachers"
Authors: Ya-Ching Fan, Bor-Chen Kuo, Pei-Chen Wu, and Chen-Huei Liao
Author Affiliations:
- Ya-Ching Fan: Department of Education, National Taichung University of Education, Taichung, Taiwan
- Bor-Chen Kuo and Pei-Chen Wu: Graduate Institute of Educational Information and Measurement, National Taichung University of Education, Taichung, Taiwan
- Chen-Huei Liao: Department of Special Education, National Taichung University of Education, Taichung, Taiwan
Published On
The article was received on June 3, 2025, and accepted on September 29, 2025. It was published online on October 15, 2025.
Published By
The article was published in Education and Information Technologies, a Springer journal (https://doi.org/10.1007/s10639-025-13813-0), under exclusive license to Springer Science+Business Media, LLC, part of Springer Nature.