Article Summary
May 06, 2025

Researchers have developed and validated a reliable learning progression with assessment tools to measure elementary students' understanding of AI concepts, addressing a critical gap in K-12 AI education.

Objective: The main goal of this study was to introduce a novel AI learning progression for upper-elementary students and validate quantitative assessment measures for evaluating students' understanding of core AI concepts, specifically Computer Vision and Machine Learning.

Methods: The research was conducted in two parts:

  1. Development of a hypothetical learning progression for AI concepts based on cognitive analysis of diverse data sources, including the AI4K12 big ideas, student performance on assessment items, classroom activities from prior implementations, and subject matter expert input.
  2. Design and validation of assessment items aligned with the learning progression through Rasch model analysis, examining their psychometric properties to ensure reliable placement of students within the progression (a brief illustrative sketch of the Rasch model follows this list).
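
For readers less familiar with the Rasch model, its dichotomous form gives the probability of a correct response as P(X = 1) = exp(θ − b) / (1 + exp(θ − b)), where θ is a student's ability and b is an item's difficulty, both expressed on the same logit scale. The sketch below is not the authors' code; it uses simulated values shaped like the Computer Vision subscale purely to show how item difficulties land on that scale:

```python
# Minimal Rasch-model sketch with simulated data -- illustrative only,
# not the study's actual responses or estimates.
import numpy as np

rng = np.random.default_rng(0)

def rasch_prob(theta, b):
    """P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Simulate 105 students answering a 10-item subscale (the CV subscale size).
n_students, n_items = 105, 10
theta = rng.normal(0.0, 1.0, size=n_students)      # person abilities (logits)
b_true = np.linspace(-2.0, 2.0, n_items)           # item difficulties (logits)
responses = rng.random((n_students, n_items)) < rasch_prob(theta[:, None], b_true[None, :])

# Crude difficulty estimate from proportions correct (a PROX-style approximation),
# centred at 0 logits as is conventional in Rasch scaling.
p_correct = np.clip(responses.mean(axis=0), 0.01, 0.99)
b_hat = -np.log(p_correct / (1.0 - p_correct))
b_hat -= b_hat.mean()
print(np.round(b_hat, 2))
```

In practice, dedicated software (e.g., Winsteps or the R packages TAM and eRm) estimates abilities and difficulties jointly and reports the item fit statistics referred to in the findings below.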

The study involved 105 upper-elementary students (4th and 5th graders) from six classrooms across urban and semi-urban settings. Two subscales were created: a Computer Vision subscale (10 items) and a Machine Learning subscale (6 items). Differential Item Functioning (DIF) analysis was conducted by grade level and gender.
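
The paper's DIF analysis is Rasch-based; as a rough illustration of the question DIF asks, the sketch below uses a common alternative, the logistic-regression DIF procedure, with entirely hypothetical data (subscale totals, a grade-level indicator, and one item's responses):

```python
# Logistic-regression DIF sketch (Swaminathan & Rogers style) on simulated data.
# This is NOT the Rasch-based DIF procedure used in the study; it only shows
# the core idea: does group membership predict an item response beyond
# overall ability?
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(2)
n = 105
total_score = rng.integers(0, 11, n).astype(float)   # hypothetical subscale totals (0-10)
group = rng.integers(0, 2, n).astype(float)          # e.g., 0 = 4th grade, 1 = 5th grade
# One item's responses, loosely tied to total score for illustration
item = (rng.random(n) < 1.0 / (1.0 + np.exp(-(total_score - 5.0)))).astype(int)

base = sm.Logit(item, sm.add_constant(total_score)).fit(disp=0)
augmented = sm.Logit(item, sm.add_constant(np.column_stack([total_score, group]))).fit(disp=0)

# Likelihood-ratio test: a significant improvement when adding `group`
# flags uniform DIF for this item.
lr_stat = 2.0 * (augmented.llf - base.llf)
print(f"LR = {lr_stat:.2f}, p = {chi2.sf(lr_stat, df=1):.3f}")
```

Whichever procedure is used, negligible DIF is evidence that the items function comparably for the groups being compared.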

Key Findings:

  • The researchers successfully developed a learning progression organized around six concepts related to Computer Vision and Machine Learning, with each concept structured across three levels of increasing complexity.
  • Rasch analysis confirmed that both assessment subscales demonstrated good internal consistency and reliability, with acceptable item fit statistics.
  • The Wright maps (visual representations of both person abilities and item difficulties) showed that most items aligned with the anticipated difficulties defined by the hypothetical learning progression (see the sketch after this list).
  • Some items exhibited unexpected difficulty patterns, suggesting that students may not perceive AI concepts in a strictly linear manner as initially hypothesized.
  • Only two items showed moderate Differential Item Functioning (DIF) across gender or grade level, indicating the assessments functioned fairly across different student groups.
  • The validation confirmed that the assessment tools can accurately place students along the learning progression for AI concepts.
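
As a rough visual aid for the Wright maps referenced above, the sketch below plots simulated person abilities and item difficulties on a shared logit scale; the maps in the paper are built from the validated subscales, not these made-up values:

```python
# Hypothetical Wright-map sketch: person abilities (left) and item
# difficulties (right) share one logit scale. All values are simulated.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
theta = rng.normal(0.0, 1.0, 105)          # simulated person abilities
b = np.sort(rng.uniform(-2.5, 2.5, 10))    # simulated item difficulties

fig, (ax_person, ax_item) = plt.subplots(1, 2, sharey=True, figsize=(6, 4))
ax_person.hist(theta, bins=15, orientation="horizontal", color="steelblue")
ax_person.invert_xaxis()                   # mirror the histogram toward the shared axis
ax_person.set_xlabel("students")
ax_person.set_ylabel("logits")

ax_item.scatter(np.zeros_like(b), b, marker="s", color="darkred")
for i, difficulty in enumerate(b, start=1):
    ax_item.annotate(f"item {i}", (0.05, difficulty), fontsize=8)
ax_item.set_xticks([])
ax_item.set_xlabel("items")

fig.suptitle("Wright map (simulated data)")
plt.tight_layout()
plt.show()
```

When items cluster at difficulties matching their hypothesized levels, as most did in this study, the Wright map supports the ordering of the learning progression; items that land far from their expected level are the ones suggesting students' understanding is not strictly linear.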

Implications: The findings contribute significantly to the field of AI education by:

  • Providing the first empirically validated learning progression and assessment framework for teaching AI concepts to upper-elementary students.
  • Establishing a structured pathway for teaching AI in K-12 classrooms that is developmentally appropriate and builds upon students' prior knowledge.
  • Creating reliable assessment tools that help teachers identify where students are in their understanding and adjust instruction accordingly.
  • Supporting curriculum designers in developing age-appropriate AI educational materials that connect abstract concepts to real-world applications.
  • Addressing the gap in AI education standards for younger learners, as current standards primarily focus on high school levels.

Limitations:

  • The sample size was relatively small (n=105) and lacked racial diversity, as the student population was predominantly white.
  • The assessment items were selected and refined from an existing AI curriculum rather than being specifically designed for the learning progression from the outset.
  • Assessment items received only limited pilot testing with a small sample of students before implementation.
  • The study focused only on two AI constructs (Computer Vision and Machine Learning) rather than covering all five big ideas from the AI4K12 framework.
  • The assessments relied heavily on multiple-choice questions, which may not fully capture the nuances of student understanding, particularly for complex concepts.

Future Directions:

  • Expand validation studies with larger, more diverse samples to enhance the generalizability of the learning progression.
  • Develop additional assessment items for other AI constructs such as Data Collection and Bias.
  • Investigate how different instructional methods and learning environments impact student progression through the learning trajectory.
  • Explore how cultural and contextual factors influence AI education and refine the learning progression accordingly.
  • Design more varied assessment formats beyond multiple-choice questions to capture deeper understanding of complex AI concepts.
  • Examine long-term retention and application of AI concepts as students advance through grade levels.

Title and Authors: "Measuring upper-elementary students' understanding of AI concepts – a Rasch model analysis" by Srijita Chakraburty, Krista D. Glazewski, Cindy E. Hmelo-Silver, Dubravka Svetina Valdivia, Anne Ottenbreit-Leftwich, Bradford Mott, and James Lester.

Published On: The article was accepted on April 9, 2025, following revisions dated May 31, 2024, and March 22, 2025.

Published By: Information and Learning Sciences journal (Emerald Publishing Limited).
