An AI-enhanced Anti-Plagiarism Literacy Practices programme significantly improves undergraduate students' understanding of plagiarism, reduces plagiarism behaviors, and enhances writing quality through human-AI collaboration.
Objective: The main goal of this study was to evaluate the effectiveness of an Anti-Plagiarism Literacy Practices (APLP) programme, which incorporates a customized AI tool called the Academic Writing System (AWS), in supporting undergraduate students' anti-plagiarism literacy. Specifically, the researchers investigated how the programme affects students' perceptions, attitudes, and behaviors related to plagiarism prevention in academic writing. The study sought to address research gaps in how to teach plagiarism prevention effectively within academic writing practice and how to leverage AI technology to support anti-plagiarism learning, moving beyond traditional "detect-to-punish" approaches toward educationally focused "educate-to-learn" strategies.
Methods: The researchers employed a design-based research methodology conducted across three iterative stages involving 167 undergraduate students and their instructors from a public university in China. The first two stages focused on optimizing the APLP programme and refining the AWS tool functionalities through pilot testing with 60 students. The third stage utilized a quasi-experimental design with 107 students (55 experimental, 52 control) to assess the programme's effectiveness. The APLP programme was grounded in Teaching for Understanding and Group and Learning Dynamics theories, incorporating six structured activities: draft creation with AWS support, analysis of writing examples with varying plagiarism levels, peer evaluation using AWS, in-person plagiarism discussions, instructor-led lessons on plagiarism detection, and reflective revision. The AWS tool featured four customized modules: Smart Literature Review (acting as an "AI-tutor"), Writing Support, Peer Interaction, and Smart Plagiarism Analysis. Data collection included the "Perceptions of Plagiarism Survey" (13 items using 6-point Likert scales), semi-structured interviews with 50% of participants and instructors, and coursework analysis using a novel "Plagiarism Assessment Scale" and writing quality assessment across four dimensions (Task Response, Coherence and Cohesion, Lexis and Language Use, and Literature Citation). Statistical analyses included Mann-Whitney U tests and independent t-tests to compare group differences.
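As a brief illustration of the kind of group comparison named in the Methods, the sketch below runs a Mann-Whitney U test on hypothetical 6-point Likert responses and an independent t-test on hypothetical writing-quality scores using SciPy. The data, variable names, and sample sizes are invented for illustration only; this is not the study's data or analysis code.

```python
# Illustrative sketch (not the study's data or code): comparing an
# experimental and a control group as described in the Methods,
# using a Mann-Whitney U test for ordinal Likert-scale survey items
# and an independent t-test for continuous writing-quality scores.
from scipy import stats

# Hypothetical 6-point Likert responses to one survey item
experimental_item = [5, 6, 4, 6, 5, 6, 5, 4, 6, 5]
control_item = [3, 4, 4, 5, 3, 4, 3, 5, 4, 3]

u_stat, u_p = stats.mannwhitneyu(
    experimental_item, control_item, alternative="two-sided"
)
print(f"Mann-Whitney U = {u_stat:.1f}, p = {u_p:.3f}")

# Hypothetical writing-quality scores out of 100
experimental_scores = [79, 82, 75, 80, 77, 81, 78, 76, 83, 79]
control_scores = [67, 65, 70, 66, 68, 64, 69, 67, 66, 68]

t_stat, t_p = stats.ttest_ind(experimental_scores, control_scores)
print(f"Independent t = {t_stat:.2f}, p = {t_p:.3f}")
```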
Key Findings: The study revealed significant positive impacts across multiple dimensions of anti-plagiarism literacy practices. In terms of perception changes, students in the experimental group improved substantially at identifying implicit plagiarism types, with significant differences from the control group in recognizing idea plagiarism, image plagiarism, lack of source citation, and patchwork plagiarism. However, an unexpected finding was that experimental group students showed decreased agreement with the importance of avoiding plagiarism (Item 3), which may reflect a recalibration of understanding rather than diminished concern. Regarding behavioral changes, the experimental group showed marked improvement in actual writing practice: the average maximum number of consecutively copied Chinese characters decreased from 85 to 50 (compared with 85 to 75 in the control group), shifting their plagiarism level from "moderate" to "minor." Writing quality also improved substantially, with experimental group scores rising from 58 to 79 out of 100, particularly in Coherence and Cohesion and Literature Citation, while the control group improved modestly from 58 to 67. Attitudinal changes were evident in qualitative feedback, with students expressing increased confidence and understanding, for example: "It has provided me with valuable tips for my coursework, and I believe more practice will help me improve my anti-plagiarism skills and writing." The participating instructor's perspective also shifted markedly, from viewing plagiarism as "serious yet frequently overlooked" to "valuing and actively addressing" it, with a strong stated intent to integrate the programme into future courses.
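The behavioral metric reported above, the maximum number of consecutively copied characters, can be pictured as the longest character run a draft shares verbatim with a source text. The sketch below makes that assumption explicit; the function, its name, and the example strings are hypothetical and do not reflect how the AWS tool actually performs its plagiarism analysis.

```python
# Minimal sketch, assuming the reported metric corresponds to the
# longest run of characters copied verbatim from a source text
# (i.e. the longest common substring). Purely illustrative.
def max_consecutive_copied(draft: str, source: str) -> int:
    """Length of the longest character run shared by draft and source."""
    best = 0
    # Dynamic-programming row: prev[j] holds the length of the common
    # run ending at draft[i-1] and source[j-1].
    prev = [0] * (len(source) + 1)
    for i in range(1, len(draft) + 1):
        curr = [0] * (len(source) + 1)
        for j in range(1, len(source) + 1):
            if draft[i - 1] == source[j - 1]:
                curr[j] = prev[j - 1] + 1
                best = max(best, curr[j])
        prev = curr
    return best

# Hypothetical usage with short Chinese strings
source = "学术诚信是高等教育的基石，抄袭行为会损害学术共同体的信任。"
draft = "我认为学术诚信是高等教育的基石，因此我们应当避免抄袭。"
print(max_consecutive_copied(draft, source))  # -> 13
```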
Implications: These findings have significant implications for academic integrity education and AI integration in higher education. The study demonstrates that AI can be effectively leveraged not just for plagiarism detection but as a constructive educational tool that supports learning and skill development. The human-AI collaboration model presented offers a framework for ethical AI implementation in education, where AI handles "facts" (data analysis, feedback provision) while humans manage "values" (guidance, oversight, evaluation). For educational practice, the research provides evidence that systematic, process-based anti-plagiarism instruction integrated into regular coursework is more effective than traditional punitive approaches. The APLP programme's success suggests that students benefit from experiencing the plagiarism identification and prevention process firsthand rather than simply learning about it theoretically. The study also contributes to understanding how AI can address educational scalability challenges, enabling instructors to provide more comprehensive plagiarism prevention education despite time and resource constraints. From a policy perspective, the research supports shifting institutional approaches from reactive "detect-to-punish" strategies toward proactive educational interventions that build student competencies. The findings also highlight the importance of developing students' critical evaluation skills for AI-generated content, preparing them for an AI-integrated academic environment.
Limitations: The study acknowledges several important limitations that affect the generalizability and scope of findings. The sample size was relatively small (167 total participants, with only 107 in the final quasi-experimental stage), limiting the statistical power and generalizability of results. The study was conducted exclusively within education programmes at a single Chinese university, raising questions about applicability across different disciplines, cultural contexts, and educational systems. The intervention duration was brief, preventing assessment of long-term retention and sustained behavior change in anti-plagiarism practices. The research relied heavily on self-reported data through surveys and interviews, which may introduce response bias and social desirability effects. The novel "Plagiarism Assessment Scale" developed for this study, while validated by instructors and experts, lacks extensive psychometric validation across diverse contexts. The study did not examine potential variations in effectiveness across different student characteristics such as academic ability, prior writing experience, or technological proficiency. Additionally, the research focused specifically on Chinese-language plagiarism detection (using Chinese character counts), which may not translate directly to other languages with different structural characteristics. The control group received no intervention rather than an alternative treatment, making it difficult to isolate the specific contributions of AI components versus general anti-plagiarism instruction.
Future Directions: The researchers suggest several promising avenues for future investigation to address current limitations and expand understanding of AI-supported anti-plagiarism education. Future studies should involve larger, more diverse samples across multiple institutions, disciplines, and cultural contexts to improve generalizability and examine potential variation in programme effectiveness. Longitudinal research designs extending over full academic terms or years would provide insights into long-term retention of anti-plagiarism skills and sustained behavior change. Research should explore the programme's effectiveness across different student populations, including international students, graduate students, and students with varying levels of academic preparation. Comparative studies examining different AI tool configurations and human-AI collaboration models would help optimize the balance between automated support and human guidance. Future work should investigate the programme's adaptability to different disciplines, which may have varying citation conventions, writing styles, and plagiarism concerns. Cross-linguistic studies would examine how the approach translates to different languages and writing systems beyond Chinese. Research should also explore the development of more sophisticated AI capabilities, such as detecting conceptual plagiarism and providing more nuanced feedback on writing quality. Studies examining instructor training needs and implementation challenges would provide practical guidance for programme adoption. Additionally, research into the ethical implications of AI-mediated plagiarism education, including privacy concerns and algorithmic bias, would ensure responsible implementation. Investigation of cost-effectiveness and scalability factors would inform institutional decision-making about programme adoption and sustainability.
Title and Authors: "Enhancing anti-plagiarism literacy practices among undergraduates with AI" by Yin Zhang, Yonghui Liu, and Xinghua Wang.
Published On: April 21, 2025
Published By: Interactive Learning Environments (Taylor & Francis Group)