Should Schools Disable AI Detection? Lessons from Curtin's 2026 Policy Shift
Introduction
In January 2026, Curtin University in Perth, Australia, made headlines when it announced a significant policy shift: the institution would disable Turnitin's AI detection functionality across all campuses and study periods. The decision sent ripples through the global academic community, reigniting debates about the effectiveness, ethics, and future of AI detection in education.
Curtin's move wasn't made in isolation—it reflected growing concerns among educators and researchers about the reliability and fairness of AI detection tools. But it also raised important questions: Without these tools, how can institutions maintain academic integrity? Is disabling detection a progressive step forward or a dangerous retreat?
This article analyzes Curtin's decision, weighs the arguments on both sides, and explores alternatives for institutions considering similar policy changes.
Understanding Curtin's Decision
The Announcement
Curtin University's Academic Board announced in early 2026 that AI detection would be disabled, citing several factors:
Reliability concerns about the accuracy of detection tools
Equity issues related to higher false positive rates for certain student populations
A desire to focus on education rather than surveillance
Recognition that AI use is becoming normalized in professional settings
The university emphasized that disabling detection did not mean abandoning academic integrity—rather, it signaled a shift toward different approaches.
The Australian Context
Curtin's decision followed discussions across Australian higher education about AI detection:
Several Australian universities had already modified their AI detection practices
National conversations about AI in education had intensified throughout 2025
Research from Australian academics had highlighted detection tool limitations
The Tertiary Education Quality and Standards Agency (TEQSA) had encouraged institutions to develop thoughtful AI policies
The Case for Disabling AI Detection
Accuracy and Reliability Concerns
Critics of AI detection tools point to significant limitations:
False Positive Problems
Research has shown that detection tools can flag human-written content as AI-generated
False positive rates, while improved, remain significant enough to cause harm
Individual false accusations can have serious consequences for students
The psychological burden of being wrongly accused can be substantial
Detection Limitations
AI writing tools continue to evolve faster than detection capabilities
Paraphrased or edited AI content often evades detection
Detection becomes less reliable with shorter text samples
Different AI tools may produce content with varying detectability
Bias and Equity Issues
Perhaps the most compelling arguments for disabling detection involve equity:
Non-Native English Speakers
Multiple studies have shown that AI detection tools flag work by non-native English speakers at higher rates. This occurs because:
Writing that follows learned patterns may resemble AI outputs
Simpler sentence structures common in L2 writing can trigger detection
The training data for detection models may not adequately represent diverse writing styles
Neurodivergent Students
Students with certain learning differences may produce writing that triggers false positives:
Highly structured writing approaches
Consistent sentence patterns due to learned strategies
Use of templates or frameworks as accommodations
First-Generation College Students
Students from backgrounds with less exposure to academic writing conventions may also face higher false positive rates.
Philosophical Arguments
Beyond practical concerns, some argue against detection on principle:
Surveillance Culture
Detection tools create an atmosphere of suspicion rather than trust
Students may feel presumed guilty until proven innocent
The adversarial dynamic can damage student-teacher relationships
Changing Professional Norms
AI tools are increasingly used in professional settings
Prohibiting AI entirely may inadequately prepare students for careers
The goal should be teaching appropriate use, not blanket prohibition
Assessment Design
If assignments can be completed by AI, perhaps they need redesigning
Detection addresses a symptom of assessment problems, not their cause
Focus should be on creating assessments that require human engagement
The Case for Maintaining AI Detection
The Integrity Argument
Proponents of AI detection emphasize its importance for academic integrity:
Deterrence Effect
Detection tools discourage inappropriate AI use
Students who know their work will be checked are less likely to misuse AI
The presence of detection supports honest students
Fairness to Honest Students
Without detection, students who use AI inappropriately gain unfair advantages
This disadvantages students who complete work authentically
Detection helps level the playing field
Maintaining Standards
Academic credentials should reflect genuine learning
Employers and society rely on the authenticity of educational achievements
Weakening integrity protections undermines educational value
Practical Concerns About Disabling Detection
Increased Misconduct
Without detection, AI misuse may increase
Some students will exploit the absence of consequences
Academic integrity violations could become normalized
Educator Burden
Without detection tools, educators must identify AI use manually
This is time-consuming and often unreliable
Faculty workload would increase significantly
Institutional Risk
Institutions could face criticism for weakening integrity protections
Employers may question the value of credentials
Accreditation bodies might raise concerns
Improving Detection Rather Than Abandoning It
Many argue that the solution is better detection, not no detection:
Continue improving accuracy and reducing bias
Use detection as one tool among many, not the sole arbiter
Combine technological tools with pedagogical approaches
Train educators to interpret detection results appropriately
Alternatives for Maintaining Academic Integrity
Whether or not institutions disable AI detection, alternatives exist for maintaining integrity:
Assessment Redesign
Process-Based Assessment
Require multiple drafts with evidence of development
Include reflection components about the writing process
Use tools like Turnitin Clarity that track writing evolution
Assess the journey, not just the destination
Authentic Assessment
Connect assignments to real-world, personalized contexts
Require analysis of current or local events
Include personal reflection that AI cannot authentically provide
Design projects that require original data collection
Oral Components
Include presentation or defense elements
Ask students to explain their reasoning verbally
Use oral examinations for high-stakes assessments
Combine written and oral modalities
In-Class Writing
Include timed, supervised writing components
Compare in-class work to submitted assignments
Use writing samples to establish baseline capabilities
Balance convenience with controlled conditions
Educational Approaches
AI Literacy Education
Teach students about AI capabilities and limitations
Discuss ethical considerations openly
Help students understand why authentic work matters
Prepare students for professional AI use
Clear Policies and Communication
Develop detailed, specific AI use policies
Communicate expectations clearly for each assignment
Provide examples of acceptable and unacceptable practices
Create safe spaces for questions about AI use
Honor Code Emphasis
Strengthen institutional honor codes
Emphasize student responsibility for integrity
Create peer accountability mechanisms
Build a culture of academic honesty
Hybrid Approaches
Some institutions are adopting middle-ground solutions:
Selective detection: Using AI detection for high-stakes assignments only
Transparent detection: Sharing detection results with students as educational tools
Optional detection: Allowing departments to choose whether to use detection
Verification processes: Using detection flags to trigger additional review rather than automatic consequences
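The verification-process idea above can be sketched as simple triage logic: a detection flag routes a submission to human review or an educative conversation rather than triggering an automatic finding. This is a minimal illustration, not any real tool's API; the thresholds, field names, and review categories are all hypothetical assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class ReviewAction(Enum):
    NO_ACTION = "no action"
    EDUCATIVE_CONVERSATION = "educative conversation"
    HUMAN_REVIEW = "human review panel"

@dataclass
class Submission:
    ai_score: float    # hypothetical detector confidence, 0.0-1.0
    high_stakes: bool  # e.g. a capstone or final assessment
    word_count: int

def triage(sub: Submission) -> ReviewAction:
    """Route a detection flag to review instead of automatic penalty."""
    # Detection is unreliable on short samples; never escalate on them.
    if sub.word_count < 300:
        return ReviewAction.NO_ACTION
    # Strongly flagged high-stakes work goes to a human panel,
    # never to an automatic misconduct finding.
    if sub.high_stakes and sub.ai_score >= 0.8:
        return ReviewAction.HUMAN_REVIEW
    # Moderate flags become a teaching moment, not an accusation.
    if sub.ai_score >= 0.5:
        return ReviewAction.EDUCATIVE_CONVERSATION
    return ReviewAction.NO_ACTION
```

The key design choice is that no branch produces a penalty: every outcome is either no action or a conversation involving a human, which is what distinguishes a verification process from automated enforcement.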
Lessons for Other Institutions
Curtin's decision offers several lessons for institutions worldwide:
1. One Size Doesn't Fit All
Different institutions, disciplines, and contexts may warrant different approaches. A research university might make different choices than a community college; STEM programs might differ from humanities programs.
2. Detection Alone Is Insufficient
Even institutions that maintain AI detection should recognize that it cannot be the sole approach to academic integrity. Detection must be part of a broader strategy that includes education, policy, and assessment design.
3. Transparency Is Essential
Whatever approach an institution takes, clear communication with students and faculty is critical. Uncertainty about AI policies creates anxiety and potential unfairness.
4. Regular Review Is Necessary
As AI technology evolves, policies must evolve too. Institutions should plan for regular review and revision of their approaches.
5. Student Voice Matters
Including students in policy development can improve both the policies themselves and student buy-in for integrity measures.
Conclusion
Curtin University's decision to disable AI detection represents one response to the complex challenges AI poses for academic integrity. Whether one agrees with this choice or not, it has forced important conversations about the role of detection tools, the meaning of academic integrity in the AI era, and the best ways to prepare students for their futures.
The debate is not simply about technology—it's about fundamental questions of trust, fairness, and educational purpose. As AI continues to evolve, institutions will need to continually reassess their approaches, learning from experiments like Curtin's to develop practices that uphold integrity while remaining practical and equitable.
There may be no single right answer. But the willingness to question assumptions and try new approaches is itself a valuable contribution to the ongoing conversation about AI in education.
References
TEQSA. (2025). "Artificial Intelligence in Higher Education: Guidance for Providers." Tertiary Education Quality and Standards Agency. Retrieved from https://www.teqsa.gov.au/ai-guidance
Australian Higher Education Research Consortium. (2025). "AI Detection Policies Across Australian Universities: A Comparative Analysis."
International Center for Academic Integrity. (2025). "Rethinking Academic Integrity in the Age of AI." Retrieved from https://academicintegrity.org/resources
Weber-Wulff, D., et al. (2023). "Testing of Detection Tools for AI-Generated Text." arXiv preprint arXiv:2306.15666.