Should Schools Disable AI Detection? Lessons from Curtin's 2026 Policy Shift

[Image: university campus building exterior. Universities worldwide are reconsidering their approaches to AI detection in education. Credit: Unsplash]

Introduction

In January 2026, Curtin University in Perth, Australia, made headlines when it announced a significant policy shift: the institution would disable Turnitin's AI detection functionality across all campuses and study periods. The decision sent ripples through the global academic community, reigniting debates about the effectiveness, ethics, and future of AI detection in education.

Curtin's move wasn't made in isolation—it reflected growing concerns among educators and researchers about the reliability and fairness of AI detection tools. But it also raised important questions: Without these tools, how can institutions maintain academic integrity? Is disabling detection a progressive step forward or a dangerous retreat?

This article analyzes Curtin's decision, weighs the arguments on both sides, and explores alternatives for institutions considering similar policy changes.

Understanding Curtin's Decision

The Announcement

Curtin University's Academic Board announced in early 2026 that AI detection would be disabled, citing several factors:

  1. Reliability concerns about the accuracy of detection tools
  2. Equity issues related to higher false positive rates for certain student populations
  3. A desire to focus on education rather than surveillance
  4. Recognition that AI use is becoming normalized in professional settings

The university emphasized that disabling detection did not mean abandoning academic integrity—rather, it signaled a shift toward different approaches.

The Australian Context

Curtin's decision followed broader discussions across Australian higher education about AI detection, including TEQSA's 2025 guidance for providers on artificial intelligence.

[Image: students studying together in a university library. Australian universities are leading discussions on balancing AI detection with educational values. Credit: Unsplash]

The Case for Disabling AI Detection

Accuracy and Reliability Concerns

Critics of AI detection tools point to significant limitations:

False Positive Problems
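The scale of the false positive problem follows from simple base-rate arithmetic. The sketch below uses hypothetical numbers (the cohort size, misuse rate, and detector accuracy are all assumptions, not measured figures for any real tool) to show how even a small false positive rate wrongly flags many honest students:

```python
# Illustrative base-rate arithmetic for AI-detection false positives.
# All rates below are hypothetical assumptions for this sketch.

def flag_counts(n_students, ai_use_rate, sensitivity, false_positive_rate):
    """Return (true_positives, false_positives) for a student cohort."""
    ai_users = n_students * ai_use_rate
    honest = n_students - ai_users
    true_positives = ai_users * sensitivity         # misuse correctly flagged
    false_positives = honest * false_positive_rate  # honest work wrongly flagged
    return true_positives, false_positives

# A cohort of 10,000 students: assume 10% misuse AI, and a detector that
# catches 90% of misuse while wrongly flagging 2% of honest submissions.
tp, fp = flag_counts(10_000, 0.10, 0.90, 0.02)
print(f"Flags from actual misuse: {tp:.0f}")   # 900
print(f"Flags of honest students: {fp:.0f}")   # 180
print(f"Share of flags that are correct: {tp / (tp + fp):.0%}")  # 83%
```

At these assumed rates, roughly one flag in six points at an honest student, and the share of wrong flags grows as the true misuse rate falls.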

Detection Limitations

Independent testing, such as Weber-Wulff et al. (2023), has found that detection tools can be defeated by paraphrasing and that their accuracy drops sharply on hybrid texts that mix human and AI writing.

Bias and Equity Issues

Perhaps the most compelling arguments for disabling detection involve equity:

Non-Native English Speakers

Multiple studies have shown that AI detection tools flag work by non-native English speakers at higher rates. This occurs because:

  1. Many detectors score text on statistical predictability, and non-native writers tend to use more limited vocabulary and simpler sentence structures
  2. These patterns statistically resemble AI-generated text, so entirely original work can be misclassified

Neurodivergent Students

Students with certain learning differences may produce writing that triggers false positives. Autistic students, for example, sometimes write in a more formal, structured, or formulaic style that detectors can mistake for AI-generated text.

First-Generation College Students

Students from backgrounds with less exposure to academic writing conventions may also face higher false positive rates.

Philosophical Arguments

Beyond practical concerns, some argue against detection on principle:

Surveillance Culture

Changing Professional Norms

Assessment Design

The Case for Maintaining AI Detection

The Integrity Argument

Proponents of AI detection emphasize its importance for academic integrity:

Deterrence Effect

Fairness to Honest Students

Maintaining Standards

Practical Concerns About Disabling Detection

Increased Misconduct

Educator Burden

Institutional Risk

Improving Detection Rather Than Abandoning It

Many argue that the solution is better detection, not its abandonment.

Alternatives for Maintaining Academic Integrity

Whether or not institutions disable AI detection, alternatives exist for maintaining integrity:

Assessment Redesign

Process-Based Assessment

Authentic Assessment

Oral Components

In-Class Writing

Educational Approaches

AI Literacy Education

Clear Policies and Communication

Honor Code Emphasis

Hybrid Approaches

Some institutions are adopting middle-ground solutions that pair limited use of detection with the educational and assessment-design approaches described above.

Lessons for Other Institutions

Curtin's decision offers several lessons for institutions worldwide:

1. One Size Doesn't Fit All

Different institutions, disciplines, and contexts may warrant different approaches. A research university might make different choices than a community college; STEM programs might differ from humanities programs.

2. Detection Alone Is Insufficient

Even institutions that maintain AI detection should recognize that it cannot be the sole approach to academic integrity. Detection must be part of a broader strategy that includes education, policy, and assessment design.

3. Transparency Is Essential

Whatever approach an institution takes, clear communication with students and faculty is critical. Uncertainty about AI policies creates anxiety and potential unfairness.

4. Regular Review Is Necessary

As AI technology evolves, policies must evolve too. Institutions should plan for regular review and revision of their approaches.

5. Student Voice Matters

Including students in policy development can improve both the policies themselves and student buy-in for integrity measures.

Conclusion

Curtin University's decision to disable AI detection represents one response to the complex challenges AI poses for academic integrity. Whether one agrees with this choice or not, it has forced important conversations about the role of detection tools, the meaning of academic integrity in the AI era, and the best ways to prepare students for their futures.

The debate is not simply about technology—it's about fundamental questions of trust, fairness, and educational purpose. As AI continues to evolve, institutions will need to continually reassess their approaches, learning from experiments like Curtin's to develop practices that uphold integrity while remaining practical and equitable.

There may be no single right answer. But the willingness to question assumptions and try new approaches is itself a valuable contribution to the ongoing conversation about AI in education.


References

  1. Curtin University. (2026). "Academic Integrity and AI: Policy Updates for 2026." Retrieved from https://www.curtin.edu.au/academic-integrity
  2. TEQSA. (2025). "Artificial Intelligence in Higher Education: Guidance for Providers." Tertiary Education Quality and Standards Agency. Retrieved from https://www.teqsa.gov.au/ai-guidance
  3. Liang, W., et al. (2023). "GPT detectors are biased against non-native English writers." Patterns, 4(7). https://doi.org/10.1016/j.patter.2023.100779
  4. Australian Higher Education Research Consortium. (2025). "AI Detection Policies Across Australian Universities: A Comparative Analysis."
  5. International Center for Academic Integrity. (2025). "Rethinking Academic Integrity in the Age of AI." Retrieved from https://academicintegrity.org/resources
  6. Weber-Wulff, D., et al. (2023). "Testing of Detection Tools for AI-Generated Text." arXiv preprint arXiv:2306.15666.
