The academic integrity debate around generative AI has largely been framed as a binary choice: ban it or ignore it. Neither approach is working. A more productive question is whether AI can be brought into the pedagogical process in a controlled way — one that actually strengthens academic integrity rather than undermining it.

Retrieval-Augmented Generation (RAG) offers a compelling answer.

The Problem with Current Institutional Responses

Most institutions oscillate among three positions: outright prohibition, detection-based enforcement, or studied neglect. Each has significant weaknesses.

Prohibition is increasingly unenforceable. Detection tools are unreliable and create adversarial dynamics between students and faculty. Neglect simply defers the problem while student practice continues to evolve.

Beneath all three approaches is a confusion that rarely gets addressed directly: students often cannot distinguish between acceptable AI support — editing, summarisation, explaining a concept — and impermissible authoring. Many universities equate all AI-generated content with ghostwriting, yet their policies lag years behind actual student behaviour.

The result is a policy environment that is simultaneously too rigid and too vague to be effective.

What RAG Actually Does

Retrieval-Augmented Generation is a technical approach that constrains what an AI model can draw on when generating responses. Instead of accessing the open internet or its full training data, a RAG-configured system retrieves information only from a defined knowledge base — in an educational context, that means course readings, lecture materials, and syllabus-specified sources.
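To make the mechanism concrete, here is a minimal sketch of RAG-style retrieval over a closed knowledge base, using simple word overlap in place of a real embedding model. The file names and passages are hypothetical course materials, not any particular system's API.

```python
import string

# Hypothetical course knowledge base: the only material the system may draw on.
COURSE_MATERIALS = {
    "week1_reading.txt": "Operant conditioning shapes behaviour through reinforcement and punishment.",
    "week2_reading.txt": "Cognitive load theory concerns the limits of working memory during learning.",
}

def tokenize(text):
    """Lowercase, strip punctuation, and split into a set of words."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def retrieve(question, materials, top_k=1):
    """Rank documents by word overlap with the question; keep only non-empty matches."""
    scored = sorted(
        ((len(tokenize(question) & tokenize(body)), name) for name, body in materials.items()),
        reverse=True,
    )
    return [name for score, name in scored[:top_k] if score > 0]

# Only the retrieved passages would be sent to the model as context;
# a question outside the knowledge base retrieves nothing at all.
print(retrieve("What does cognitive load theory say about memory?", COURSE_MATERIALS))
```

In a production system the overlap score would be replaced by vector similarity over embedded passages, but the constraint is the same: the model only sees what the retriever returns from the defined corpus.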

Research shows that RAG systems substantially reduce hallucinations and are relatively straightforward to implement for institution-specific needs. More importantly for educators, they shift the AI’s role from an unconstrained answer machine to a course-aligned learning assistant.

Three Forms of Control RAG Provides

Content control: Only specific readings and course materials are accessible to the model. A student using a RAG-configured tool cannot simply ask it to write their essay — the model is working from the same literature they are supposed to engage with.

Scope control: The system can be configured to function as a tutor or explainer rather than an essay writer. It can help a student understand a concept from the week’s reading, not produce a submission.

Evaluation control: Interaction patterns become visible learning artefacts. The dialogue between a student and a course-aligned AI system can itself be assessed — revealing whether genuine engagement with the material took place.

Practical Implementation Without Custom Infrastructure

Most institutions do not have the technical infrastructure to deploy full RAG systems immediately. But educators can approximate the core principles using standard tools available today:

  • Require students to upload only module materials when using AI tools — no open internet searches, no external sources
  • Mandate submission of prompts, AI dialogue excerpts, and written reflections alongside final work, making the process visible
  • Design assessments where AI assists initial work but students must critique, challenge, and extend it — the value-add is human, not AI-generated
  • Embed human verification through oral defences or in-class writing components that test whether the student can independently articulate what they submitted
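The process-visibility step in the list above can be approximated today with nothing more than structured logging: each AI interaction is appended to a log the student submits alongside the final work. The field names below are assumptions for the sketch, not a standard format.

```python
import json
from datetime import datetime, timezone

def log_interaction(log, prompt, response, accepted):
    """Record one exchange, including whether the student kept the output."""
    log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        "accepted": accepted,  # did the student accept or reject this output?
    })
    return log

process_log = []
log_interaction(process_log, "Explain reinforcement schedules", "Partial vs continuous ...", True)
log_interaction(process_log, "Write my essay introduction", "[refused by scope control]", False)
print(json.dumps(process_log, indent=2))  # submitted alongside the assignment
```

Even this simple artefact lets an assessor see the dialogue: which prompts were asked, what came back, and what the student chose to do with it.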

A Four-Part RAG-Informed Framework

For institutions ready to move from principle to practice, the framework has four components:

  1. Define permitted knowledge bases explicitly — which readings, which sources, which materials the student is allowed to draw on through AI
  2. Clarify which AI roles are permitted versus prohibited — explanation and summarisation yes, full essay generation no
  3. Require process documentation — evidence of how AI was used, what prompts were submitted, what the student accepted or rejected
  4. Embed human verification — oral defence, in-class writing, or a follow-up session where the student must demonstrate understanding without AI support
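The four components above can be encoded as a machine-checkable course policy, which also makes the documentation requirement enforceable at submission time. Every field name and value here is a hypothetical example.

```python
# Illustrative encoding of the four-part framework as a course AI policy.
COURSE_AI_POLICY = {
    "knowledge_base": ["week1_reading.pdf", "lecture_slides.pdf"],  # part 1
    "permitted_roles": ["explain", "summarise"],                    # part 2
    "prohibited_roles": ["write_essay"],
    "require_documentation": ["prompts", "reflection"],             # part 3
    "human_verification": "oral_defence",                           # part 4
}

def missing_documentation(submission, policy):
    """Return which required process artefacts a submission package lacks."""
    return [item for item in policy["require_documentation"] if not submission.get(item)]

incomplete = {"prompts": ["Explain X"], "reflection": ""}
print(missing_documentation(incomplete, COURSE_AI_POLICY))  # the reflection is empty
```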

Evidence from Practice

Studies from psychology courses, programming education, and medical training demonstrate consistent findings: when RAG-informed systems are implemented thoughtfully, students report higher satisfaction and deeper engagement with course materials than with unrestricted, internet-connected AI tools.

The reason is intuitive. A constrained AI system pushes students back toward the actual course content. It becomes a scaffold for engaging with the material, not a shortcut around it.

The Philosophical Shift

The real argument here is not primarily technical — it is pedagogical. The question should not be whether to use AI, but how to use it responsibly. That shift moves institutions away from detection-based enforcement, which is adversarial and unreliable, toward assessment design and AI literacy, which are sustainable.

When the system is designed well, the student who uses AI within the defined framework ends up more engaged with the material — not less. That is the outcome academic integrity policy is supposed to produce. RAG-informed pedagogy offers a principled, practical path to getting there.


Originally published on LinkedIn Pulse, November 2025.

Dr. Alan Go
DBA · Fractional Education Leader · Rise Education Management

Dr. Alan Go has 30+ years of senior executive experience in Singapore's private education sector, including roles as COO, CEO, and Academic Director.
