Tina Xu, xLab Project Lead and Computer Science PhD Candidate
AI tools like ChatGPT, Gemini and Copilot are now part of everyday academic life. Students use them to brainstorm, clarify concepts, generate examples, improve writing, debug code and turn rough ideas into structured drafts. In the best cases, AI acts like a fast tutor, helping students move from confusion to clarity. In the worst cases, it becomes a silent substitute for thinking, producing polished output while reducing the mental work that builds durable skill.
This tension is becoming one of the defining challenges in higher education. Students need AI literacy and productivity, but learning still needs to be authentic and assessment still needs to be fair. A full ban doesn’t match reality or prepare students for modern work. At the same time, “anything goes” creates inconsistency, confusion and avoidable conflict, especially when instructors are asked to evaluate mastery but can’t easily tell whether a submission reflects student reasoning, AI generation, or a mix of both.
A recent MIT Media Lab preprint makes the learning side of this tradeoff harder to ignore. In controlled writing sessions comparing three conditions (writing without tools, writing with a search engine, and writing with an LLM assistant), the study reports systematic differences in cognitive engagement. EEG-based analysis found that tool use tracked with reduced brain connectivity during the writing task: the brain-only group showed the strongest and most distributed networks, search users showed moderate engagement, and the LLM-assisted group showed the weakest connectivity. The study also reported lower self-reported ownership and a weaker ability to accurately recall or quote what was written among LLM-assisted participants. The authors frame this as “cognitive debt”: the convenience of an AI assistant can reduce the cognitive effort that produces long-term understanding. (arXiv)
This is not an argument against AI. It is a call to design AI use in education so it strengthens learning rather than replacing it, and so it reduces conflict rather than increasing suspicion.
The real problem schools need to solve
When AI shows up in coursework, the hardest issue is often not AI itself, but uncertainty. Instructors may not have clear evidence about how a submission was produced. Students may not have a consistent way to communicate what tools were used and what role those tools played. In that gap, trust gets strained, policies feel inconsistent across courses and the conversation shifts away from learning toward enforcement.
Post-hoc “AI detection” has not solved this. Even when detectors are used in good faith, their verdicts can be disputed, inconsistent across writing styles and populations, and socially costly when applied in high-stakes situations. When trust is on the line, schools need a path that is more reliable, more consistent, and less adversarial.
That is where transparency-by-design becomes important.
The solution direction: watermarking for AI-generated text
Watermarking embeds a subtle, machine-detectable signal into AI-generated text at the moment the text is produced, when watermarking is enabled on the generating model. To a reader, the writing looks completely normal, but a verifier can test whether the watermark signal is present. The purpose is not to punish AI use or label students. The purpose is to make transparency technically feasible, so academic norms can be clearer and avoidable conflict can be reduced. That is the direction xLab has been focusing on.
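To make the idea concrete, here is a minimal sketch in Python of one widely discussed family of schemes, a statistical “green list” watermark. It is not xLab’s implementation or any vendor’s API; the toy vocabulary, the bias probability and the helper names (green_list, generate, detect) are illustrative assumptions. At each step, a hash of the previous token pseudo-randomly marks part of the vocabulary as “green,” the generator quietly prefers green tokens, and a verifier that knows the hashing rule checks whether green tokens appear far more often than chance would allow.

# Minimal sketch of a "green list" statistical watermark for generated text.
# Illustrative only: real systems bias an LLM's token probabilities, not a toy word list.
import hashlib
import math
import random

VOCAB = [f"w{i}" for i in range(1000)]   # toy vocabulary standing in for LLM tokens
GREEN_FRACTION = 0.5                     # fraction of the vocabulary marked "green" per step

def green_list(prev_token: str) -> set:
    """Pseudo-randomly split the vocabulary using a hash of the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    k = int(GREEN_FRACTION * len(VOCAB))
    return set(rng.sample(VOCAB, k))

def generate(length: int, watermark: bool, seed: int = 0) -> list:
    """Sample a token sequence; with watermarking on, strongly prefer green tokens."""
    rng = random.Random(seed)
    tokens = ["<start>"]
    for _ in range(length):
        greens = green_list(tokens[-1])
        if watermark and rng.random() < 0.9:   # quietly bias toward the green list
            nxt = rng.choice(sorted(greens))
        else:                                  # unwatermarked (or occasional unbiased) sampling
            nxt = rng.choice(VOCAB)
        tokens.append(nxt)
    return tokens[1:]

def detect(tokens: list) -> float:
    """Return a z-score: how far the green-token count exceeds what chance predicts."""
    hits = sum(1 for prev, tok in zip(["<start>"] + tokens, tokens)
               if tok in green_list(prev))
    n = len(tokens)
    expected = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std

if __name__ == "__main__":
    print("watermarked z:", round(detect(generate(200, watermark=True)), 1))   # large positive
    print("plain z:      ", round(detect(generate(200, watermark=False)), 1))  # near zero

In a production system the bias is applied to the model’s next-token probabilities rather than a uniform word list, and the detection threshold on the z-score is set so that ordinary human writing is very unlikely to trigger a false positive.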
With watermarking in place, the campus dynamic can move away from “guessing after the fact” and toward consistent, auditable signals that can be handled through clear policy. This supports healthier workflows for both students and instructors. Students can still use AI responsibly for explanation, critique, revision guidance and practice, while remaining accountable for understanding and mastery. Instructors can set expectations that match course goals and spend less time in disputes about provenance. Most importantly, this kind of transparency aligns with what the MIT findings point toward: learning improves when students stay cognitively engaged, and AI should be used to amplify thinking, not replace it.
The skill that will matter most: thinking + tool use
The future is not only about competing with other humans. It is also about operating in a world where AI can produce output instantly. In that world, the differentiator is not who can generate text fastest; it is who can think clearly, ask the right questions, verify claims, make good judgment calls and use tools strategically.
Knowing how to think is the foundation. Knowing how to use AI without giving up that thinking is the multiplier.
That is what higher education is being pushed to confront right now. The goal is not “AI or no AI.” The goal is developing students who can use AI to amplify learning and performance while staying cognitively engaged, because that combination is what separates people who grow with AI from people who are replaced by it.