I love learning, and AI is an excellent tool for that—though we must be wary of hallucinations and verisimilitude. I love sharing what I learn, and being a teacher suits that. With the arrival of LLMs, however, learning and teaching are changing. The future might be bright or dark; it’s too soon to say. The transition to that future will, however, be tumultuous, and I fear the virtue of honesty will have been compromised in the process.
The instrumental value of a degree is as a signal that a student:
- was accepted/filtered at the start;
- was conscientious and followed directions for four years;
- learned appropriate knowledge and skills;
- was ranked/filtered at the end.
Item 3, actual learning, is the purported goal of higher education, but it does not necessarily correlate with the other items and is often at odds with or overshadowed by them. This doesn’t mean higher education was without merit. It provided a moment and framework in which learning was encouraged and facilitated. And a college degree had a significant instrumental value in signaling someone’s potential relative to others. At the scale of dozens or hundreds of students in the classroom, the institution worked.
Is traditional education close to ideal? No. The stimulus to learn and its assessment—via essays, research papers, projects, and exams—served well enough for the past century. Having a designated time, peers, and proctor motivates many people. However, given that AI can already complete traditional assignments better than most people, traditional pedagogy is insufficient. Curiously, we need AI-based learning (potent, customized, and patient) to teach students who can then offer value beyond what AI can. Optimistically, we might figure out how to use AI to create educational experiences that deliver learning objectives customized to the learner. If we care about critical thinking, for example, it will no longer suffice to claim an essay assignment does that. We’d want a specific critical thinking assignment. (In all of my courses, I have critical thinking content, with a set of ten exercises done throughout the semester, but I’ve no evidence it substantively improves critical thinking.) Optimistically, everyone will be able to contribute and benefit at whatever level they are capable of. Pessimistically, higher education will only be worthwhile to the few students who can use AI-boosted learning to offer value over what AI itself can do.
There are optimistic and pessimistic long-term implications for society as well. I don’t know if an AI self-improvement loop will kick in during 2027 or 2035. I don’t know if society will be disrupted in five years or ten years after that. When it is, though, if we’re lucky, we’ll have a post-scarcity society like Star Trek, where everyone can be educated to the degree needed to realize their highest calling. Less fortunately, we’ll have WALL-E, where the majority know only what they need to operate their screens and floating recliners. If we’re unlucky, we’ll have ever-widening disparities, where AI is aligned with the interests of the powerful, and it’s dystopia for everyone else.
In the shorter term, though, because LLMs can already do many of the tasks we ask students to do, prohibiting students from using AI will foster a psychology and culture of dishonesty that extends beyond college assignments. I’m holding the line presently with “query, don’t copy” and AI-transparency policies, but in two years, that line will give way. Undergrads will then have spent high school using AI and lying about it. Course modifications, such as oral exams or in-class writing, will not address the underlying need and will be inefficient at scale. Hacks will be counterproductive and easily circumvented; bright students already know to avoid em dashes and to obfuscate AI prose. In a few years, agentic AI will be able to navigate one’s computer and type in a document from outline through drafts. (I suspect I already have students typing in ChatGPT output.) I fear we will not yet have made the necessary reconfiguration of education and will, instead, have created a generation for whom dishonesty is normalized.
- 2025-06-18 Update: See discussion on Reddit r/professors.