The Middlebury Campus
Tuesday, Dec 24, 2024

AI: Academia on death’s door

In 1997, chess grandmaster Garry Kasparov was defeated by IBM’s “Deep Blue” computing system. Many people had thought this impossible: an AI system, the reasoning went, could never be complex enough to emulate the creative and intuitive processes required in high-level chess. Of course, Deep Blue won the match 3½–2½.

In 2019, over 20 years later, reigning Dota 2 champion esports team OG was defeated by OpenAI’s computer program OpenAI Five. Dota 2 is widely considered one of the most difficult video games ever made, and professional players similarly believed that an AI could not handle its complexity: a chess player averages about one move a minute; a Dota 2 player makes up to two hundred. But when OpenAI Five was released to the public, it posted a win rate of 99.4% over 42,729 total rounds against human teams.

There is a trend here. We can all feel a warm breeze; we can all see the dark clouds on the horizon. We are on the precipice of what will be the single most disruptive technological revolution of our time, perhaps of all time. Its implications are ubiquitous, and for us they bear particularly on the Honor Code. It takes very little critical forethought to come to this realization; so little, in fact, that I assume those who disagree simply choose to do so in spite of overwhelming evidence, largely for their own personal reasons.

To say that a task is impossible for an AI because “only humans are capable of it” is baseless and a unique conceit of human self-importance. It is from this conceit that mankind resisted heliocentrism and evolution, and it may be the same that precludes our acceptance of machines as people in the coming century, à la Blade Runner. The argument that “Humans are uniquely capable of x” has been proven wrong so many times that the question is no longer if a computer could surpass a human in a given field, but when it will.

Hotly debated in our current culture, for example, is whether AI can create “real art,” but that debate is effectively already decided. In 2023, an image generated with DALL-E 2 won a category at the Sony World Photography Awards; the year before, the Colorado State Fair awarded first place in its digital arts category to a painting generated by Midjourney. In both cases, the judges were unaware that the images were AI-generated. If we can’t tell the difference, what difference is there?

While virtually all cases of AI denial are rooted in the aforementioned human chauvinism, it is also safe to say that they are rooted in fear. And that fear is absolutely warranted. Within the next decade, it is reasonable to expect that there will not be a problem a human could solve that a computer could not. That notion alone has staggering implications for all jobs in all industries, and implications in military applications that we cannot, or dare not, even conceive of.

As it pertains to us students, though, much of the academic busywork we do will soon be as redundant as cursive. Soon, there will be no schoolwork that an AI could not complete in perfect emulation of its user. Our academic systems will be compromised by the “smart and lazy,” whose impudent resourcefulness will weaponize AI so that they never again lift an intellectual finger. In the coming years, every institution unwilling to relinquish technology altogether will be forced to stumble blindfolded into the minefield of AI integration.

But AI is much more of a minefield for Middlebury so long as we have the Honor Code. If we were to begin the process of AI integration without forfeiting it, we would have to abandon quantitative performance measures altogether. With imminent AI capabilities, cheating will become exponentially harder to discern until it is practically undetectable. Having both letter grades that matter and an honor code has always been a prisoner’s dilemma, but now, with AI tools, the risk associated with cheating is near zero, while the perceived reward of a better future trajectory is, as it has always been, indispensable.

In other words, if Middlebury intends to continue giving grades that matter to students’ lives after college, no honor code will prevent cheating once cheating is so easy. If Middlebury somehow decided to stop giving grades, there would be no need for an honor code in the first place. In either case, unfortunately, the Honor Code is rendered impotent; at best, it will soon be mutated beyond recognition.

The Zeitgeist 6.0 data on Honor Code violations are egregious to the point of hilarity: 65% of survey respondents reported having broken the Honor Code, almost double the 2019 figure. “Any infraction of the honor system is normally punishable by suspension from the College” (Honor Code, Article III, Section E). What is one to make of the fact that two-thirds of the student body is currently eligible for suspension?

Additionally, the prevalence of AI-related Honor Code violations has doubled since last year, rising to one third of respondents. Look around you: one out of every three people you know is using AI. Maybe their grades are better because of it, maybe not. What is certain is that the efficiency of AI tools is saving them time, more time than you have, time with which they are starting that new club, trying that new hobby, meeting those new people, all while you rot in the library on principle. And when it’s all over, you’ll both be employed, because this is Middlebury. Though one of you may have had a slightly better time getting there… will that be you?

Irrespective of what we do, AI is imminent. Want it, fear it, love it, loathe it — but do not ignore it.
