“Middlebury College is grappling with the implications of the AI tool ChatGPT for academic integrity, offering workshops for faculty to learn about the tool and decide whether to embrace it in redesigned assignments or add policies banning it to their course syllabi.” That’s the response ChatGPT provided when prompted to “write a one-sentence summary for this article.”
Its first obstacle in creating that summary was the article’s word count — ChatGPT currently accepts inputs of only 500 words. Then it copied phrases directly from the article without alteration. But what it ultimately produced seemed both accurate and compelling, epitomizing the challenge that the new AI tool poses for the world of academia.
ChatGPT, an artificial intelligence tool trained on vast amounts of text and refined through human feedback, can generate conversational responses to a wide range of requests like this one. It can answer essay-style questions about historical events, suggest solutions to coding problems or draft a reply to an email. Released to the public in November 2022 by the AI research company OpenAI, ChatGPT exploded to a record-setting 100 million active users in just two months.
On college campuses, ChatGPT’s sudden debut confronted faculty with a tool that seems to be simultaneously the future of technology and an inexhaustible cheating machine. Few colleges and universities have banned the tool, but several major K-12 school districts have restricted access to the site on school networks, citing concerns about plagiarism and inaccurate information.
Planning for a semester with ChatGPT
At Middlebury, some professors started to address ChatGPT during J-Term. As the spring semester neared and the tool grew in popularity, however, more faculty members started considering what ChatGPT meant for their courses. They approached the new tool in a variety of ways — some embraced AI by redesigning assignments while others added policies banning ChatGPT to their course syllabi.
“There were a number of faculty saying, ‘I want to ban it. I don't want this in my class. It either threatens the integrity of the work that I expect students to do, or I'm just not kind of ready to understand its use in my class,’” said Associate Provost for Digital Learning Amy Collier, who was involved in organizing workshops on ChatGPT for faculty. “And within that context, we talk about things like, what does it look like to ban it?”
These workshops — run jointly by the Office of Digital Learning and Inquiry (DLINQ), the Center for Teaching, Learning and Research (CTLR) and the Writing and Rhetoric Program — were offered as a resource for faculty to learn about the tool and explore the implications of generative AI for their teaching. The first was held in late January and the second in late February, with more planned for the spring semester. DLINQ also offers one-on-one consultations for faculty to discuss their expectations and visions for technology in the classroom, including ChatGPT.
Middlebury has not implemented an institution-wide policy on the use of ChatGPT or other AI tools, instead leaving the question of permission or prohibition up to individual professors. The college handbook mandates that the honor system is reviewed at least once every four years, and ChatGPT may be a topic of discussion in future reviews.
“Right now, we are in a moment of learning about ChatGPT, and more broadly, the potential pitfalls of AI and academic work,” said Jim Ralph, interim vice president for academic affairs and dean of the faculty. “After reflection and discovery we will have to see if we are going to make any adjustments. I would think that we would do so only very deliberately, and at this moment, I don't see a compelling reason to do so, in terms of a broader college policy being issued this semester.”
Associate Professor of Writing and Rhetoric Héctor Vila, who also helped organize the faculty workshops on ChatGPT, said that the tool’s introduction was a concerning moment for many educators.
“When ChatGPT became available to the public, really the first people in a kind of shock were teachers and academics,” Vila said. “Like, ‘oh no, all the students are going to cheat, there goes the essay, what are we going to do?’”
After that initial shock, DLINQ, CTLR and the Writing and Rhetoric Program put together the first faculty workshop to start working through the questions that professors were raising about the future of ChatGPT in their classrooms.
“It was like, ‘This is the end of times,’ or ‘what do I do about it?’” Vila said.
Embracing ChatGPT — and transparency
Vila noted that many faculty — including himself — did not see ChatGPT as wholly threatening to their work. For faculty members who do want to limit its use, there are tools like GPTZero, which scans text and estimates the probability that it was written using AI.
Rather than banning ChatGPT outright, other professors added policies requiring students to disclose when and how they used it. Vila created an assignment in which students first write on their own and then compare their work to what ChatGPT produces.
“I told them, if you find a line or even have a whole paragraph that's better than yours, go ahead and put it in your essay, but cite it [as] ChatGPT,” Vila said. “But then I asked for a little bit extra for them — that at the end of their essays, they write a little reflection of what it was like working with this AI. Was it a challenge, or did they find the writing helpful? Where do they see themselves working with this tool?”
Associate Professor of Political Science Sebnem Gumuscu also said that she did not ban ChatGPT, but rather asked for transparency from students. However, she noted that the writing ChatGPT produces falls short of the quality of work that Middlebury expects from its students.
“This is good from a writing point of view — it is structured well, there are good sentences there,” Gumuscu said, referring to a writing sample produced by ChatGPT. “But then you think about the substance. It's really weak. Shallow. Stale. And it's superficial, boring, not creative. Clearly not human.”
Other limitations, like the fact that ChatGPT is not trained on data from after September 2021, mean that it is not useful for all assignments. Gumuscu, who assigns op-ed-style writing assignments on current events, said that when she prompted the tool to write about the February 2023 earthquakes in Turkey and Syria, it responded with information about the 2020 earthquakes in the region instead.
Reimagining coursework
Many professors saw ChatGPT as an opportunity to reevaluate how students are assessed. Some adjusted assignments to formats that make using ChatGPT more difficult, such as reintroducing written exams or changing exam content.
“We've been thinking about doing this for a while, but it kind of helped bring an additional incentive to shift the exams to being less multiple choice to more applying concepts to different scenarios,” said Assistant Professor of Biology Eric Moody, who is teaching a section of Ecology and Evolution (BIOL 0140) this semester. “Not only do we think it helps emphasize the learning goals better, but it also prevents students just typing in something to ChatGPT and potentially getting an answer.”
Collier said that one of the opportunities of education is the continuous process of refining what it means to learn and to demonstrate learning.
“ChatGPT is creating tension around that. The things that we come to expect students to produce to show their learning are things where ChatGPT could play a role right now,” Collier said.
Moody and Associate Professor of Geography Jeff Howarth both noted that they teach classes with significant coding components that are not the main focus of the subject, and that ChatGPT could allow students to concentrate on the central questions rather than on building the tools to evaluate them.
“If we got to the point where I didn't have to teach them the code, but they could put in a natural language expression and you could make software without having to code — yeah, I think I would be okay with that,” Howarth said. “Because that would just shift me to like, looking at the application.”
“I actually think it's helpful because I don't test students on their ability to write the code, and in fact, learning how to write it is one of the biggest struggles in my class,” Moody said.
Faculty are not alone in their varied approaches to ChatGPT — students, too, have been navigating the role of the new AI tool in their lives.
Tim Hua ’23 said he used ChatGPT for coding in R, a programming language typically used for statistical computing, and was impressed by its rapid and accurate output. Functions that could take significant time to figure out independently were easily generated by ChatGPT.
“It’s like, oh, I just spent 40 minutes making that function myself,” Hua said. “So that's really convenient.”
Hua likened the chatbot to a tutor, because it provides answers and can explain why the code works.
“I think of it, at least right now, like having somebody who's really experienced sitting next to you,” Hua said.
The process of adapting a course to the realities of ChatGPT, however, requires work from educators.
“It's an opportunity that I think feels very overwhelming,” Collier said. “Because it's work, it's stuff that we have to make time for.”
Associate Professor of Computer Science Ananya Das highlighted that this is a particularly challenging moment for educators, not just in higher education but also in K-12 schools. The Covid-19 pandemic pushed teachers to adapt their curricula to remote learning, and as schools have returned to in-person instruction, ChatGPT delivers another jolt that educators need to work into their semester plans.
“I think it’s going to be up to educators to make the best out of this, and that’s a hard thing to ask educators to do right now,” Das said. “We’ve been dealing with a lot of changes, and this is another set of challenges for us to work through.”
But Das also said she was considering how ChatGPT can be incorporated into her courses, especially when she teaches Artificial Intelligence (CSCI 0311) in the fall.
“I’m excited to see what I can do with it, and how I make my courses better,” she said.
Academic integrity
While professors expressed concerns about how ChatGPT would impact cheating at Middlebury, most also believed it ultimately falls on students to take responsibility for their learning.
“I think my general attitude is to encourage self-restraint and self-governance,” Howarth said. “At the end of the day, it's the same thing I do with my kids — try to get people to make good decisions. I don't want to make the decision for them.”
Moody and Howarth said that, as with other resources used while writing or coding, ChatGPT has to be cited. As long as students are honest and do not misrepresent its output as their own work, it could be treated like other sources of evidence.
“In that way, I don't find it any more threatening than a book in the library,” Howarth said.
Students have also found uses for ChatGPT outside academics, such as writing networking emails and drafting cover letters. Tracy Himmel Isham, associate director of professional and career development for the Center for Careers and Internships (CCI), noted that these uses are not much different from older forms of template-based writing, but that they carry notable weaknesses in a job search.
“We used to have a cover letter template, but what we found was that when they all got bundled in Handshake and went to the employer, you had kids who had written the same cover letter,” Himmel Isham said.
She noted that previous changes in technology have also changed the nature of the job search. Whereas students could once write a single resume and cover letter to send to several employers, that type of untailored approach no longer works.
“Over the last decade, we've talked a lot with students about developing your story. And so that's going to be where, you know, ChatGPT doesn't know you uniquely,” Himmel Isham said. “You are unique, and people see through [you] if you're not telling specifically why or how you're qualified with your own context to stories.”
The CCI’s code of conduct, which all students must sign to access the Center’s resources, requires students to represent themselves accurately in all job searches. While the CCI does not currently ban students from using tools like ChatGPT, Himmel Isham noted that students should remain mindful of how much they rely on them.
“When we think about what our code of conduct is, it's about representing yourself properly,” Himmel Isham said. “And that includes how you demonstrate that through your applications, your resume, your cover letter.”
AI and ethics
Professors compared ChatGPT to other tools that — when first introduced — raised similar questions about the role of technology in academia. When Wikipedia launched in the early 2000s, scholars voiced concerns about plagiarism and erroneous material on the website. Tools like Grammarly, which checks for a range of grammar, style and punctuation errors, provide writing aid that most faculty permit.
But faculty saw similar problems with ChatGPT as with the introduction of other tools that toe the line between original and aided work. In past Zeitgeist surveys, students consistently reported using websites like SparkNotes to summarize reading assignments as one of the most common ways they break the Middlebury Honor Code. ChatGPT, which can perform basic research, edit grammar and synthesize key takeaways from readings, brings all of these tools together — in addition to generating essay-length responses to prompts from scratch.
Moreover, the introduction of AI into classrooms has raised issues of equity. In February, OpenAI introduced a subscription service for ChatGPT that offered priority access during peak times and faster responses.
“I've seen Grammarly divide a student body between those that can pay for the top model versus the free model,” Vila said. “ChatGPT has now rolled out the paid model, so it's $20 a month if you can pay — but who’s going to pay $20 a month?”
As with other applications of AI, the biases of the human world leak into tools trained on real-world data. Large language models like ChatGPT train on massive amounts of language data, which include the violent, racist and sexist messages that people share on the internet. Meta shut down its large language model, Galactica, after three days because it produced biased and incorrect information. Earlier versions of OpenAI’s large language models — which were also publicly available, though at a cost — ran into similar problems.
OpenAI took pioneering steps toward reducing these types of biased outputs before releasing ChatGPT, but the chatbot still generates subtle and overt forms of bias. The company has also come under criticism for its use of Kenyan laborers earning less than $2 per hour to comb through the most graphic material on the internet as part of the training that reduced ChatGPT’s harmful responses.
Collier spoke about these issues in a faculty workshop on ChatGPT and noted that such concerns arise repeatedly with AI.
“A real question that you'd be asked about any AI application across our cultural and social contexts is, who is benefiting and who was harmed?” Collier said. “Are we perpetuating the harms that, you know, that we don't want to see?”
To Collier, educating students about how to responsibly use artificial intelligence — in the classroom and beyond — is central to this moment.
“It's going to be part of our lives, whether it's part of their courses here at Middlebury or not, and if it's going to be part of their lives, I want to help them make good decisions,” she said. “I want them to understand not only when it's appropriate to use such tools or not, I also want them to understand what's behind those tools.”
Looking ahead
While ChatGPT is only the latest in a long series of developments in large language models, its success has attracted immense attention and investment. Microsoft has integrated OpenAI’s models into its new Bing Chat — which further illustrated the shortcomings of AI chatbots — and Google has plans to roll out a similar AI search feature in the near future. OpenAI released ChatGPT to improve the technology through feedback, and the massive amount of user input it has received in recent weeks is likely to improve future generations of large language models.
Das said that the introduction of ChatGPT created challenges but was an inevitable step in the field.
“Even if it didn’t happen now, it would happen in a few years — software like this was bound to be introduced,” Das said. “There's going to be so much more technology that we're going to have to kind of adapt to as educators. I think we're just going to kind of learn how to use these tools to enhance our overall learning.”
Questions about plagiarism and intellectual property linger, as ChatGPT inevitably mimics the work that it was trained on.
“For academia, what we're really focused on is you have to give attribution, you have to do your research. You want to have a clear thought that’s your own, you want to develop something through hard work and persistence,” Vila said. “ChatGPT is challenging all that.”
Some professors see preparing students for the future of AI as a key learning goal of the moment.
“My reaction has always been, if the students especially are going to be running up against this, then it's our job — our responsibility — to know what this thing is,” Vila said.
Part of that responsibility is teaching students how to critically evaluate the role of AI as it develops in the coming years.
“We want them to have frameworks for decision making going forward, where they're not just driven by whether or not it's prohibited,” Collier said.
Yet many faculty members do not expect ChatGPT — or anything that will be introduced in the near future — to supplant or surpass the types of work students do. They emphasized that it is a privilege to attend Middlebury, and that it is ultimately up to students to learn and take ownership of the process of their education.
ChatGPT remains an imperfect tool — the one-sentence summary at the beginning of this article needed a few iterations to produce something that wasn’t just a paragraph copied and pasted from further down. It doesn’t follow The Campus’ style guide, and it didn’t understand initial requests written with journalistic jargon to “write a lede” or “write a nutgraf.” However, given quotes pulled from interviews and intentionally crafted requests, it could have written reasonable paragraphs for this article or for any academic essay.
“I don't want to underestimate those continuities with all the challenges that come with technological changes, investments in learning and teaching,” Gumuscu said. “But I also want students to understand we have come this far because we trusted in human intelligence.”
Tony Sjodin ’23 is a managing editor. He previously served as community council correspondent, senior writer, news editor and senior news editor.
Sjodin is majoring in political science with a focus on international and comparative politics. He previously held internships with the Appalachian Mountain Club's Outdoor Magazine, political campaigns in Massachusetts and Vermont, and the U.S. Embassy in Costa Rica's Environmental Hub. Outside of class, he leads kayaking and hiking trips with the Middlebury Mountain Club.