
ChatGPT Use in the Classroom

by Dr. Cora Olson and Nataliya Brantly

Is ChatGPT the greatest thing since sliced bread for education and research, or an instrument for cheating? Is it a tool for learning or a path toward ignorance and illiteracy? Is it an indication of progress in contemporary society, with endless opportunities for automation, or one of the early signs of the devaluation of student-acquired knowledge and skills? These questions seem to imply an answer even as they register the moral panic currently surrounding ChatGPT in academia. In this essay, we hope to shift away from the moral panic to ask a bigger question: How can we, as educators, use chatbots to stimulate deeper engagement with course materials while avoiding the seemingly easy binaries of the moral panic? ChatGPT and AI chatbots like it (such as Google's Bard) present us as educators with a high level of uncertainty: what will or could futures with chatbots look like? Here we share some of our reflections on ChatGPT use in our classrooms.

This Spring 2023 semester, 69 undergraduate students, divided between two classrooms, were asked to submit the first course assignment, a 500-to-600-word essay. The assignment prompted students to reflect on two course materials covered in the classroom while critically engaging the arguments made by the authors and stating their own position. For this assignment, students were given two options. The first was to submit an original piece of work. The second (preferred, and carrying a slight grade benefit) had three steps: submit the essay produced by ChatGPT from the assignment prompt, submit a student-edited essay based on the one ChatGPT produced, and submit a one-paragraph reflection on the experience detailing the changes the student had to make. Having no prior experience with the chatbot or preconceived notions about it, we were quite surprised by the outcome.

Let’s look at the numbers. Prior to the assignment, only 3 of the 69 students claimed to know what ChatGPT was. However, we recognize that students have used services similar to ChatGPT in the past, like Chegg, a multiplatform academic service provider whose offerings range from general homework help to textbook production. Likewise, last semester one of us overheard a student discussing AI generation as a way to complete assignments for the class. As experienced instructors, we felt that taking an oppositional stance toward ChatGPT would be limiting and would lack imaginative capacity.

The general breakdown of submitted assignments is as follows: 17 students (25%) submitted the assignment using the ChatGPT option, 45 students (65%) submitted original work without the use of ChatGPT, and 7 students (10%) did not submit the assignment. It might seem surprising that only 25% of students used the ChatGPT option given the opportunity to earn a slightly higher grade. After grading all the essays, it became clear why so few decided to go the ChatGPT route. First-time use of the ChatGPT chatbot might be daunting, confusing, and outside one’s comfort zone. AI chatbot use for an assignment might be fascinating and thrilling, especially within the safety of the classroom and with the professor’s encouragement. At the same time, it might be quite scary, and it might make one feel like a fraud to use ChatGPT, which has been banned at a European university and in New York City public schools, and is banned or frowned upon by many in education. Additionally, ChatGPT use in most instances is viewed as a violation of the student honor code. AI chatbot use could be a hindrance to one’s original thought or a push to get the creative juices flowing. The reflections below regarding ChatGPT use are based on observations of how ChatGPT met the requirements outlined in the assignment prompt. The surprising lack of uptake may stem from both the difficulty of using ChatGPT proficiently and the internalized moral stigma surrounding this particular type of technology use in academic work. Some students may not be ready or willing to engage with AI generators like ChatGPT.

Quality of Writing

ChatGPT generated well-written essays with clear, vivid, and strong prose. The writing was grammatically accurate, free of errors and duplicated sentences within an essay, and it addressed the correct subject in question. In most instances, ChatGPT-produced writing was of better quality than that of most students, especially in sentence structure and grammatical accuracy. Sentences progressed logically according to the prompt and the course materials chosen by the student. While ChatGPT was able to accurately summarize and connect two course materials in a coherent essay within the specified word limit, it failed to satisfy the requirements for personal reflection and the student’s position on the argument, and it did not bring new ideas to the arguments made in the course materials. This advanced technology produced nonduplicative essays in seconds, but none of the ChatGPT-produced essays provided personalization or satisfied the critical components required. ChatGPT was incapable of the kind of human critical originality that the class requires for A-quality work.

In the assignment, students were asked to identify elements of the author’s argument that were particularly vexing or generative and to construct their own thesis to critically evaluate or extend the argument beyond simple agreement or disagreement. The thesis was a significant component of the assignment that, along with other criteria, determined the student’s grade. The lack of personalization in the ChatGPT-produced essays automatically placed students at a disadvantage, capping their grade at a B or below. This shortcoming indicates that ChatGPT was not able to capture the depth of students’ understanding and capability, or to fully mimic human intelligence at the level required by the assignment. While ChatGPT generally followed the structural format of the assignment rubric, it missed a critical component grounded in one’s lived experiences and related supporting evidence. In other words, students could use ChatGPT, but they could not avoid engaging with the concepts if they wanted a grade higher than a B, because AI language generators lack the capability to build original positions. Students still had to think.

 Plagiarism and Originality

We were particularly interested in whether ChatGPT-produced essays, generated from the same prompt by multiple students, would show duplicated content. We observed that with repeated use of ChatGPT for the same assignment and course texts, some writing patterns and some sentences were repurposed. For instance, a single sentence highlighting the author’s main arguments and sentences summarizing the course text were produced with little variation. Some arguments were reframed but conveyed the same idea. The university’s plagiarism-detection software did not catch most of the AI-generated essays. However, the production of similar material among students using identical course texts eventually triggered the Turnitin software warning. That is, the software for catching cheating had to learn from multiple ChatGPT attempts before triggering. One shortcoming we can identify is that our sample size was rather small, with some variability in the course texts used by different students. This might be indicative of just how difficult it would be to discern ChatGPT-produced writing from a single student among the pool of original works submitted by others. And it is another argument for re-thinking how we can use ChatGPT and other AI generators to get students to engage with course materials.

Some students might feel that a B on this assignment for a ChatGPT-produced essay is sufficient. In that case, students had to spend little time and effort on an essay that captures most of the requirements outlined in the assignment. Yet for those who desire a better grade, there is a need for substantial revision of the ChatGPT-produced essay or for a piece of original work. Revising a ChatGPT-produced essay might be more time-consuming, since the writing must be reworked to reflect the student’s position. Reliance on ChatGPT could lessen the work one is willing to do for university assignments. However, we could also revise our rubric to reflect the necessity of original thought for any grade higher than a C or D. This might have the effect of allowing space for ChatGPT use in drafts while requiring original thought for final submissions. This shift would prioritize original thinking for higher grades and allow students to optimize their prose through ChatGPT.

What our classroom experience demonstrates is that ChatGPT is capable of producing grammatically sound work, but it fails to incorporate human points of view, personal positions, or experience into that work. Further, the inability of plagiarism software to detect AI-produced work poses a significant challenge to academic integrity if we conceive of ChatGPT as a “cheating” technology. Unstructured ChatGPT use, or ignoring ChatGPT use altogether, jeopardizes the value of students’ work and students’ creativity. Such disengaged ChatGPT use provides an additional opportunity for deception in academia, openly available at no cost to those willing to use it in possible violation of the student honor code. However, structured use of ChatGPT and refinement of course rubrics and policies concerning ChatGPT provide space for thinking about ChatGPT as a tool for engagement. It is clear there is a need for more testing and evaluation of ChatGPT in academic settings as educators work to build critical thinking among students and AI chatbots become more sophisticated. Perhaps a place where we can start as educators is asking how we can use ChatGPT in ways that enhance and prioritize originality and critical thinking. For us, we know this will include modifying our assignment rubrics, requiring students to tell and show us how much of their content was generated using AI, and having them cite which AI chatbot was used. Tech for humanity begs us to think through how we can engage with technologies and humanity simultaneously; it might also require us to re-think how we teach the humanities with technologies.

Cora Olson, PhD, Collegiate Assistant Professor
Department of Science, Technology & Society
Affiliate Faculty in Women’s & Gender Studies
Co-organizer, Critical (STS) Pedagogy Group

Nataliya D. Brantly, MPH, MBA.
PhD Candidate, Science and Technology Studies
Department of Science, Technology, and Society
Virginia Polytechnic Institute and State University
