By Cassie Finley, PhD Candidate in Philosophy, University of Iowa

Since the launch of ChatGPT on November 30th, 2022, there has been widespread discussion among educators about what this new technology means for education. OpenAI’s ChatGPT is a large language model, which means that it was trained on massive amounts of existing written work and uses those patterns to generate new writing that mirrors human language. The result is that a user can ask ChatGPT a question and receive a new, topical, grammatically correct, and potentially (though not reliably) accurate answer. So, for example, I asked ChatGPT in what ways large language models pose a challenge to education and received the following answer:

Large language models like GPT-3 have the ability to generate human-like text, which could potentially make it easier for people to create convincing fraudulent or misleading content. This could be a challenge for education, as it may become more difficult for students and educators to discern credible sources of information from those that are not. It is important for students and educators to continue to practice critical thinking and evaluate the credibility of sources, even when those sources are generated by advanced technology.
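(A side note for the technically inclined: at the time of writing, ChatGPT itself is only available through OpenAI’s web interface, but its close sibling models can be queried programmatically. Below is a minimal sketch using OpenAI’s Python library with the GPT-3 completion model text-davinci-003 as a stand-in; the model choice and prompt wording are illustrative assumptions on my part, not part of my classroom setup.)

```python
# Minimal sketch: asking a GPT-3-family model the same question
# programmatically with OpenAI's Python library (pip install openai).
# ChatGPT itself has no public API at the time of writing, so the
# sibling completion model "text-davinci-003" stands in here.
import openai

openai.api_key = "YOUR_API_KEY"  # replace with your own OpenAI key

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="In what ways do large language models pose a challenge to education?",
    max_tokens=256,    # cap the length of the generated answer
    temperature=0.7,   # higher values produce more varied prose
)

print(response.choices[0].text.strip())
```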

As ChatGPT’s answer notes, the general ability to write like a human does not entail the ability to produce accurate written work. Beyond this concern, though, the most immediate and obvious problem within educational contexts is that because ChatGPT actually creates new sentences rather than copying existing ones, existing plagiarism detectors are useless for determining whether a piece of writing was produced by a student or by ChatGPT. The practical worry is that the majority of existing writing-based assignments, essays, and assessments completed outside of class can be outsourced to this technology. The technology also raises a more fundamental challenge, however, concerning the purpose and value of assignments and assessments in the first place, given that so many of them can seemingly be outsourced.

Accordingly, ChatGPT has elicited a bit of an existential crisis for education, prompting educators to radically adjust course assignment structures. To address the practical concerns, some have proposed moving to oral or in-class exams (both of which seem to be particularly rare in philosophy classes), avoiding the technological problem by removing the contexts in which students could take advantage of the software. Others intend to use one of the various AI-generated-text detectors, effectively as supplements to existing plagiarism detection tools, and simply expand course policies to explicitly prohibit the use of text generators.

However, these suggestions do not really engage with the fundamental, conceptual challenge ChatGPT poses to education: what is the point of having students write something when ChatGPT could write it instead? One suggestion is to actively incorporate ChatGPT into the class structure, analogous to the way mathematics courses had to adjust to the invention of (and subsequent significant advances in) calculators. For example, because ChatGPT is factually unreliable, instructors might have students ask ChatGPT a research question, then research and fact-check the answer using legitimate, reliable sources. ChatGPT also performs particularly poorly when evaluating its own answers, so I have seen one suggestion that students ask ChatGPT (what would otherwise be) exam questions, then grade the answers and justify those grades based on course content and what the answer should have said. Similarly, someone in a Teaching Philosophy Facebook group suggested that philosophy instructors use ChatGPT’s answers to philosophical questions in class as a way to encourage students to critically evaluate the philosophical limitations of this new technology.

Following some of these suggestions for explicitly incorporating ChatGPT into the classroom, I tested it out in two of my philosophy courses. For context, the courses were once-weekly, one-hour, online introductory philosophy classes for students ages 10-13. Before the ChatGPT session, we had discussed a variety of philosophical topics, including epistemology, personal identity, philosophy of mind, philosophy of time and time travel, ethics, metaethics, and friendship. Having already discussed multiple philosophical questions and potential answers, we spent a class with ChatGPT shared on-screen for all to see, taking turns asking it philosophical questions we had covered earlier in the course. The goal was to evaluate the responses in light of what the students had learned over the semester.

At first, the students were a mixture of excited, impressed, and deeply intimidated by the answers from ChatGPT. One student remarked that they didn’t believe they could ever write anything as good as ChatGPT’s answer! However, with a little reflection and guidance, they realized how often the answers failed to say much of anything. Many of the answers described what “some people may say,” strung together weak claims without giving reasons to support them, or failed to take a substantive position at all–and the students were quick to realize that these are failures of precisely what makes a philosophical answer good.

Soon the students were identifying the problems and limitations of each answer and proposing alternative considerations that ChatGPT should have discussed. One question that proved particularly illustrative of this philosophical weakness was whether time travel is possible. ChatGPT answered with a few lines about how–“according to science”–time travel is not possible because we do not have the technology or scientific knowledge that would allow us to time travel. Having spent the semester emphasizing the importance of clarifying language in philosophy, including a full class discussing David Lewis’ “The Paradoxes of Time Travel,” the students were well-versed in how the answer depends on what you mean by “possible”; in one sense, ChatGPT’s answer was correct–as far as we know, it’s not physically possible to travel through time with our current technology. Really, though, that’s not an interesting answer to the question we were asking: it is rather like saying that at one time it was not possible for people to talk on the phone, because phones didn’t exist. And yet even rephrasing the question would not lead ChatGPT to do the conceptual work of distinguishing different senses of “possibility,” because what large language models lack is precisely the philosophical creativity that attends to meanings rather than merely to words.

Because we had already talked about making conceptual distinctions like this, the students were able to apply their understanding from previous classes’ discussions to evaluate, and ultimately articulate, better philosophical answers than the superficially impressive ones from ChatGPT. This, in turn, meant students could draw on the philosophical skills and ideas they had been developing over the semester to collaboratively engage with and challenge this technology.

A recurring challenge in teaching philosophy is that students tend either to be overly argumentative or to still be developing the confidence to challenge others’ ideas, and I found this activity particularly useful for practicing healthy philosophical collaboration, since the students built upon one another’s suggestions for what a better answer should have incorporated. Meanwhile, those who might hesitate to contribute in class for fear of not knowing the “right answer” were emboldened to speak up, since they were already familiar with some ways of answering the questions. Additionally, I think some students’ hesitation to engage critically with philosophical ideas stems from social concerns–a sense that disagreement itself is unkind or likely to hurt another’s feelings. Similarly, some (particularly introductory) students can be overly deferential toward historical philosophers, which likewise makes them hesitant to engage critically with those philosophers’ ideas. However, since ChatGPT is neither a “famous philosopher” nor another person in the class, neither hesitation applies, and students can practice critical engagement with greater confidence.

Ultimately, technological innovations often disrupt social practices, and that disruption forces us to reflect on the fundamental values underpinning those practices. ChatGPT’s disruption of education is an opportunity to reflect on the purposes of our assignments and how best to fulfill those goals. As mentioned, there are (surely imperfect) technologies that may directly address concerns about new forms of plagiarism, and there may be ways to remove students’ opportunities to abuse this technology, but we can also use the technology to innovate in our classrooms. For philosophy in particular, ChatGPT’s limitations give instructors traction in demonstrating the value of studying philosophy: it is that much more salient that philosophy cannot be outsourced to technology.

Feel free to share in the comments how you plan to respond to ChatGPT in your classes!

