From Deterrence to Adaptation: What Should AI Look Like in High Schools?
Chloe is a rising freshman at Duke University who plans to double major in public policy and philosophy. She has been involved with PLATO for two years.
In my AP history class, you'll see rows of students furiously typing on their laptops. You might assume they are taking notes, but if you looked at a student's screen you would actually see vibrant "conversations" with AI models like Bard or ChatGPT.
At first, almost every conversation about an assignment in my history class seemed to devolve into the phrase, "I'll just ChatGPT it." Our Canvas discussion posts confirm this. I would bet money that at least a third of the discussion posts in our classroom consist of staid, robotic writing and include the words "tapestry" and "delve." What's even stranger is that I've only seen this propensity to use AI to complete assignments in this particular history class. Even in English, the class where AI-assisted cheating seems most likely, there's no evidence of AI writing.
I wondered why this was the case and realized it was because our history teacher had completely avoided the conversation about how to use AI. That was a serious mistake. AI should be introduced not as an answer-generating machine but as a versatile study tool. That's how our English teacher suggested we think about AI: as a way to fix grammatical errors and guide our studying.
As a result, when I applied these suggestions to history assignments, I used AI not to retrieve blocks of information but to create flashcards on the major and minor events of various time periods, and to pinpoint the areas where I needed more focused study. Eventually, my classmates and I realized on our own that AI programs could help us plan and prepare for class by drawing up study plans targeting specific areas for improvement. Using AI in this way even improved the quality of our discussions outside the classroom.
All these experiences with AI tools like ChatGPT have me wondering about the future of education. If algorithms can churn out essays and poems in seconds, what's the point of writing them myself? But then I shifted my perspective. What if AI wasn't just a way to quickly create low-quality assignments, as I had been warned, but a tool that we haven't yet been taught to use properly?
This led me to think about other inventions that at first seemed to disrupt education. For example, I participated in Extemporaneous Debate. In this Speech and Debate event, we are asked to create a seven-minute informative speech about a random topic in 30 minutes. In the past, participants researched and printed out hundreds of articles before each competition. Then the internet appeared, giving us the ability to instantly search for anything on any subject. Did people worry that letting debaters look things up more quickly would decrease the value of the event itself?
Or think about the release of calculators like the TI-Nspire, which can handle almost any calculus problem. When they first appeared, math teachers worried that students would lose the ability to solve equations. Instead, these innovations drove changes in curricula: AP math exams now include sections in which calculators can be used and sections in which they cannot. Likewise, speech and debate competitions now allow internet access. Every technological advancement I could think of was eventually integrated into education and ended up changing it. Developments that at first looked like major disruptions turned out to be powerful tools that made education more robust.
If introduced correctly, I think AI could drastically improve the way students learn in the classroom. At home, it can save students study time by helping them create study plans, flashcards, and practice quizzes. It can also push curricula to become more nuanced. The problem isn't the tool itself; it's that no one teaches us how to use it ethically.
Most online "AI study hacks" focus on cheating: avoiding detection and making AI-written essays sound human. If that's all students are exposed to, that's how they will use it. I think classrooms should shift their focus from deterrence to adaptation. AI offers so many possibilities that students can take advantage of. For instance, we can practice active recall and then ask the program what's missing from our answers, or we can ask it to quiz us on what we already know. These practices can enrich the classroom experience writ large. Instead of trying to stop AI use altogether, schools should teach students how to integrate it responsibly and ethically into their learning.