The Moral Imperative to Teach AI
Educators will determine whether GenAI is pernicious or a panacea
This was a serious “wtf google?!” moment for me. Here I was, searching for third-party testing of protein powders because I don’t want heavy-metal poisoning, and my results included an advertisement to use the Google app to cheat on my homework. There’s not even a pretense of learning involved, just an easy way to get answers.
This frustration and outrage are familiar and tiresome to educators. Not that cheating is new, but the ease with which it can now be done is alarming. It’s no wonder that across educational ecosystems the conversation about AI most frequently centers on how to prevent cheating and how to redesign assessments. In short, the question seems to have become “How do we keep AI out of the classroom?”
This is a mistake.
Not because the ship has sailed and students are going to use AI anyway (although that is true), nor because AI is the pathway to human superpowers (it will enable incredible things, but superpowers are not among them). It’s that, like all technology, AI’s profound benefits come paired with costs, and the greater the capability, the steeper the price of mindless use.
Just as using search engines produces the “Google effect”1, generative AI is already showing measurable effects on reasoning, learning, and critical thinking. AI will bring phenomenal change, but without proper instruction, it will also bring a tsunami of cognitive decline.
Educators have a moral imperative to teach their students how to use AI.
Adaptation to technology: a tale as old as humankind
Societal transformation driven by widespread technology adoption has been happening throughout human history. We adopt each transformative technology for its immediate benefits while fretting about the downstream consequences. Those who adopt early see the biggest short-term rewards while facing the risk of being the first to discover the consequences2, and typically the benefit and the risk are inextricably linked.
Let’s take the tractor as an example. Agriculture had already seen massive gains in efficiency through the use of livestock as labor. But maintaining livestock is itself physically grueling work, so while there were gains in the amount of land a farmer could manage, a great deal of physical labor was still required of the farmer.
The tractor brought a cascade of changes. Operating a tractor is significantly less physically demanding than working a team of oxen, and tractors can aid in a wide variety of tasks, from plowing to planting to harvesting. This has been a great benefit to farmers, not only increasing their productivity but also significantly reducing the rate of injuries3. Of course, the decrease in injuries comes alongside a significant decrease in physical activity, which in turn has been a driver of rising rates of obesity among farmers (as with the rest of the US population).
On the individual level, farmers adapted and no longer developed the musculature that came with more manual methods of farming. On the population scale, farmers have seen an increase in obesity similar to that of the US population overall, though because farming remains more active than a desk job, their rates stay below the population average.
Adapting to internet-based technologies looks different because the type of work they replace is different. But just as taking away physical labor decreased physical conditioning amongst farmers, decreasing cognitive labor can decrease cognitive capacity amongst knowledge workers.
Starting from here, now
How we adapt to technology is a function of what the technology does well. The rise of GPS-based navigation to near ubiquity has generally impaired our spatial cognition (though, interestingly, it does not seem to impair one’s sense of direction)4. The rapidity and ease with which Google finds information has resulted in a shift from remembering the knowledge we find to remembering how we found it5.
Many people I speak to are, in fact, terrified by AI. This fear is fueled by the feeling that they do not fully understand what the impact will be, and by the accompanying sense of powerlessness to mitigate the consequences. Of course, if you felt like your world was going to be turned upside down and you didn’t know what to do about it, you would be scared too.
Carl Sagan said, “You have to know the past to understand the present.” I add, “You have to know the present to understand the future.”
Here is what we know today about how we adapt to generative AI: there is clear evidence of GenAI affecting our learning, reasoning, critical thinking, creativity, and metacognition (the awareness and regulation of your own thinking). GenAI is even changing how we search for and verify knowledge, in ways that go beyond traditional search engines6.
I don’t think any one of these changes is particularly surprising, but the sheer breadth and potential depth are certainly cause for pause.
Generative AI is not as smart or creative as you think
In addition to the specific effects below, there are two macro effects we should keep in mind. First, we have put AI on a pedestal of superiority, so much so that people trust AI reasoning over their own, even when the AI is wrong and they are right. This is likely in part because, by design, AI-generated explanations are compelling. That is innocuous until you are navigating dis- and misinformation: AI-generated explanations of misinformation can amplify belief in false news headlines and undermine belief in truthful ones. In the best case, you are simply less skeptical of information when it comes from AI, and therefore more vulnerable to falsehoods.
The second macro effect to keep in mind is that LLMs, such as ChatGPT and Claude, are fundamentally convergent. Great leaps in innovation come from divergence from the prevailing thought, not reinforcement of it. While many debate the role of AI in the creative process, its applications are limited by its convergent nature. AI today can be a tool for creative execution, but it lacks the divergence necessary for innovative creativity.
AI gives the illusion of superiority and creativity because it is very good at the specific tasks we’ve trained models for. Even if you’ve never used a dedicated AI answer engine, such as Perplexity, you have almost certainly seen an AI summary of your search results.
The almost unavoidable: AI-synthesized search results
Whether you still turn to Google or Bing or have turned your queries to Perplexity, it’s nearly impossible7 to avoid AI synthesis of search results.
In the past, whether you were cognizant of it or not, you vetted your sources as you selected which results to click and review to find your answer. AI summaries, by contrast, abstract that away. It’s one thing to know there’s a risk of an unreliable answer; the question is, how can you tell?
That is, if you ask yourself that question at all. You are more likely to trust the AI summary if it matches your pre-existing beliefs, regardless of factual accuracy. And because you likely believe you are looking at a balanced synthesis of many sources, your trust in that misperception runs high. AI answers abstract away the familiar flags you have come to rely on to know when to dig deeper.
And dig deeper you should. Across all of the major AI-answer platforms, there is a tendency to rely extensively on just one to three of the cited sources8. You may see seven sources cited, but information from all of them is rarely included. So what looks like a well-balanced summary may in fact lean heavily on a single source, and that source may not be a reliable one.
The combination of displaying unused sources and the high-conviction phrasing of these summaries effectively bypasses many of our natural mechanisms for skepticism. You think you are getting a convenient, quality answer, but in fact you’re getting a convenient answer of dubious quality.
The flaws described above are resolvable with further technology development, but another limitation will remain. Remember my comments about the convergent nature of AI? These summaries, by their very nature, capture the majority perspective. But the majority perspective is not the only perspective, and it certainly isn’t always the most truthful. Traditional search engines display the dissonance of perspectives as you scroll a page of results; cleanly written, compelling summaries do not give us the same benefit. In giving up the ability to see dissonance, we further reinforce the echo chambers we already experience on social media.
Losing visibility into dissonant views is one type of perspective loss. What happens when we lose perspective on our own depth of understanding?
Learning with AI
There’s a big gap between getting the right answer and knowing how to solve a problem; between writing a good essay and being a great writer. Much of the learning benefit currently attributed to AI is of this nature: improvement in short-term performance without retention of the skills purportedly developed.
Some of this can likely be attributed to the Google effect, where memory forms around where to find information rather than around the information itself. Similarly, when it comes to writing, knowing that the tool will be available for feedback in the future is more likely to encourage reliance on it than to promote genuine learning from the feedback. We have some data on this when it comes to programming education.
Programming has long been the poster child for the potential of online learning. Certain attributes of programming have likely facilitated this, including the fact that learning the theory is intimately linked to executing it. In contrast, someone can talk about biology experiments at an expert level without having any actual ability to perform them.
For decades it’s been accepted that one can learn to program proficiently using only online tools. Interestingly, the use of AI tools appears to hinder this learning, especially among novice learners. For example, a publication comparing four coding approaches for novices learning Python through a series of code-authoring tasks found that “The AI Single Prompt approach resulted in the highest correctness scores on code-authoring tasks, but the lowest correctness scores on subsequent code-modification tasks during training.”9
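To make that distinction concrete, here is a minimal sketch of the two task types; the specific exercise is my own illustration, not one drawn from the study. An authoring task asks the learner to write working code from a specification, while a modification task asks them to change the behavior of code that already exists.

```python
# Hypothetical illustration (mine, not from the cited study) of the two task types.

# Code-authoring task: write a function from scratch to satisfy a spec.
def count_vowels(text: str) -> int:
    """Return the number of vowels in text, ignoring case."""
    return sum(1 for ch in text.lower() if ch in "aeiou")

# Code-modification task: alter the existing behavior to meet a new spec,
# e.g. generalize the function so the caller chooses which characters to count.
def count_characters(text: str, characters: str) -> int:
    """Return the number of characters in text that appear in characters."""
    return sum(1 for ch in text if ch in characters)

print(count_vowels("Education"))               # 5
print(count_characters("Education", "aeiou"))  # 4 (the capital E is not counted)
```

A student who prompted an AI to produce the first function can pass the authoring task, but the modification task requires a working mental model of the original code, which is exactly what the single-prompt learners appeared to lack.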
Perhaps a mechanism focused on feedback instead of code generation has better results? It’s hard to say. On the one hand, it appears that learning with AI-generated feedback results in a reduced ability to solve assignments in the absence of that feedback10. On the other hand, with perseverance (seven attempts’ worth), students were able to achieve the same success rate as those who did not learn with AI-generated feedback. It seems that learning to code with AI makes you adept at coding with AI, but not necessarily at coding in general.
If students are going to walk away from AI with uneven knowledge gains, it will be critical for them to have a clear understanding of the boundaries of their knowledge. This matters in all types of learning, but typically a learner’s performance tracks what they can actually do on their own. Since AI enables task completion that learners are not independently capable of, we have to ask whether students can distinguish between what they know and what they can do only when enabled by AI.
Thinking about thinking…very meta(cognition)
Yesterday, I booked Ubers for a friend to take my son to his swimming lesson and messaged her to confirm. Then, I briefly recalled a funny photo she had sent from the grocery store. A grocery store she had in fact driven to. Wait… why did I think she needed an Uber? She could just drive him herself.
That moment of realization was metacognition in action: the tool we use to self-regulate our critical thinking, pushing ourselves to go deeper or, at times, deciding that a problem doesn’t warrant additional effort. Writing is notorious for helping refine one’s thinking because it can be a highly observable form of metacognition.
GenAI is shifting both our critical thinking and our metacognition. Because both are impacted simultaneously, it’s even harder for us to notice the changes. When we use GenAI, we are offloading cognitive work. Just as your muscles weaken when you stop using them, so do your critical thinking skills. And the higher your confidence in GenAI, the lower your critical thinking11.
For adult professionals, this is a challenge, because critical thinking is hard (and often frustrating) to rebuild. The majority of a professional’s critical thinking shifts to evaluating GenAI output and verifying that it is of sufficient quality. But this is a distinctly different set of skills. By analogy, a book editor may be very good at reviewing the quality of a manuscript without being a great writer themselves. When you shift your critical thinking to reviewing GenAI work, you simultaneously weaken your ability to do the work yourself.
For novice learners, this can be catastrophic, as they lack the expertise and experience to be effective at even that second stage of critical review. Without effective metacognition, learners cannot benchmark the performance of the GenAI, nor their own ability. In effect, not only do they not know all of the skill elements, they are also unaware of what they don’t know12.
Following the proven framework of calculators
Each year that students pass through classrooms without being taught the critical skills to use AI is another year of students paying the price through the loss of their own cognitive capacity and development. The purpose of education is to enable the acquisition and application of knowledge; to ignore the impact of GenAI undermines the entire enterprise.
Although GenAI is still in its infancy, the risks are already showing up in observable data. By definition, it will take decades to acquire longitudinal data on the impact of GenAI on long-term cognition and dementia. But let’s be honest with ourselves: we already know some of the consequences we’re facing. Reduced cognitive utilization results in long-term cognitive decline, which significantly increases the risk of dementia.
This is not conjecture, this is connecting the dots.
So why is the conversation still stuck on keeping AI out of the classroom?
Perhaps we need to remember that we’ve approached this problem before, at least in part. This is not the first time we’ve developed a technology so adept that it could undermine foundational learning. Calculators, if misused, represent an existential risk to learning the foundations of mathematics. Indeed, it’s widely considered inappropriate to teach early mathematics with a calculator, precisely because it prevents foundational learning.
But we did not overreact by banning calculators from all classrooms. We developed an education framework that ensures students develop foundational skills, and then we teach them how to use calculators as a tool to supercharge their efforts.
The calculator framework is an example of a broader dynamic: proper instruction is the best path to prevent dependence. This pattern appears across domains, from calculators to sex education and now AI tools. In every case, avoiding comprehensive education leads to the very outcomes we fear: over-reliance, poor decision-making, or misuse. The evidence consistently shows that teaching foundational knowledge, critical thinking, and metacognition builds independence by equipping students to make informed decisions about when and how to appropriately use the tools available to them.
Effective implementation starts with educators
Look, I know there is a big difference between calculators and GenAI. But you know what I think the most impactful difference is? Experience. Educators are confident in their ability to use calculators and to teach others how to use them; I don’t think the same is true of AI. How can you teach something that you yourself have never been taught or trained in?
This is where we’ve fallen behind the curve: there is a glaring lack of training materials to fill that gap. Yes, UNESCO has published educator and student frameworks in an attempt to provide guidance13. But as someone with experience delivering professional development workshops on automation for educators, I would kindly describe UNESCO’s frameworks as well-intentioned but lacking.
To effectively inform and educate, you have to meet people where they are. You have to frame the discussion through their lens. The challenge ahead is not just about developing the right tools and policies; it’s about equipping educators with the confidence and expertise to teach responsible AI use. That means investing in professional development, creating robust curricula, and fostering a culture of critical inquiry rather than passive acceptance.
AI isn’t going away, and neither is our responsibility to ensure students develop the cognitive resilience to use it wisely.
You can read my take on the Google effect here: This is Your Brain on Technology
Marie Curie is one of only four people to have won two Nobel Prizes, one in Physics and one in Chemistry. Her work on radiation changed the world but likely led to her early death, at 66, from aplastic anemia.
Although the paucity of historical data on this point precludes a quantitative comparison, it is widely accepted that tractors have generally decreased injuries among farmers. This is true even in light of tractor-specific injuries.
An inspired workaround to prevent an AI summary from showing up in your Google results is to use the word “fuck” in your query.