I grew up in a house where calculators were a moral question.
My parents both taught. My father was an art teacher, my mother taught English, and they both came up in the era when bringing a calculator into a math classroom was treated as a kind of cheating. By the time I was in middle school, those rules had loosened, but the residue was everywhere. Calculators were tolerated in some classrooms, banned in others, viewed as a crutch by certain teachers and as a tool by others. The same device produced two completely different verdicts depending on which room you walked into.
I remember being told, in very plain language, that using a calculator on a problem I could do on paper was a way of cheating myself. Even when the calculator was sitting right there, even when every adult I knew used one without a second thought, the rule for kids was different. The instinct to restrict was that strong.
My wife and I are both Kansas State graduates. Go Wildcats. We have nine kids, ranging from six to twenty-two. One of them is in college right now, and the youngest ones are growing up in a world where they ask a voice assistant questions I would have had to look up in an encyclopedia. I watch them do their homework on devices I was forbidden to touch, and I catch myself, sometimes, about to say the same thing my teachers said to me. It is a strange thing, hearing your own teachers come out of your own mouth a generation later.
I was sitting in an advisory board meeting for a Midwest university last month, thinking about exactly this. A faculty member said, "AI is the biggest threat to learning since," and trailed off, searching for the right comparison. Someone in the back of the room said, "the calculator." The room laughed. Then the room got quiet, because nobody actually disagreed.
I have been turning that moment over in my mind for three weeks. Because I think the people in that room got something half right and half wrong, and the half they got wrong is the half that matters.
The Fight We Already Had
Here is the part of the story most people do not know.
In 1975, the National Advisory Committee on Mathematical Education recommended that students in eighth grade and above have access to calculators for all class work and exams. Not just for the hard problems. For everything. In 1980, the National Council of Teachers of Mathematics formalized that recommendation in its Agenda for Action and asked every school in the country to follow it. The official position of the people who actually train math teachers was, by 1980, that calculators belonged in the classroom.
That position took most of a decade to win.
The peak of the resistance came in 1986, when a teacher named John Saxon led a public protest at the NCTM annual meeting. His argument was that calculators would atrophy mental math, encourage dependence on the machine, and ruin the next generation of mathematical thinkers. He was not a fringe voice. He was widely respected, and a lot of people in that room agreed with him. Three years later, in 1989, NCTM published its Curriculum and Evaluation Standards and embedded calculators directly into the K through 12 curriculum as a normal classroom expectation. The fight was effectively over.
Fourteen years from the first recommendation to full integration. A whole generation of students caught in the middle, depending on which classroom they walked into and which side of the argument their teacher had landed on.
I want you to read the actual arguments from 1986 against calculators, and then read the actual arguments from 2024 against ChatGPT, side by side. They are not similar. They are the same arguments. Students will lose the ability to compute. Students will lose the ability to think and write. They will become dependent on the machine. They will outsource their judgment. They will not learn from their mistakes. They will not struggle, so they will not grow. It is cheating. It is cheating. Real professionals do not use them. Real professionals do not use them.
You can read the 1986 op-eds and the 2024 op-eds in either order and not notice you switched. Same template. Swap the noun.
Resistance to a useful tool always sounds the same, generation after generation. That is not an accident, and it is not a clever rhetorical trick. It is the pattern. And the pattern is the lesson.
What the Skeptics Got Right
Here is the part I do not want to write, because I have small kids and I feel the weight of it personally.
The calculator skeptics in 1986 were not entirely wrong. Mental math really did decline in the cohorts that came up with calculators in their pockets. Estimation skills really did atrophy in some students. The kids who never had to do long division by hand really do struggle with certain kinds of number sense their grandparents took for granted. The 1986 worry was not paranoid. It was partly true.
It also did not matter.
It did not matter because the tool was too useful, the productivity gain was too large, and the next generation of jobs assumed the tool's existence. The economy reorganized itself around calculators. The curriculum eventually reorganized itself around calculators. And the kids who learned to use them well leapt over the kids who refused, even when some of those refusers had genuinely better mental math.
I think this is the part of the calculator parallel that should make every AI skeptic stop, including the version of me that wakes up at 3 a.m. worried about what my own kids' generation is going to lose. You can be partly right about a tool and still on the wrong side of the curve. You can correctly identify the real cost of the new technology and still be the person history catches off guard. The people on the other side of that argument were not denying the cost. They were pricing it in and moving forward.
Where the Analogy Breaks
I have to be honest about something else, because I think the calculator parallel gets used lazily, including by people who agree with me.
A calculator does one thing. Arithmetic. AI does writing, judgment, reasoning, and creative work. Those are the things education has historically said are the learning, not the byproduct of it. So if you are a worried English professor and someone tells you AI is just the new calculator and you should relax, that is not reassuring. That is dismissive. The analogy is real, and it is also incomplete.
The honest version of the calculator parallel is this. The resistance pattern is the same, but the stakes are higher, which is exactly why the resistance will fail faster, and the consequences of being on the wrong side of it will be larger. The 1989 calculator capitulation took fourteen years from the first NACOME recommendation. The AI capitulation is going to take three. Maybe two. The institutions that got stuck arguing about calculators in 1986 had a generation to recover. The institutions getting stuck arguing about AI in 2026 will not.
Being partly right is not a strategy. It is a position. And positions get passed by the people who treated the same information as a starting point instead of a conclusion.
The Same Story in Every Industry
I run a marketing company that serves home services businesses. HVAC, plumbing, electrical, solar, security. Every single one of those trades has its own version of this exact story.
Talk to an old electrician about the first wave of digital multimeters in the 1980s, and you will hear an almost word-for-word version of the calculator argument. The analog needle taught you to feel the circuit. The digital reading made you lazy. Real techs did not trust the new thing. Every apprentice in the country is now trained on digital. Analog meters live in toolboxes as nostalgia.
Or talk to a plumbing or HVAC owner who refused to put a CRM on his trucks ten years ago because "we know our customers from memory, and a paper route sheet works fine." The shops that held that line are now either sold, struggling to hire under-thirty technicians who refuse to work paper systems, or both.
The security industry has the most painful version of this. When DIY home security took off in the mid-2010s with Ring, SimpliSafe, and Nest, the traditional alarm dealers tried to police the wave instead of integrating with it. They refused to monitor third-party systems. They told customers that DIY was not real security. They lost a generation of homeowners, and most of their growth, and the survivors are now offering hybrid pro install and DIY monitoring packages they swore for years they would never touch.
The pattern is the same in every industry I have worked with. The skeptics are partly right. The skeptics still lose. And the institutions that can name the pattern they are inside of, while they are still inside of it, are the ones that move first. Moving first in a tool transition is almost the whole game.
What I Want You to Sit With
I am not asking you to be optimistic about AI. I am asking you to be honest about which fight you are in.
If you are an educator policing AI in your classroom right now, the way teachers policed calculators in 1981, I want you to sit with the John Saxon question. If you stand up at the meeting and protest, are you doing it because the costs you are worried about are real? Probably yes. Are you doing it because the protest is going to work? Probably no. And if the answer is no, what would integration look like for you, on your terms, while you can still set the terms?
If you are a leader in any other industry watching AI flood your category, the same question applies. Where are you protecting a real value? Where are you protecting a habit? Which of those two are you actually willing to defend in public?
I think about my parents, standing in front of classrooms for years, trying to help kids find themselves in a painting or a paragraph. I think about the kid I have in college right now, navigating a system that is making the rules up in real time alongside her. I think about my six-year-old, who is going to walk into a higher education system in twelve years that we are building the foundation for today. The questions I am asking in this article are not abstract for me. They are decisions I am making for my own family, in real time, with imperfect information, knowing I will get some of it wrong.
The skeptics in 1986 were partly right. They were also wrong about the only thing that mattered, which was what side of the curve to be on.
You do not have to love the tool. You have to be honest about the fight.
That is the work. And it has already started, whether your institution is leading it or waiting to follow.