I noticed it first in my own writing, before I noticed it in anyone else's.
I had been using AI to draft a piece for a client. The draft came back fast, it was structurally sound, and I edited it down to something I was willing to put my name on. I shipped it. I went to bed. The next morning I could not remember what I had argued. Not the gist. Not the throughline. I had to open the file to see what I thought.
That had never happened to me before. And it was not the AI's fault. It was mine: the part of the work I had skipped was the part that used to build the memory in the first place.
The Old Bargain
The school system most of us went through was, underneath the rhetoric, a task-completion machine.
Turn in the assignment. Pass the test. Get the credit. Move on. The whole apparatus was built on a quiet assumption: that completing the task was a reliable proxy for having learned the thing. Most of the time, it worked. Not because the task was the learning, but because the struggle of doing the task was where the learning lived. You sat with the math problem until something clicked. You wrote the bad first draft and noticed where it broke. You rewrote it. You showed it to a teacher who marked it up. You did it again. The friction was the feature.
Decades of education reform have eroded that bargain in subtle ways, mostly in the name of efficiency. Standardized tests rewarded the answer, not the path. Rubrics rewarded the form, not the thinking. Grade inflation rewarded the submission, not the struggle. None of this was new in 2023. AI just turned the dial all the way up in eighteen months.
What AI Just Did To That Bargain
When the task can be completed in thirty seconds by a machine, every assumption baked into the old bargain breaks at once.
A 2025 study in the journal Societies surveyed 666 participants and found a significant negative correlation between frequent AI tool use and critical thinking scores. The mechanism was something the researchers called cognitive offloading: when the machine can do the thinking, you stop doing the thinking, and the muscle that does the thinking gets quieter. The effect was strongest in younger participants. The people who never had the muscle built in the first place.
MIT Media Lab ran an even sharper version of the same question. They put participants in three groups (LLM-assisted, search-engine-assisted, brain-only) and asked them to write essays while wearing EEG caps. The LLM group showed reduced neural connectivity during the task. They had a weaker memory of what they had written when asked about it later. They reported a lower sense of authorship over the finished essay.
They wrote the thing. They could not remember the thing. They did not feel like they had written the thing. All three were measurable.
That is what happened to me with the client piece. I had outsourced the part where the writing burns itself into memory. The task got done. The learning did not.
The Junior Analyst Problem
Outside of school, the same dynamic is playing out at every firm that ever ran an apprenticeship dressed up as a job.
The two-year analyst program at the big consulting firms was, for decades, the closest thing white-collar work had to an old-style trade. You did the slide deck. You did it badly. Your manager redlined it in green pen. You did it again. By month 18, you had built the muscle of structuring an argument under time pressure, and that muscle was the actual product the firm was selling clients.
The "Cyborgs and Centaurs" paper out of Harvard and BCG measured what AI does to that workflow. Average performance on assigned tasks went up about 40%. The slides got better. The hours got easier. Everybody on the call was happier.
The quieter finding, the one that did not make the headlines: the junior reps that used to build the judgment (the synthesis, the structuring, the bad first draft) are exactly the tasks AI is now doing. The ladder still has rungs at the top. The bottom rungs are getting sawed off, one cohort at a time.
I talked to a senior partner at one of those firms a few weeks ago. He said something I have not been able to stop thinking about.
"We are running out of senior partners. I think I know why. We stopped building them five years ago, and we just noticed."
The Honest Part
The "AI is making us dumber" framing is too easy and partly wrong. Calculators did not make people worse at math. They made people worse at the part of math that was drudgery and freed up cycles for the part that was not. Same thing happened with spell check, with search engines, with autocomplete, with every tool the previous generation of teachers warned was going to ruin the next generation of students.
The honest version of the worry is more specific.
We are offloading the parts of cognitive work where the struggle itself was the learning, and we have not yet figured out how to replace that struggle with a different kind of effortful practice. The danger is not that AI exists. The danger is that the curriculum (and the analyst onboarding program, and the editorial process at every shop that ships writing) has not been redesigned around the new reality. So the struggle is just gone. With nothing in its place.
That is a fixable problem. It is not being fixed at the speed it is being created.
What the Firms That Get It Are Doing
A handful of consulting firms have started building deliberate "no AI" exercises into junior development. It is the same logic by which med schools still make residents draw anatomy by hand, even though every textbook has photographs. The point is not the drawing. The point is what the drawing does to your eye.
A handful of universities (the ones doing the work, not the ones writing the press releases) have started rebuilding assignments around process artifacts instead of finished outputs. Show me your thinking. Show me three drafts. Show me where you got stuck and how you got unstuck. The grade is not on the polished thing. The grade is on the friction.
I have been adding the same kind of practice back into my own work, and I want to be honest about what that looks like, because it is harder than I expected.
For the pieces I most need to remember the next morning, I write the bad first draft myself. By hand sometimes. AI gets brought in for the second pass, the structural critique, the fact-check, and the polish. The order matters. If the machine touches the page first, my brain treats the rest of the work as editing instead of thinking, and the writing does not stick.
That is the only thing that has worked for me. I am not saying it is the answer. I am saying I needed something to put back in the place where the friction used to live, and this is the thing I tried that I have not regretted.
The Question I Want You to Sit With
If your job, your firm, or your school is currently optimizing for task completion, the next eighteen months are going to be embarrassing.
If you are optimizing for learning (yours, your team's, your students'), the same eighteen months are going to be one of the best windows in a long time, because the noise is loud enough that almost nobody else is going to bother.
The question is not whether AI is good or bad for learning. The question is whether the people in charge of your learning environment know the difference between completing a task and learning a thing, and whether they have done anything about it in the last six months.
Most of them have not. The ones that have are about to look very different in three years.
If you want to think out loud about what this looks like in your own work or your own classroom, I write about it most weeks at bensaibrain.com.
Sources
Kosmyna et al., "Your Brain on ChatGPT," MIT Media Lab, 2025
Dell'Acqua et al., "Navigating the Jagged Technological Frontier," HBS WP 24-013, 2023
Gerlich, "AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking," Societies, 2025
Created with ❤️ by humans + AI assistance 🤖