I sit on an advisory board for a Midwest university. Last spring a dean described something that stopped the room cold. His office had just signed a six-figure contract for an enterprise AI tool to help write grant applications. Down the hall, in the same building, one of his faculty members was running papers through a detector and dragging students into honor-code hearings for using AI on a homework assignment.
Two floors. Two opposite policies. Same institution, same week.
That is not a technology problem. That is a maturity problem. And it has a map.
The Four Stages Nobody Wants to Name Out Loud
Every institution, every company, every team facing AI moves through the same four stages. I have watched it play out across education, home services, marketing agencies, and consulting firms. The words change. The arc does not.
Stage 1: Deny. "AI is not really a factor here." No policy, no training, no conversation. This was most of higher education in mid-2023. It was most of the trades industry in 2024. It is still where a surprising number of small businesses sit today, quietly hoping the whole thing passes.
Stage 2: Police. Blanket bans. Detection tools. Honor-code prosecutions. Faculty become enforcers, not teachers. Leaders become compliance officers, not strategists. The energy of the institution shifts from building to catching. This is where most organizations got stuck, and where many still are.
Stage 3: Permit. Course-by-course discretion. Disclosure requirements. AI literacy training begins. The question shifts from "did they use it" to "how did they use it." By October 2025, top universities including Harvard, Oxford, and the University of Michigan had moved from blanket prohibitions to policies requiring explicit disclosure within instructor-defined limits. Stanford's Academic Integrity Working Group addressed generative AI as an institutional question, not a faculty-by-faculty one.
Stage 4: Integrate. AI is part of the curriculum, the assessment design, the operations. The question is no longer "can they use it" but "what should they be able to do with it that they could not do before." This is where the leading edge is moving in 2026, and almost nobody is there yet.
The framework's value is not that it is clean. It is that it gives leaders a vocabulary to name where they actually are, not where their policy deck says they are.
Why Stage Two Is a Trap
Stage 2 feels productive. You are doing something. You are buying tools, writing policies, holding meetings. The problem is that everything you are doing is aimed backward, at catching a behavior that has already happened, rather than forward, at building a capacity your people do not yet have.
The research confirms this. Faculty training materials now explicitly caution against relying on a single AI-detection tool before initiating integrity proceedings. That is a published admission that Stage 2 is failing on its own terms. When the tools your policy rests on cannot reliably do what you ask of them, the policy is not a policy. It is a ritual.
Meanwhile, the research conversation has moved on entirely. The literature is shifting from "how to detect AI use" to "how to design assessments that AI cannot trivialize." That is not a tweak. That is a Stage 3 to Stage 4 transition happening in academic journals while most institutions are still arguing about detectors.
The institutions that stay at Stage 2 do not just fall behind. They bleed. They lose the faculty who want to teach, because those people do not want to be cops. They lose the students who learn best, because those students want tools, not prohibitions. Stage 2 is not cautious. It is expensive.
Two Stages at Once
Here is the honest part. The neat four-stage arc makes it sound linear. It is not.
Most institutions are in two stages at once. The administration announces integration while individual faculty are still policing. Or a forward-thinking department hits Stage 4 while the provost's office is stuck at Stage 2 writing enforcement memos nobody reads.
The framework does not solve this. It names it. And naming it is the first step, because you cannot close a gap you have not described. The risk is leaders using the framework to declare victory. "We are at Stage 4." No. Your policy might be at Stage 4. Your culture is whatever your newest hire experiences on their first day.
Same Arc, Different Industry
Every home services vertical has lived this exact journey with a different technology.
HVAC contractors went through it with smart thermostats between 2012 and 2020. Stage 1: "Homeowners want a knob, not an app." Stage 2: "We will not service Nest installs." Stage 3: "We will service them but we will not sell them." Stage 4: "We sell, install, and remotely monitor connected systems as a recurring revenue line." The contractors who got stuck at Stage 2 watched their customers walk to the installer who skipped ahead.
The security industry went through the most painful version. Traditional alarm dealers tried to police the DIY wave when Ring and SimpliSafe arrived. They refused to monitor third-party systems. They lost a generation of homeowners. The survivors are now at Stage 4, offering monitoring-plus-professional-install hybrid services. But the years they spent at Stage 2 cost them customers they will never get back.
Electricians refused to touch solar in 2010 through 2014, then watched solar-only contractors take the work. Plumbers are five years behind HVAC on leak-detection sensors and playing the same tape.
The pattern is identical every time. The institutions that get stuck at Police lose talent and customers to operators who skip to Integrate. The technology changes. The arc does not.
The Diagnostic That Actually Matters
The question is not "are we using AI." Almost everyone is using AI. The question is which stage you are actually operating at, measured not by your policy but by what your newest person experiences when they walk in the door.
If they experience suspicion, you are at Stage 2.
If they experience guidelines, you are at Stage 3.
If they experience AI as a normal part of how work gets done and how quality gets measured, you are approaching Stage 4.
That distinction is not academic. It determines whether the best people want to stay, whether your clients or students feel like they are being served by a modern institution, and whether the gap between you and your most progressive competitor is growing or shrinking.
Stage 4 is not a destination. It is a pace. The institutions that get there first will keep moving. The ones stuck at Stage 2 will still be arguing about detectors when the conversation has left the building.
What I Want You to Do With This
Name your stage out loud. Not the comfortable one. The real one.
If you are a dean, ask your newest faculty hire what AI policy they experienced on day one. Their answer is your stage, not the one in your policy deck.
If you are a business owner, ask your newest employee whether AI is something they hide or something they use openly. Their answer tells you more than your strategy deck.
I have watched organizations move from Stage 2 to Stage 4 in under a year when leadership named the gap honestly. I have also watched organizations spend three years at Stage 2, convinced they were being prudent, while every competitor around them integrated and moved on.
The map is the same. The speed depends on whether you are willing to name where you actually are.
If you are trying to figure out which stage your organization is actually in, not which one your policy says you are in, that is the diagnostic I run through bensaibrain.com. Come name it out loud.
Sources
Generative AI Policies at the World's Top Universities -- Thesify, Oct 2025
Academic Integrity Working Group -- Stanford Report, Oct 2025
Examining Academic Integrity Policy in the Era of AI -- Frontiers in Education
Trajectories of AI Policy in Higher Education -- ScienceDirect
Created with ❤️ by humans + AI assistance 🤖