
The New Literacy Is Provenance, and Almost Nobody Is Teaching It

Last Updated: May 2, 2026 · 7 min read · lab-reports

Provenance is the next must-have skill in the age of AI, but almost no one is teaching it. Beyond detection and prompts, it's about accountability: where did content come from, and how do you verify it?

I got caught flat-footed in a client meeting last year, and the question that caught me was not the one I was prepared for.

The client was not asking whether we had used AI. They assumed we had. The question was cleaner and harder.

Where did this come from, specifically, and how did you verify it?

I had a good answer for the "where did this come from" part. I had no answer at all for the "how did you verify it" part. Not because the work was bad. Because I had never thought of verification as a thing I needed to produce on demand. I had thought of it as a thing I did in my head.

That is the skill gap this piece is about, and it is bigger than most people realize.

The Literacy Everyone Is Talking About

Every district, every university, every HR department in 2026 has some version of an "AI literacy" initiative on the roadmap. Most of them are about using AI. How to prompt. How to critique the output. How to integrate the tool into a workflow. That is useful, and it is not the part that is going to matter in five years.

The part that is going to matter is the one almost nobody is teaching.

The Literacy Nobody Is Teaching

Provenance is the word for it, and it is going to be the single most important professional skill of the next decade. Where did this come from? Who made which choices? What did the machine contribute? What did the human verify? How do I know, and how do I prove it to someone who does not trust me yet?

Schools teach citation for human sources. The entire apparatus of academic integrity is built on the assumption that the interesting question is whether you copied from a book. The interesting question in 2026 is different. It is whether you can account for every stage of a piece of work, and explain what a machine did, what you did, and what you checked.

That skill is not being taught in most places. It is not in the rubric. It is not on the test. And it is the skill employers, regulators, and clients are already starting to require.

Why Detection Is the Wrong Fight

The easy thing to do (and, therefore, the thing most institutions are doing) is to try to catch students using AI. Detection tools. Plagiarism flags. Honor code updates. I understand the instinct. I also think it is the wrong fight, and the data is starting to make that clear.

Turnitin, the company that sells the most widely deployed AI detection tool in higher education, has publicly acknowledged that its tool produces false positives at a rate high enough to cause real harm to innocent students. Multiple universities have been sued over AI detection false positives in the last eighteen months.

Detection is a race you lose slowly and expensively. Provenance is a skill you teach once and reuse forever.

The EU AI Act, phased into force from 2024 through 2027, already requires disclosure labels on AI-generated content in several categories. The U.S. Copyright Office ruled in 2025 that works with substantial AI contributions must disclose the AI involvement to be registered. The direction of travel is not "catch the cheaters." It is "show your work." Every institution still fighting the detection war is training students for a world that is already being deprecated by law.

What Provenance Actually Looks Like

There is a trap in this conversation, and I want to name it.

If you try to track every token (this word came from me, this word came from the machine, this comma was a suggestion I accepted), you will either fail or produce something performative and useless. Real creative work is iterative and entangled. A prompt, a draft, an edit, a rewrite, a final pass. Nobody can track that at the keystroke level, and nobody should try.

What you can track, and what you should teach, is intent-level provenance.

Which stages of this work did I use AI for? Which stages did I keep fully human? What did I verify, against what source, and how? What would I be comfortable saying on the record about the role the machine played?

That is a short list. It is teachable. It fits on a half-page template. It answers every question a regulator, a client, a professor, or a future employer is going to ask. And almost nobody is teaching it.

The Same Story in Agencies

The marketing agency world is the canary for this, and the canary is already sick.

Clients in 2025 and 2026 have started asking, in writing, "what percentage of this deliverable was AI-assisted and how was it verified?" The agencies with a clean answer keep the account. The agencies that hedge lose it. The agencies caught misrepresenting their AI use lose the client and take a public reputation hit that hangs around for months.

I know of one agency that lost a Fortune 500 retainer over a six-minute Zoom conversation they were not prepared for. The client was not hostile. The client was just asking a question the agency did not have an answer to. That absence, in 2026, is the whole tell. If a professional shop cannot produce an intent-level provenance statement in under an hour, they are not yet operating in the world they already live in.

The agencies that figured this out first started shipping what they call "provenance packs" alongside every deliverable. A one-page companion document listing which assets were AI-assisted, what model family produced them, what human review was performed, and what verification steps were taken. The cost of producing one is low. The trust payoff is immediate. The agencies that have built this into their standard operating procedure are winning pitches the incumbents do not even realize they are losing.

The Honest Part

Provenance is harder than it sounds, and I want to be honest about that, because I do not want this piece to read like a neat solution.

The entangled nature of real creative work with AI makes perfect provenance impossible. If you sit with a model for an hour, iterating, you are not going to remember which phrasings were yours and which were nudges you accepted without thinking. Trying to reconstruct that after the fact is guessing, and guessing dressed up as documentation is worse than saying "I do not know."

The honest version of the skill is humility plus process. Decide before the work starts which stages the machine is invited into. Document those stages lightly while you do the work. Verify the things that matter before shipping. And when someone asks, tell the truth in plain language about what you did and did not do.

That is it. It is not a product. It is a habit. And the habit is not being installed in most places where people are being trained to work.

What I Do Now

I keep a half-page provenance template in the front of every project file now. Five questions, filled in as I go, not reconstructed after.

What stages of this work am I inviting AI into?

What stages am I keeping fully human, and why?

What claims in this piece did I verify, and against what?

What did I not verify, and how confident am I in it?

If a client, a regulator, or a reader asks tomorrow what the machine contributed, what will I say?

Ten minutes per project. Solves the problem that caught me flat-footed in that client meeting, and then some. I offer it up because the skill itself is what matters, not the template.
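The five questions are small enough to live at the top of a project file as data. A sketch of one way to hold them, assuming a plain dict with shorthand keys I made up, plus a check that nothing ships half-documented:

```python
# The five provenance questions, filled in as the work happens.
TEMPLATE = {
    "ai_stages": "What stages of this work am I inviting AI into?",
    "human_stages": "What stages am I keeping fully human, and why?",
    "verified": "What claims did I verify, and against what?",
    "unverified": "What did I not verify, and how confident am I in it?",
    "on_record": "If asked tomorrow what the machine contributed, what will I say?",
}


def unanswered(answers: dict[str, str]) -> list[str]:
    """Return the questions still blank, so the gap is visible before shipping."""
    return [q for key, q in TEMPLATE.items() if not answers.get(key, "").strip()]


# Hypothetical mid-project state: three questions answered, two still open.
answers = {
    "ai_stages": "first-draft outline and headline variants",
    "human_stages": "all client-facing claims; voice and final edit",
    "verified": "statistics cross-checked against the cited reports",
}
missing = unanswered(answers)  # the two questions still to answer
```

Filled in as you go, not reconstructed after, this is the whole habit in about ten lines.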

If you are in a school that is still running detection tools instead of teaching provenance, your students are about to graduate into a world their training did not prepare them for. If you are running an agency or a consultancy and you cannot produce a provenance statement in under an hour, you are one cold email away from losing an account you do not realize is already shaky.

The good news is that this is a cheap problem to fix, and most of your competitors will not bother. Same thing that was true about SEO in 2008 and about mobile in 2012. The skill is obvious in hindsight, available right now, and nobody wants to be the first one to take it seriously.

If you want a copy of the five-question template I use, I send the occasional dispatch about exactly this kind of thing at bensaibrain.com.



Created with ❤️ by humans + AI assistance 🤖