Skepticism is Your Superpower
Your instinct to pause and verify isn't a weakness. In 2026, the instinct you've been apologizing for might be your greatest asset.
Why the “I Remember Before the Internet” Generation Has the Edge
There is a generation of professionals who remember when email was new, when you had to verify information before trusting it, and when “the computer said so” was never enough.
That instinct, the pause, the second look, the healthy distrust of anything too convenient, is not a weakness. It is a survival mechanism that the generation raised on algorithmic feeds never had to develop.
In 2026, that instinct is worth more than any certification.
The professionals most at risk right now are not the skeptics. They are the ones who adopted every new AI tool without asking what it could get wrong.
What AI Gets Wrong, and Why It Sounds So Confident
AI hallucinations are not bugs. They are a structural feature of how large language models work. A large language model, or LLM, is the engine behind most AI tools you've heard of: software trained on massive amounts of text to predict which words should come next. No understanding. No fact-checking. Just very sophisticated pattern recognition.
When a GenAI model generates a response, it is predicting the next most likely word based on its training data. It is not retrieving facts from a verified database. It is not checking sources. It is pattern-matching at scale.
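To make that concrete, here is a toy sketch of pure next-word prediction. It is nowhere near a real LLM, which uses a neural network rather than a word-count table, but it shows the core move: continuing text by statistics, with no fact-checking anywhere.

```python
from collections import Counter, defaultdict

# Toy "training data". A real LLM sees trillions of words, but the
# principle is the same: learn which words tend to follow which.
corpus = (
    "the report shows revenue grew last year . "
    "the report shows costs grew last quarter . "
    "the audit shows revenue fell last quarter ."
).split()

# For each word, count what came next in the training text.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def continue_text(prompt: str, length: int = 6) -> str:
    """Extend the prompt by repeatedly picking the most common next word.

    No database lookup, no notion of truth: only "what usually
    came next in the training text".
    """
    words = prompt.split()
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the report"))
```

Run it and it prints "the report shows revenue grew last quarter .", a sentence that appears nowhere in the training data. It is a statistically plausible blend of real fragments. That is a hallucination in miniature.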
The result: a model can produce a legal citation that does not exist, a statistic with no source, or a company name that is subtly wrong, all delivered with the same confident tone as a correct answer.
This is not a problem that will be fully solved. It is a known limitation of the architecture. The model does not know what it does not know.
The practical implication: any AI output that will influence a decision, be shared externally, or touch money or legal exposure needs a human verification step. No exceptions.
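If AI output flows into a workflow you control, make that verification step structural rather than optional. Here is a minimal sketch of the pattern, with hypothetical names rather than any particular product's API: wrap the output in an object that refuses to be used downstream until a named human has signed off.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIDraft:
    """AI output that cannot be used until a human signs off.

    Hypothetical pattern: the class and field names are illustrative,
    not a real library's API.
    """
    content: str
    high_stakes: bool = True  # influences a decision, goes external,
                              # or touches money or legal exposure
    approved_by: Optional[str] = None

    def approve(self, reviewer: str) -> None:
        self.approved_by = reviewer

    def publish(self) -> str:
        # The gate: high-stakes output with no named reviewer never ships.
        if self.high_stakes and self.approved_by is None:
            raise PermissionError("Human verification required before use.")
        return self.content

draft = AIDraft(content="Q3 revenue grew 14% year over year.")
draft.approve(reviewer="j.doe")  # a human checked the numbers first
print(draft.publish())
```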
The Need-to-Know Rule
Understanding what AI gets wrong is only half the picture. The other half is understanding what AI can now do and the controls you need in place before it acts on your behalf.
Last week I introduced the concept of AI Agents, tools that don’t just answer questions but take actions on your behalf. Book meetings. Send emails. Move files. Execute code.
Here is the framework for thinking about every agent you deploy: an agent gets access only to the data, systems, and permissions its specific job requires, and nothing more. Not what might be convenient someday. What it needs today. A sketch of what that looks like in practice follows.
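In code, the Need-to-Know Rule is simply deny-by-default with an explicit allowlist. This is a minimal sketch with hypothetical action names, not any real agent platform's API:

```python
# Deny by default; allow only what the job requires.
# Action names are hypothetical, for illustration only.
ALLOWED_ACTIONS = {
    # This agent's one job is scheduling meetings. That's it.
    "calendar.read",
    "calendar.create_event",
}

def request_action(agent: str, action: str) -> bool:
    """Grant an action only if it is on the explicit allowlist."""
    if action in ALLOWED_ACTIONS:
        print(f"{agent}: '{action}' allowed")
        return True
    # Email, files, code execution: all denied until a human
    # deliberately decides the job requires them.
    print(f"{agent}: '{action}' denied (not on the need-to-know list)")
    return False

request_action("scheduler-bot", "calendar.create_event")  # allowed
request_action("scheduler-bot", "email.send")             # denied
```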
Skepticism is not resistance to progress. It is the operating standard that makes progress sustainable.
Safe Harbor: Three Things You Can Do This Week
- Pick one AI tool you use regularly and ask it who made it and what it can't do. If it can't answer that clearly, you're already learning something important.
- Test a hallucination. Ask your AI tool: "Provide a summary of the 2024 financial performance of [a fictional company name, e.g., 'Vandelay Cloud Systems']" or "Write a short biography of [your name]." You will likely see it attempt to "bridge" the gap between what it knows and what it thinks you want to hear, proving that confidence and accuracy are not the same thing. (A scripted version of this test follows the list.)
- Apply the Need-to-Know Rule to one agent. List every permission it has. Remove anything it doesn’t need for its specific job.
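If you want to script the hallucination test from step two, here is one way to do it using the OpenAI Python SDK; any chat tool works just as well, and the model name is only an example. It assumes the `openai` package is installed and an API key is set in your environment.

```python
from openai import OpenAI

# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
client = OpenAI()

# A company that does not exist. A grounded system would say it cannot
# find the company. A pure pattern-matcher may invent a plausible answer.
prompt = (
    "Provide a summary of the 2024 financial performance "
    "of Vandelay Cloud Systems."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model works for this test
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
# Read the answer with one question in mind: where did these numbers
# come from? Confidence and accuracy are not the same thing.
```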
Next week: AI is taking all the jobs. Or is it? The panic is louder than the evidence and the history is more reassuring than the headlines.