
After writing more than 150 articles about artificial intelligence in roughly 18 months, I have started hearing the same question again and again.
“Are you against AI?” I understand why people ask.
A lot of my work deals with AI failures, chatbot hallucinations, synthetic intimacy, copyright battles, governance gaps, vendor overreach, executive confusion, agentic hype, fake productivity, legal risk, and all the strange things that happen when powerful technology enters the world faster than people know how to use it responsibly.
That can look negative from a distance. But the answer is no. I am not against AI.
I use AI. I study AI. I write about AI because I believe it is one of the most important technological shifts of our time. I think it will change how companies operate, how people work, how knowledge is produced, how software is built, how decisions are made, and how institutions compete.
That is exactly why I take it seriously. And taking AI seriously means not treating every shiny claim around it as truth.
Somewhere along the way, the AI conversation became oddly fragile. If you question whether a system is reliable, you are called skeptical. If you question whether a use case is appropriate, you are told you are too cautious. If you question whether a company has governance in place, you are accused of slowing things down. If you question whether an “AI-powered” product is actually meaningful, someone will tell you that you do not understand innovation.
But criticism is not opposition. Criticism is what serious technology needs when it starts shaping serious decisions.
I would not write about AI failures if I thought AI did not matter. I write about them because it does matter. The failures are signals. They show us where the technology is being oversold, misunderstood, misapplied, or placed into systems that are not ready for it.
That is not anti-AI. That is respect for the scale of the change.
I am against vendors who put the AI sticker on everything and expect the market to confuse that sticker with strategy.
I am against executives who want AI because it sounds good, not because they understand what problem they are solving.
I am against the idea that every company must suddenly use AI agents or face extinction.
I am against the people who sell “perfect prompts” as if the future of work depends on magic sentences.
I am against the casual dismissal of AI governance and AI risk by people who will not be around when the consequences arrive.
I am against the belief that speed alone is a strategy.
I am against the idea that automation is automatically improvement.
I am against the fantasy that a confident machine is the same thing as a correct system.
I am against the culture of forced excitement that treats every question as disloyalty and every concern as fear.
There are many things around AI that deserve criticism. AI itself is not one of them.
The problem with hype is not that it is loud. The problem is that it lowers the quality of decision-making. Hype makes companies buy before they understand. It makes leaders announce before they test. It makes vendors promise before they prove. It makes employees experiment without knowing what data they are exposing. It makes governance look like a delay instead of the structure that allows adoption to last. It makes every conversation feel urgent and very few conversations feel precise.
That is bad for companies. It is bad for customers. It is bad for employees. It is bad for investors. It is bad for public trust. And ultimately, it is bad for AI.
When immature systems are oversold, people lose trust. When vendors exaggerate, buyers become defensive. When executives deploy tools without controls, failures become scandals. When governance is ignored, regulators eventually arrive with less patience and more force.
This is how promising technology gets damaged by the behavior around it.
I do not believe that being pro-AI means applauding everything that calls itself AI.
I do not believe that being future-facing means switching off judgment.
I do not believe that serious people should confuse adoption with competence.
The strongest position on AI is not blind belief. It is disciplined engagement.
Use AI where it creates value.
Test it where it claims reliability.
Govern it where it creates exposure.
Limit it where the risk is too high.
Challenge vendors when claims sound too convenient.
Train people before asking them to depend on systems they do not understand.
Keep humans in the loop where judgment, accountability, and context still matter.
Document what the system does.
Measure what it changes.
Be honest about what it cannot do.
That is not anti-AI. That is how AI becomes useful.
It would be easier to write only about the exciting side. The productivity gains. The creative acceleration. The new tools. The business opportunities. The strategic upside. The transformation stories. The future-of-work language that always sounds good in a conference room.
Some of that is real. But the uncomfortable side is real, too.
The hallucinations are real. The copyright fights are real. The governance gaps are real. The emotional manipulation risks are real. The data exposure issues are real. The agent failures are real. The executive misunderstandings are real. The vendor incentives are real. The public confusion is real.
Ignoring those issues does not make someone optimistic. It makes them careless.
I do not want a careless AI future. I want a useful one. A productive one. A defensible one. One where the technology improves work instead of simply accelerating mistakes. One where companies adopt AI because they understand it, not because they fear being seen as behind. One where governance is not treated as the enemy of innovation, but as the condition that allows innovation to scale responsibly.
So no, I am not against AI. I am against the nonsense around AI.
I am against the shortcuts, the slogans, the fake certainty, the performative urgency, the bad incentives, the magical thinking, and the people who treat serious questions as personal attacks on the future.
AI is too important for that.
It deserves scrutiny. It deserves discipline. It deserves better leadership. It deserves better governance. It deserves more precise language than “AI-powered” and more serious thinking than “use agents or die.”
My work has never been about rejecting AI. It has been about separating the technology from the theater around it.
Because the future will not be built by people who clap the loudest. It will be built by people who know what they are doing.