The AI Noise Machine

Written by: Theunis Duminy

Date: 2026-03-09

AI · Technology · Ethics


I have been fixated on AI's evolution since reading Tim Urban's essay [1] on the subject in 2015. Since then, I have followed it with some mix of obsession, curiosity, and dread, trying to understand not just what it can do, but what kind of world it leads to.

When I co-founded an AI consultancy, I spent a month wondering whether I should. Was I helping accelerate a technology whose second- and third-order effects most people were not seriously reckoning with? I decided that, if this was happening anyway, I would rather help organisations adopt it carefully than leave them to the hype merchants. The discomfort never fully left.

So I am not writing this as an AI sceptic, or as someone who thinks the moment is manufactured. I am writing as someone who thinks this technology deserves serious attention, and who has watched the discourse around it become steadily less serious by the month.

The argument over whether AI is “overhyped” misses the point. The technology is already changing how software gets written, how organisations make decisions, and how people imagine the future of their own work. What is broken is the discourse around it.

I am not worried that too many people are paying attention to AI. I am worried that the people profiting from inflating it are making reality harder to see clearly.

The client who has already been lied to

A surprising amount of my job running an AI consultancy begins before any implementation does. It begins with deprogramming.

An executive sits across from me. Someone smart, very excited, and already carrying a set of beliefs about what AI will do for their business. Those beliefs rarely come from direct experience. They come from conference stages, vendor decks, viral LinkedIn posts, and people who have become “AI experts” in a few weeks and now sell certainty for a living.

By this point I usually know where the conversation is going. The executive wants an AI employee. An autonomous system. A content machine that produces endlessly and cheaply. A model trained on proprietary data that will unlock some durable competitive advantage. Often it is borrowed language attached to a fear of being left behind or a mandate from equally uninformed investors.

My job, before anything useful can happen, is to explain—carefully, and without condescension—that much of what they have been told is either technically false, wildly premature, or irrelevant to the actual constraints of their business.

The noise therefore doesn't just waste time in meetings. It distorts the client relationship before the work even starts. It sets expectations reality cannot meet, then makes honest, bounded work look like a failure of imagination. The sensationalist wins the pitch. The pragmatist inherits the cleanup.

A company is told it can replace judgement with automation, compress complexity into a chatbot, or fine-tune its way into defensibility. Budget follows before anyone asks what data exists, what process is changing, or who will own the system once it is live. The conversation is already downstream of a bad premise.

AI hype inflates expectations and changes the order in which organisations think. Instead of starting with the problem, they start with the mythology.

That is how the noise machine reproduces itself. It creates bad expectations, produces disappointing outcomes, and turns the disappointment into evidence that AI was never useful in the first place.

The mythology machine

That executive did not invent those expectations. They are the downstream product of a mythology machine.

A few weeks ago, a developer named Peter Steinberger released an open-source AI agent framework that went viral almost overnight. It became one of the fastest-growing repositories in GitHub history [2]. Sam Altman called him a genius. Within days, he had joined OpenAI [3].

The project let a local AI agent run on a user's machine and automate personal tasks through tools like messaging apps. But the creators were clear about its limits. One maintainer warned in the documentation that if you could not use the command line, the project was too dangerous for you to run safely.

That caution did not travel far. Caveats, it turns out, are not very viral.

Within days, the framework was being described online as an autonomous AI employee, evidence that AGI was near, or a system every enterprise needed to deploy immediately. None of those claims were true. A useful piece of engineering had been pushed through the social machinery and turned from a tool into a myth.

This is how the noise machine operates. A careful thinker publishes something bounded and useful. The opportunist swarm strips out the qualifications, amplifies the most dramatic interpretation, and feeds it into the cycle. Investors ask questions. CEOs summon their teams. The original work circulates in a form its creator barely recognises.

The field guide

If you spend enough time around AI, certain phrases begin to function less as descriptions than as red flags. They sound sophisticated, but often conceal a gap between language and reality. That gap is where bad decisions get made.

What follows is a glossary of the noise. The phrases, framings, and assumptions that have become load-bearing walls of the hype cycle. Each one sounds reasonable in isolation. Together, they form the vocabulary of a mythology that is making it harder for organisations, builders, and the public to think clearly about what AI actually is and is not.

AI-native

When automation becomes the starting assumption

This is the phrase I distrust most. In principle, it should mean using AI where it creates real leverage. In practice, it often means starting from the assumption that more AI, in more places, is inherently better. The incentive shifts from solving real problems to shoehorning AI into every process.

Picture the loop: AI replies summarised by AI to write new AI drafts, which are then summarised by AI for the next reply. Every layer of thinking is mediated by a system processing the output of another system. Some call it efficiency; I call it cognitive decline. People lose the vocabulary that comes from struggling to solve a problem before trying to automate it. That matters, because you cannot meaningfully direct a tool if you do not understand the work it is supposed to assist.

There is value in struggling through the thing yourself, even if it is slower, even if it is less efficient, because that is how judgement forms. You learn what is hard, what is ambiguous, where the edge cases live, and what “good” even looks like. Without that, the institution does not become AI-native. It becomes stupid.

AI employee

The metaphor doing too much work

This is one of the most misleading phrases in circulation because it smuggles in a false image of what an employee is. I have had people say, in complete seriousness, that they want to build an “AI employee for sales”. Sales, like any other function, is not just a handful of deterministic tasks.

It is a messy bundle of judgements, interpersonal signals, objections, timing decisions, follow-ups, and context switches. You can automate pieces of that. You can build systems that improve lead qualification, outreach, and call summaries. But you cannot build what makes a good employee good: exercising judgement when the script breaks.

We’re training our own models

Borrowed technical prestige

A great deal of AI discourse now relies on terms that sound precise to non-technical people and are used carelessly enough to inflate the sophistication of what is actually being built: RAG, fine-tuning, model training, systems that “learn from experience”. The gulf between how these terms sound and what they usually mean is enormous.

A handful of companies are training frontier models in the meaningful sense. Almost everyone else is prompting, wrapping, grounding with retrieval, orchestrating workflows, or in some cases doing limited fine-tuning at real cost. That's fine! Some of that work can be useful. The problem is pretending it is something bigger. “We’re training our own models” is often a prestige claim masquerading as a technical one.
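To make the gap concrete, here is a minimal sketch of what “grounding with retrieval” frequently amounts to in practice. Everything in it is illustrative: `embed` is a dummy stand-in for a hosted embedding API, the documents are invented, and the final prompt would be sent to an off-the-shelf model rather than anything you trained.

```python
# A minimal RAG sketch. Note what is absent: no training loop, no gradient
# updates, no proprietary weights. Documents are embedded, the closest ones
# are retrieved by cosine similarity, and their text is pasted into the
# prompt of a frozen, off-the-shelf model.
import hashlib
import numpy as np

def embed(text: str) -> np.ndarray:
    """Dummy stand-in for a hosted embedding API: deterministic unit vectors."""
    seed = int(hashlib.sha256(text.encode()).hexdigest()[:8], 16)
    v = np.random.default_rng(seed).standard_normal(64)
    return v / np.linalg.norm(v)

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by cosine similarity to the query; keep the top k."""
    q = embed(query)
    scores = [float(q @ embed(d)) for d in docs]
    best = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in best]

def grounded_prompt(query: str, docs: list[str]) -> str:
    """The 'grounding': retrieved text stuffed into a prompt. No weights change."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 14 days.",
    "Our office is in Cape Town.",
    "Support is available on weekdays.",
]
print(grounded_prompt("How long do refunds take?", docs))
```

This is often genuinely useful engineering. It is just not “training our own models”, and the distance between the two is the whole point of this entry.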

AI generated content

The texture of slop

The problem with fully AI-generated content is not simply that some of it is bad. It is that more and more of it is good enough in exactly the same way. People feel it before they can name it: the same cadence, the same polished tone, the same bloodless clarity, the same frictionless confidence. It reads cleanly and leaves nothing behind.

This is a cultural problem, not just a quality one. When people say AI means anyone can now be a writer or artist, they collapse craft into output. But art was never just output. It comes from taste, constraint, risk, and a point of view that costs something to hold. Synthetic fluency can imitate the surface while eroding the presence that gives it force. As more of the web fills with flattened language and generated sameness, human specificity becomes more valuable, not less.

Same work with fewer people

The fantasy of frictionless efficiency

This is one of the most seductive claims in the whole AI economy because it sounds managerial, rational, even prudent. Do the same work with fewer people. Maintain output. Cut cost. Increase leverage. It assumes that labour is nothing more than present-tense throughput, and that is a dangerously thin view of how organisations actually function.

Junior roles are not just a source of cheap throughput. They are where people learn the shape of the work, develop judgement, and absorb institutional knowledge. Remove too much of that layer and the cost does not disappear. It returns elsewhere, as seniors are pulled into supervision, exception-handling, and quality control for systems that still cannot run alone.

Meanwhile the next cohort never gets the formative experience that would have made them capable later on. If your answer to greater productive capacity is simply that fewer people will be needed, what exactly is your theory of contribution, adulthood, and economic participation? What is the plan for your own children? Efficiency that cannibalises capability is not strategy. It is short-term extraction dressed up as progress.

AI is improving exponentially

Borrowed credibility from a trend worth questioning

This one sounds like rigorous thinking. It is often the opposite. Frontier labs appear to be operating under economics that are shaky at best. The traditional compute scaling laws that made the exponential story credible are showing clear diminishing returns. The energy demands implied by continued scaling are enormous, with no clear solution in sight.

None of this means the models stop improving. The current generation is capable in ways that matter. But that is still a long way from making “exponential improvement continuing indefinitely” a true claim. It is a rhetorical move: borrowed credibility from a trend that may already be flattening, deployed to shut down questions that deserve answers. Ask someone making this claim to explain the scaling hypothesis, the current wall in synthetic data, or what happens to inference costs at the next capability tier. Most cannot. The exponential is doing a lot of work.
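To see why the word carries more rhetoric than rigour, it helps to run the arithmetic. Published scaling results describe loss as a power law in compute, roughly L(C) ∝ C^(−α) for some small α. The exponent below is invented for illustration, but any small α produces the same shape: each tenfold increase in compute buys a smaller absolute improvement than the last.

```python
# Illustrative power-law scaling. The exponent is made up for this sketch,
# not a measured constant; the point is the shape of the curve, not the numbers.
ALPHA = 0.05

def loss(compute_flops: float) -> float:
    """Loss as a power law in training compute: L(C) = C ** -ALPHA."""
    return compute_flops ** -ALPHA

prev = loss(1e20)
for exp in range(21, 25):
    cur = loss(10.0 ** exp)
    print(f"compute 1e{exp}: loss {cur:.4f}, gained {prev - cur:.4f} over the last 10x")
    prev = cur
```

Every line costs ten times the one before and delivers a smaller gain than the one before. That is what the power-law literature actually describes, and it is not what “improving exponentially” suggests to a boardroom.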

What the noise machine actually costs

None of this is an argument that AI is overhyped as a category. Something real is happening. The mistake is thinking that because the transformation is real, the surrounding noise is harmless. The noise makes it harder to distinguish a real inflection point from a manufactured one, and that distinction becomes expensive.

You can already see it. I recently asked a developer at a large technology company whether they used AI coding tools. They did not. Their first exposure came during peak hype, when social media was declaring software engineering dead. If your introduction to a tool is “this will replace you” rather than “this may help you”, you approach it defensively. The gap between the fantasy and the reality hardened into scepticism. The noise creates the conditions for its own rejection.

The opposite failure is just as dangerous. Organisations that buy too deeply into being AI-native are making a larger bet than they realise. If too much junior work disappears into agents and automation, you don't simply gain efficiency. You interrupt the process by which people become capable. Judgement is built through repetition, mistakes, and gradual contact with reality. Deny a generation that formative experience and the short-term productivity gain may cost long-term institutional competence.

The broader danger is that the people most exhausted by AI hype may be the least able to recognise the real changes when they arrive. After enough inflated promises and theatrical demos, many will file the whole subject under noise, and meet the genuine shifts unequipped, caught between a mythology they rightly rejected and a reality they failed to prepare for.

That is what the noise machine actually costs. Forget wasted budgets, failed pilots, or another executive embarrassed by a slide deck. It costs attention, trust, and the public capacity to distinguish between a sensationalist story and a slower-moving generational change.

The noise machine is not going away. The incentives are too strong and the vocabulary too easy to counterfeit. I will still be sitting across from executives who arrive pre-loaded with mythology, and my job will still begin with deprogramming. That is not going to change.

What can change is who we listen to. My part is small: bring pragmatism to the conversation, refuse to inflate, do bounded work honestly. That is how I stop feeding it.

How will you?

AI is too consequential for the loudest voices to be the least honest.

Footnotes

  1. https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

  2. https://fortune.com/2026/02/19/openclaw-who-is-peter-steinberger-openai-sam-altman-anthropic-moltbook/

  3. https://www.cnbc.com/2026/02/15/openclaw-creator-peter-steinberger-joining-openai-altman-says.html


Thanks to Mignon Duminy for reading drafts of this.
