The AI Noise Machine

Written by: Theunis Duminy

Date: 2026-03-09

Tags: AI, Technology, Ethics


I have been fixated on AI's evolution since reading Tim Urban's essay [1] on the subject in 2015. Since then, I have followed it with some mix of obsession, curiosity, and trepidation, trying to understand not just what it can do, but what kind of world it leads to.

When I co-founded an AI consultancy, I spent a month wondering whether I should. Was I helping accelerate a technology whose second- and third-order effects we're not seriously reckoning with? I decided that if this was happening anyway, I would rather help organisations adopt it carefully than leave them to the hype merchants. The discomfort never fully left.

So I am not writing this as an AI sceptic, or as someone who thinks the moment is manufactured. I am writing as someone who thinks this technology deserves serious attention, and who has watched the discourse around it become steadily more theatrical.

The argument over whether AI is "overhyped" misses the point. The technology is already changing how organisations make decisions and how people imagine the future of their own work.

The attention AI receives is warranted. What bothers me is that the discourse is being dominated by people who profit from inflating it and obscuring reality.

The client who has already been lied to

A surprising amount of my job running an AI consultancy begins before any implementation does. It begins with deprogramming.

An executive sits across from me. They are already carrying a set of beliefs about what AI will do for their business. Those beliefs rarely come from direct experience. They come from conference stages, vendor decks, viral social media posts, and from "AI experts" (according to their recently updated LinkedIn bios) who now sell silver bullets for a living.

By this point I know where the conversation is going. The executive wants an AI employee. An autonomous system. A content machine that produces endlessly and cheaply. A model trained on proprietary data that will unlock some durable competitive advantage. Often it is borrowed language attached to a fear of being left behind or a mandate from equally uninformed investors.

My job, before anything useful can happen, is to explain without condescension that much of what they have been told is either technically false, wildly premature, or irrelevant to the actual constraints of their business.

The noise does more than waste time in meetings. It distorts the client relationship before the work even starts. It sets expectations reality cannot meet, then makes honest, bounded work look like a failure of imagination. The sensationalist wins the pitch. The pragmatist inherits the cleanup.

Companies are told that automation can replace judgement and that a chatbot can handle human complexity. The budget appears before anyone asks useful questions. What data exists? What process is meant to change? Who will own the system once it is live? The whole conversation starts in the wrong place.

AI hype inflates expectations and changes the order in which organisations think. Instead of starting with the problem, they start with the mythology.

That is how the noise machine alienates people. It creates bad expectations, produces disappointing outcomes, and turns those failures into evidence that AI was never useful in the first place.

The mythology machine

That executive did not invent those expectations. Understanding where they came from requires watching the machine operate in real time.

In February 2026, a developer named Peter Steinberger released an open-source AI agent framework that went viral almost overnight. It became one of the fastest-growing repositories in GitHub history [2]. Sam Altman called him a genius. Within days, he had joined OpenAI [3].

The project let a local AI agent run on a user's machine and automate personal tasks through tools like messaging apps. The creators were explicit about its limits. One maintainer warned that if you could not use the command line, the project was too dangerous for you to run safely.

Those caveats did not survive into the articles and breathless threads declaring that autonomous AI had arrived. You see, caveats are not very viral.

Within a few hours, the framework was being described online as an autonomous AI employee, evidence that AGI was near, or a system every enterprise needed to deploy immediately. These claims were exaggerations at best, and intentionally deceptive at worst. A useful, bounded piece of engineering had been fed through the machinery and turned into myth.

This is how the noise machine operates. A careful thinker publishes something bounded and useful. The opportunist swarm strips out the qualifications, amplifies the most dramatic interpretation, and feeds it into the cycle. Investors ask questions. CEOs summon their teams. The original work circulates in a form its creator barely recognises.

The field guide

If you spend enough time around AI, certain phrases begin to function less as descriptions than as red flags. They sound sophisticated, but often conceal a gap between language and reality. That gap is where bad decisions get made.

What follows is a glossary of the noise. The phrases, framings, and assumptions that have become load-bearing walls of the hype cycle. Each one sounds reasonable in isolation. Together, they form the vocabulary of a mythology that is making it harder for organisations, builders, and the public to think clearly about what AI actually is and is not.

AI-native

When automation becomes the starting assumption

This is the phrase I distrust most. In principle, it should mean using AI where it creates real leverage. In practice, it often means starting from the assumption that more AI, in more places, is inherently better. The incentive shifts from solving real problems to shoehorning AI into every process.

You find AI replies summarised by AI to write new AI drafts, which are then summarised by AI for the next reply. Every layer of thinking is mediated by a system processing the output of another system. Some call it efficiency; I call it cognitive decline. People lose the vocabulary that comes from struggling to solve a problem before trying to automate it. That matters, because you cannot meaningfully direct a tool if you do not understand the work it is supposed to assist.

There is value in struggling through the thing yourself, even if it is slower, even if it is less efficient, because that is how judgement forms. You learn what is hard, what is ambiguous, where the edge cases live, and what “good” even looks like. Without that, the institution does not become AI-native. It becomes stupid.

AI employee

The metaphor doing too much work

This is one of the most misleading phrases in circulation because it smuggles in a false image of what an employee is. I have had people say, in complete seriousness, that they want to build an "AI employee for sales". What does that mean?

Sales, or any other role, is a messy bundle of judgements, interpersonal signals, objections, timing decisions, follow-ups, and context switches. You can automate pieces of that. You can, as an example, build systems that improve lead qualification, outreach, and call summaries. But you cannot build what makes a good employee good: exercising judgement when the script breaks.

We’re training our own models

Borrowed technical prestige

A great deal of AI discourse now relies on terms that sound precise to non-technical people and are used carelessly enough to inflate the sophistication of what is actually being built. For example, we hear talk of RAG, fine-tuning, model training, and systems that "learn from experience". The gulf between how these terms sound and what they usually mean is enormous.

A handful of companies are training frontier models in the meaningful sense. Almost everyone else is prompting, wrapping, grounding with retrieval, orchestrating workflows, or in some cases doing limited fine-tuning at real cost. That is fine; some of that work can be useful. The problem is pretending it is something bigger.
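To make the gap concrete: most systems described as "trained on proprietary data" work roughly like the sketch below. No model weights are touched; relevant text is retrieved and pasted into a prompt. This is a simplified illustration, not any particular vendor's implementation, and the documents and the keyword-overlap scoring are hypothetical stand-ins for a real retrieval pipeline.

```python
# A minimal sketch of "grounding with retrieval": nothing is trained.
# Documents are ranked against the query and the winners are inserted
# into the prompt that gets sent to an off-the-shelf model.

def retrieve(query, documents, top_k=2):
    """Rank documents by naive keyword overlap with the query."""
    query_words = set(query.lower().split())
    def overlap(doc):
        return len(query_words & set(doc.lower().split()))
    return sorted(documents, key=overlap, reverse=True)[:top_k]

def build_prompt(query, documents):
    """Assemble a prompt that grounds the model in retrieved context."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical company knowledge base.
docs = [
    "Refunds are processed within 14 days of a return request.",
    "Our office is open Monday to Friday, 9am to 5pm.",
    "Shipping to most regions takes 3 to 5 business days.",
]

prompt = build_prompt("How long until refunds are processed?", docs)
```

Real systems swap the keyword overlap for embedding similarity and a vector index, but the shape is the same: retrieval plus prompt assembly. Calling this "training our own model" is exactly the borrowed prestige the phrase trades on.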

AI-generated content

The texture of slop

The problem with fully AI-generated content is deeper than quality. You begin to notice that it is good enough in exactly the same way. People feel it too, even if they cannot name it. They see the same cadence, the same polished tone, the same bloodless clarity, the same frictionless confidence.

When people say AI means anyone can now be a writer or artist, they collapse craft into output. Art cannot be reduced to output. It comes from taste, constraint, risk, and a point of view that costs something to hold. Synthetic fluency can imitate the surface while eroding the presence that gives it force. As more of the web fills with flattened language and generated sameness, human specificity becomes more valuable, not less.

Same work with fewer people

The fantasy of frictionless efficiency

This is one of the most seductive claims in the AI economy because it sounds rational, even prudent. Do the same work with fewer people. Maintain output and cut costs. It assumes that labour is nothing more than present-tense throughput, and that is a dangerously thin view of how organisations actually function.

Junior roles are where people learn the shape of the work, develop judgement, and absorb institutional knowledge. Remove too much of that layer and the cost does not disappear. It returns elsewhere, as seniors are pulled into quality control for systems that still cannot run alone.

Meanwhile the next cohort never gets the formative experience that would have made them capable later on. If your answer to greater productive capacity is simply that fewer people will be needed, what exactly is your theory of economic participation? Efficiency that cannibalises capability is not strategy. It is short-term extraction dressed up as progress.

AI is improving exponentially

Borrowed credibility from a trend worth questioning

This one sounds like rigorous thinking. It is often the opposite. Frontier labs appear to be operating under economics that are shaky at best. The traditional compute scaling laws that made the exponential story credible are showing clear diminishing returns. The energy demands implied by continued scaling are enormous, with no clear solution in sight.

None of this means the models stop improving. The current generation is capable in ways that matter. That is still far from making indefinite exponential improvement a valid claim. It is a rhetorical move, borrowed credibility from a trend that may already be flattening, deployed to shut down questions that deserve answers. Ask someone making this claim to explain the scaling hypothesis, the current wall in synthetic data, or what happens to inference costs at the next capability tier. Most cannot. The exponential is doing a lot of work.

What the noise machine actually costs

Right now, executives and organisations are commissioning projects built on these premises. They are not stupid. They are operating on the most abundant information the environment has to offer, and that environment has been systematically poisoned.

I recently asked a developer at a large technology company whether they used AI coding tools. They did not. Their first exposure came during peak hype, when social media was declaring software engineering dead. If your introduction to a tool is this will replace you rather than this may help you, you approach it defensively. The gap between the fantasy and the reality hardened into scepticism that no amount of genuine capability will easily undo.

The opposite failure is just as dangerous. Organisations that buy too deeply into being AI-native are making a larger bet than they realise. If too much junior work disappears into agents and automation, you interrupt the process by which people become capable. Judgement is built through repetition and mistakes and contact with reality. Deny a generation that formative experience and the short-term productivity gain may cost long-term institutional competence.

The broader danger is that the people most exhausted by AI hype may be the least able to recognise the real changes when they arrive. After enough inflated promises and theatrical demos, many will file the whole subject under noise, and meet the coming shifts unprepared, caught between a mythology they rightly rejected and a reality they failed to prepare for.

The incentives powering the noise machine are not going away. They are too strong, and the rewards for exaggeration too immediate. The hype will not correct itself. The task, then, is to reject its language, resist its incentives, and speak more plainly about what these systems can and cannot do.

AI is too consequential to be narrated by the loudest and least honest voices.

Footnotes

  1. https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

  2. https://fortune.com/2026/02/19/openclaw-who-is-peter-steinberger-openai-sam-altman-anthropic-moltbook/

  3. https://www.cnbc.com/2026/02/15/openclaw-creator-peter-steinberger-joining-openai-altman-says.html


Thanks to Mignon Duminy, Chris Immelman, and James Rothmann for reading drafts of this.
