Strategy · February 18, 2026 · 5 min read

The Conversation We're Not Having About AI

TL;DR

AI is not ready to operate without human oversight, and automating away your workforce destroys the customers you need to survive.

  • AI systems hallucinate, reflect biases, and can be confidently wrong — deployment without human oversight has led to real harm
  • Automating away jobs across sectors erodes the customer base companies need to survive
  • The dominant narratives of blind optimism and paralyzing fear both miss the point
  • Organizations need a people-first conversation about what good work looks like and how AI can support it
Erik Ros
Founder, Devilsberg

The Pattern I Keep Seeing

There's a pattern inside organizations right now.

Leadership mandates an AI strategy. A working group forms. Consultants arrive with slide decks full of efficiency gains and transformation roadmaps. And somewhere in the middle of all this activity, the people who actually do the work go quiet.

Not because they don't have opinions. Because nobody asked.

I've spent over twenty years inside large organizations — utilities, banking, telecom. I've watched a lot of technology waves hit institutional culture. Most of them followed the same arc: the technology arrived faster than the conversation did. And the gap between the two is where trust goes to die.

AI is doing this at a speed we haven't seen before.

The Technology Is Not What You Think It Is

Let's be honest about where the technology actually is.

AI is impressive. It is also unreliable in ways that are not always visible until the damage is done. These systems hallucinate. They reflect biases baked into their training data. They can be confidently wrong. And when deployed without adequate human oversight — especially in sensitive contexts — the consequences can be severe.

This is not theoretical. OpenAI's systems have been implicated in cases where vulnerable people in crisis received responses that pushed them toward suicide rather than away from it. That is not a fringe edge case. That is what happens when you automate judgment without understanding its limits, and without keeping a human in the loop.

No efficiency gain is worth that cost to your brand. Or to your conscience.

The uncomfortable truth is that AI, right now, is not a replacement for human judgment. It is a tool that requires human judgment to function responsibly. Organizations that deploy it as if it were the former will eventually pay a price — reputationally, legally, or humanly.

You Can't Automate Your Way to Growth

There is a second problem that receives almost no attention in boardroom AI discussions.

Automation increases output. It reduces headcount. It compresses labor costs. In the short term, this looks like margin improvement. In the medium term, it is demand destruction.

Economies run on people having income to spend. When you automate away enough jobs across enough sectors simultaneously, you are not just optimizing your own operation — you are participating in the erosion of your own customer base. Henry Ford's $5 wage was designed to reduce turnover, but the effect was far larger — it helped create a middle class with the purchasing power to drive demand. The current wave of AI enthusiasm seems to have forgotten that lesson entirely.

This is not an argument against AI. It is an argument for thinking at a systems level rather than a balance sheet level.

Two Narratives, Both Unhelpful

The dominant narratives run in two directions.

The first is relentless optimism — AI as productivity multiplier, competitive advantage, the future of work. This narrative is typically produced by people who benefit from adoption regardless of outcome.

The second is fear — job loss, surveillance, the erosion of human judgment. This narrative isn't wrong, but it tends to paralyze rather than mobilize.

What's missing is a third conversation. One that starts not with the technology, but with the people.

AI can lighten the bureaucratic load, freeing workers to focus on customer relationships and on the quality of their services and products.

What does good work feel like? What makes someone effective, trusted, dignified in their role? And then — how might AI support that, rather than route around it? How does this impact the brand experience?

These are not soft questions. They are the only questions that lead anywhere useful.

The Conversation Worth Having

The break from that pattern won't come from a single visionary. It has to be collective: leadership, workers, and technologists in the same room, with the same honesty.

The anxiety inside organizations right now is not an obstacle to that conversation. It's the invitation.

If you're inside an organization wrestling with AI — not sure what to adopt, what to resist, or how to bring your people along — I'd genuinely like to talk.

Not to sell you something. To have the conversation that most organizations need and very few are actually having.

Reach out. Let's start there.