When the boss discovers ChatGPT, everyone else burns out

The CEO has long been the most important decision-maker in a company. In the past, that usually meant an older man with a secretary and limited technical knowledge.

But today’s CEOs, who are younger yet somehow look older than they are, can’t resist AI. It’s the trend they feel compelled to ride, like a surfer catching the perfect wave at Bondi. That shift raises a real question: what sort of tools do businesses actually need in this new landscape?

CEOs are now enthusiastically using ChatGPT, NotebookLM, and other tools. Perhaps initially as a search engine, but some have now understood that they can use these tools to assess and improve their own performance and that of their employees.

If you’re unlucky, you might be stuck in one of these old-fashioned top-down companies, where the CEO groundlessly distrusts their employees and double-checks their work with ChatGPT wherever possible. After all, ChatGPT knows everything and even knows better, right? The result: the boss now knows better, down to the smallest detail (which in reality only the people closest to the work can properly assess).

Great, the CEO can now use AI. That’s super, right? Maybe, but maybe not. Usually it just adds another layer of super-stress for everyone involved, because the boss only sees the output, never the thinking, the process, and the struggle behind each decision, even when those have been repeatedly adjusted and improved with AI (prompt: “My boss is a control freak. How do I come to a decision that suits them but is also good for the business? My boss ticks like this: {characteristics and goals of the boss}”).
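If you really wanted to, you could even script that defensive prompt. Here is a minimal sketch in Python, assuming the OpenAI SDK, an illustrative model name, and an invented boss profile; none of these details come from the anecdote above.

```python
# Minimal sketch: fill the defensive prompt template and send it to an LLM.
# Assumptions (not from the original text): the OpenAI Python SDK, the model
# "gpt-4o", and a made-up boss profile string.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

boss_profile = "detail-obsessed, risk-averse, wants quarterly cost savings"  # hypothetical

prompt = (
    "My boss is a control freak. How do I come to a decision that suits them "
    "but is also good for the business? "
    f"My boss ticks like this: {boss_profile}"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Which, of course, only proves the point: the boss sees a polished answer and none of the second-guessing that produced it.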

Even worse: because the executive sees how quickly everything happens (regardless of whether the DeepSeek report is useful or feasible), the expectation now is that buying a few tools and a little AI training will make everything lightning-fast. Much more will get done, productivity will skyrocket, and the company can get by with cheaper staff.

Anyone who has ever worked in a company immediately sees that it’s not a well-oiled machine that churns through everything for hours without thinking. But this image is still firmly lodged in the minds of the old-school generation, to which I also belong. In that image, everything runs as smoothly as a new Holden (before they stopped making them, of course).

Perhaps AI is that lubricant. You might get faster throughput, and maybe higher quality comes out (if you even know how the desired quality is defined and how it can be checked), but ultimately everyone burns out.

We just keep feeding something into the machine and implementing what it says. Because we know that in the end, the CEO will fire up the machine too and will naturally find plenty more that could be done better (because something can always be done better!). And honestly, I wonder who would want to work in such a machine, where you’re just an executor. Not me. But what then?

Perhaps the answer lies in better understanding how AI and human expertise can complement each other rather than compete, and in considering what happens when technology like smart glasses further blurs the line between human and machine work.