Despite AI: James, Sarah and David are staying. Only Liam needs to worry
James is the boss. He’s worked his way up over the years and has been the decision-maker for six years. For a big project, he needs information by tomorrow. He calls Sarah, with whom he has built a trusted working relationship over the years and on whom he can rely to deliver the desired information accurately, correctly, and comprehensibly. As usual, James has no time. He throws only a few keywords at Sarah, like ordering a flat white with one sugar. But because Sarah has worked with him for so long, she knows almost exactly what he wants and how he needs it. Even in a world increasingly shaped by AI, this kind of human expertise remains irreplaceable.
Sarah thinks it over and looks up some information, but she needs support from the specialist department. So she calls David. David is the undisputed expert on what she needs. She goes to his office and begins to explain that James needs something, but David quickly interrupts her: “Yes, yes, I know exactly what he needs. I’ll prepare it for you. How would you like it, and in what format?”
A few hours later, Sarah receives the information, polishes it a little, and sends it on to James.
Why James can rely on Sarah
James can rely on both Sarah and David knowing exactly what they’re talking about. He only needs to glance at the data and information, skim through it, copy it into his PowerPoint presentation, and go to his important meeting tomorrow. This is a completely ordinary process that happens everywhere, all the time.
What happens when AI comes into play? Are all those involved in danger of soon becoming unemployed? Of course, one could say that artificial agents and Large Language Models are now taking over everything. But that’s exactly the story I don’t want to tell.
This is particularly relevant as we see new technologies like AI-based smart glasses entering the workplace, potentially changing how we interact with information and expertise.
James, Sarah and David worry about their future
For many people who don’t really know their field, or who want to learn something new, the results of AI generation look like pure magic. Even the first draft from a ChatGPT prompt reads impressively. But how does it actually work? That’s why I started with the example of James, Sarah, and David. They are nothing other than human versions of AI agents and Large Language Models.
In our case, James is the human who needs the information to make far-reaching decisions. He is responsible for whether the decision turns out to be good or not. Even in times of artificial intelligence, there will always be a James, because a machine will never be responsible for the decision and certainly won’t be liable for its far-reaching consequences. How could it be? I’ve never seen a machine behind bars.
At the moment, James relies on Sarah, and rightly so, because he trusts her. He knows she delivers. Sarah could be a project manager, a middle manager, an assistant, or anything else. Anyone with employees as good as Sarah can rely on them. If James were to throw just a few keywords at Liam, who started as an intern only two months ago, he would never get his information.
Liam is super smart and was top of his class at Melbourne Uni, but he has little experience and limited people skills, and he would have no idea what James wanted from him. He would also underestimate David, because Liam believes he already knows better. Only with age do you realise how much you will never learn.
Is there an AI agent in Sarah?
In our case, Sarah is a kind of AI agent. She looks at what James needs, translates it for herself, and then works out where she can get it and who can help her. She understands the context, and that is the most important thing for artificial intelligence: without context, none of this can work, in an AI system or in a normal company. So she knows where to get the information.
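To make the analogy concrete, here is a minimal sketch of such an agent loop in Python. Every name in it is hypothetical; it isn’t any particular framework, just the shape of what Sarah does: interpret a terse request, add context, route it, and polish the result.

```python
# Minimal sketch of an "agent" loop like Sarah's: interpret a terse
# request, add context, pick a source, and delegate. All names are
# hypothetical stand-ins, not a real framework's API.

def handle_request(keywords: str, context: dict) -> str:
    # 1. Translate the boss's keywords into a full task description.
    task = f"Prepare {keywords} for {context['requester']} by {context['deadline']}"

    # 2. Decide where the information lives: own notes, or an expert.
    if keywords in context["own_knowledge"]:
        answer = context["own_knowledge"][keywords]  # Sarah retrieves it herself
    else:
        answer = ask_expert(task)                    # Sarah walks to David's office

    # 3. Check and polish before handing it back.
    return polish(answer)

def ask_expert(task: str) -> str:
    # Placeholder for a call to a specialist model or colleague.
    return f"Expert answer for: {task}"

def polish(text: str) -> str:
    # Placeholder for the final consistency check.
    return text.strip()

print(handle_request("Q3 figures", {
    "requester": "James",
    "deadline": "tomorrow",
    "own_knowledge": {},
}))
```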
Sarah finds some of the information by herself. If Sarah were working in a RAG (Retrieval-Augmented Generation) system, she would be both the “retriever” and the “generator”.
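In code, a RAG pipeline has exactly those two roles. Here is a toy sketch; the two-entry corpus and the word-overlap scoring are stand-ins for the vector database and embedding model a real system would use:

```python
# Toy Retrieval-Augmented Generation: a retriever finds relevant
# documents, a generator answers using them. The corpus and the
# keyword-overlap scoring are stand-ins for a real vector store.

CORPUS = {
    "q3-report": "Q3 revenue grew 12% year over year.",
    "hr-policy": "Annual leave must be requested two weeks ahead.",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    # Score each document by word overlap with the query (toy metric).
    words = set(query.lower().split())
    scored = sorted(CORPUS.values(),
                    key=lambda doc: len(words & set(doc.lower().split())),
                    reverse=True)
    return scored[:k]

def generate(query: str, docs: list[str]) -> str:
    # A real system would pass this augmented prompt to an LLM;
    # here we just show what the generator would receive.
    context = "\n".join(docs)
    return f"Answer '{query}' using:\n{context}"

print(generate("How did revenue do in Q3?", retrieve("revenue Q3")))
```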
In our example, David is a well-trained Large Language Model: think of ChatGPT or Claude, but also of specialised models that you can download for free and run on your own computer. David would be more like a local model that no one else has access to. He has been feeding his personal Large Language Model with knowledge and experience for 30 years. He’s no handyman; he probably has to call an electrician to change a socket. But in his field, he knows his stuff inside out.
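Running a model locally really is that accessible. As a minimal sketch, assuming the Hugging Face transformers library and PyTorch are installed, a small open model such as GPT-2 downloads once and then runs entirely on your own machine:

```python
# A small open model running locally, in the spirit of "David as a
# local model". Requires: pip install transformers torch
from transformers import pipeline

# Weights download on first run; after that, everything stays on-device.
generator = pipeline("text-generation", model="gpt2")

result = generator("The most important input for any AI system is",
                   max_new_tokens=20)
print(result[0]["generated_text"])
```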
Is David perhaps an LLM himself?
And not only that: David doesn’t just know his field, he is also able to apply it to his business, because the company needs David’s expertise to create products and develop other things. As an expert, David has seen and heard it all. He attends all the conferences and reads all the professional papers; he simply knows his stuff. The company sent him to Sydney, Melbourne, and even Singapore last year for various symposiums. And in fact, life in the company is great for him, because most requests are pretty trivial.
When Sarah comes to his office and asks her question, all the synapses in his head light up and he immediately knows what she needs. That is his experience; that’s what he has trained himself to do, and he keeps at it. This makes him a very valuable Large Language Model for his company. He doesn’t even need ChatGPT for it, because although ChatGPT also covers this expertise, David simply has far deeper knowledge in his domain than the generic ChatGPT, which has averaged together the knowledge of all experts.
David knows exactly what is relevant and what is important. That is his value, and it will remain so, at least as long as James is his boss. Should it ever be Liam, things will get difficult. Liam probably prefers to rely on new technologies and trends.
What David knows, Liam will never know. He only has access. Is that enough?
Sometimes David even looks things up in ChatGPT and finds new angles that he can then think about further. That’s how he keeps training his model. When Sarah starts asking her questions, the wheels in his head begin to turn. She doesn’t even need to finish the question; he’s already forming an answer.
And that is precisely the defining characteristic of Large Language Models: because they have absorbed so much, it is easy for David to predict what is likely to come next. He can then gather his knowledge and either hand it to Sarah as a long list or condense it into a short paper, one that even James understands, containing a summary and the most important data.
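“Knowing what’s likely to come next” is literally what a language model computes: a probability for every possible next token. A small sketch with GPT-2 (again assuming transformers and torch are installed) makes this concrete:

```python
# A language model assigns a probability to every possible next token.
# Requires: pip install transformers torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Sarah doesn't even need to finish the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the next token
probs = torch.softmax(logits, dim=-1)

# The five continuations the model considers most likely.
top = probs.topk(5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p.item():.1%}")
```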
Condensing things that way took David a lot of time in the past, but now he can use artificial intelligence to do it faster and more accurately. He’s not necessarily a great writer, but artificial intelligence helps him with that; he simply uses a different model for the writing. He remains the main model, though, and he will stay that way.
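That division of labour, an expert model for the substance and a general model for the wording, is easy to sketch. Both functions below are placeholders for whichever models you actually run; this is not any specific product’s API:

```python
# Two-model division of labour: the domain expert supplies substance,
# a general writing model polishes the prose. Both functions are
# hypothetical stand-ins for real model calls.

def expert_model(question: str) -> str:
    # Stand-in for David: a specialised, locally run model.
    return "Raw facts: margin 14%, churn down 2 points, pipeline stable."

def writing_model(draft: str) -> str:
    # Stand-in for a general-purpose model used only for wording.
    facts = draft.removeprefix("Raw facts: ")
    return f"Executive summary: {facts}"

print(writing_model(expert_model("How is the business unit doing?")))
```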
Sarah now takes everything and checks it again. David, despite his expertise, is sometimes a bit sloppy, a sloppy genius, so to speak, so she looks at whether he has done everything right, whether the commas are in the right place, and whether it is all consistent. That is also her task, as an assistant and as an AI assistant. Then she gives it back to James.
James, Sarah, David: With AI knowledge, you’re even more valuable
James is happy: it was quick, and it was exactly what he needed. I’m not saying that James now works only with AI agents or ChatGPT and so on. On his own, he would never have found this out, because much of the data David presented isn’t in ChatGPT at all, but in other databases and on the company’s intranet, which only David has access to and knows how to navigate, like finding a shortcut through the back streets of Brisbane.
But the example already makes clear that processes can become faster in many places in a company. In many places, AI can turn text, data, and ideas into knowledge and prepare it so that the people who remain the initiators and decision-makers stay at the wheel, but only if they understand exactly what AI can do and what David should continue to do.
This principle contrasts sharply with what happens when the boss becomes enamoured with AI tools without understanding their limitations or the continued need for human expertise.