
Making Agentic AI a ‘Super User’ Could Be a Massive Strategic Mistake

September 2025, Tom Pepper, Partner
Published on: Management Today




The temptation to delegate decision making to intelligent systems might appear irresistible. But, as leading cyber security expert and UK government advisor Tom Pepper explains, businesses haven’t even begun to consider the critical risks involved.

Agentic AI is widely – and rightly – championed as a technology that will revolutionise the way we do business. The expected impact on capability, capacity, cost reduction, efficiency and productivity is enough to intrigue even the most cautious CEO, but these systems will also transform the decision-making process in ways that could profoundly alienate trusted suppliers, valued customers and powerful regulators.


The attraction of agentic AI is obvious. It is designed to make decisions autonomously and act, with minimal human supervision, in alignment with goals set by management. The technology has already made significant inroads in such sectors as finance, healthcare, logistics, manufacturing and retail. But the concern for Tom Pepper, partner at cyber-risk agency Avella, is that “these systems are not just a new set of tools, they also effectively bring a new kind of decision-maker into the business.” 


These new decision-makers may well have little understanding of company culture, critical corporate relationships or long-term strategic objectives. “When agentic AI operates within your systems, it has the privileges of a super-user, so it doesn’t follow rules, it interprets goals,” says Pepper, who is also security head at the UK government’s AI Security Institute. That means the choices it makes may well prove to be unpredictable, unsafe or illegal.


Earlier this year, US software engineer Prakash Thakur used agentic technology to simulate a virtual restaurant and found that it was 90% accurate, but often misinterpreted customer requests for onion rings as extra onions and was particularly error-prone when he tried to enter orders for five or more items. As attorney Dazza Greenwood observed: “If you have a 10% error rate with ‘add onions,’ that to me is nowhere near release. Work your systems out so that you're not inflicting harm on people to start with.”


That sort of lesson could have saved Air Canada much embarrassment when its chatbot wrongly told a passenger he could get a discount to fly to his grandmother’s funeral. The airline’s insistence that it was not responsible for the bot’s advice was thrown out in court in February 2024.


Unintended consequences

One way to understand the risks posed by agentic AI, Pepper suggests, is to explore the likely unintended consequences from one relatively common corporate goal. “Let’s say that you ask the system to reduce supplier costs by 10-15%. Because AI does not understand nuance, there are [a number of] key risks here. 


“Let’s start with over-optimisation at any cost. Agentic AI might conclude the quickest way to achieve its goal is to automatically terminate smaller suppliers without due diligence, regardless of how critical they are, which could cut off a supplier that provides essential security patches or maintenance services. The systems might also take unethical shortcuts to speed up negotiations, scraping competitor pricing data from the internet, inadvertently breaching copyright, confidentiality agreements and regulatory law.”


Pepper adds: “Given the authority to auto-renew or terminate contracts, agentic AI might exploit loopholes by manipulating invoicing or enforcing penalties unfairly, triggering disputes and reputational damage. And there could well be cascading effects. Because these systems often work autonomously across multiple platforms (ERP, email, procurement portals), agentic AI’s single-minded optimisation could impact HR, finance, and compliance systems before anyone notices.” 


The risks loom even larger, Pepper says, when companies apply agentic AI to their supply chains: “These agents usually rely on third-party tools, APIs, plugins and open-source packages which can be stacked in layers that can be hard to monitor, audit or even understand.” 


As the AI industry’s current business model seems to be based on teams of agents collaborating within the organisation – and across external networks – the issues of responsibility, accountability and liability could become exponentially more complicated. Lawyers are talking vaguely about a ‘judge’ agent that resolves such disputes, but there is absolutely no consensus as to how that might be created, who would develop it and why any stakeholder would trust it.


And the clock is ticking. By 2029, according to research firm Gartner, agentic AI will be handling eight out of 10 common customer enquiries, which doesn’t give companies much time to get a grip on this rapidly developing technology.


Devastating attacks

The tech industry's preferred narrative around AI in general – and agentic AI in particular – is that it will manage the business in such a way as to free executives up to think about the big picture. There is certainly some truth in that – especially when it comes to more cumbersome tasks such as recruitment and supplier audits – but, to Pepper’s point, senior managers won’t thank agentic AI if, in pursuit of the goals it has been set, it changes the payment terms for a valued customer. You could argue that the best way to avoid such catastrophes is to imagine what havoc your greenest intern could wreak on your most important suppliers and clients and incorporate those constraints into the systems you build.
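
To make that ‘greenest intern’ test concrete, here is a minimal sketch in Python of the kind of guardrail it implies: a hypothetical approval check that stops an agent acting autonomously on critical suppliers or high-impact actions. The supplier identifiers, action types and threshold below are illustrative assumptions, not a prescription from Pepper or Avella.

    from dataclasses import dataclass

    # Hypothetical guardrail: treat the agent like the greenest intern and
    # block anything it should not be allowed to do unsupervised.
    # All identifiers and thresholds below are illustrative assumptions.

    CRITICAL_SUPPLIERS = {"security-patch-vendor", "maintenance-partner"}
    MAX_AUTONOMOUS_VALUE_GBP = 5_000

    @dataclass
    class ProposedAction:
        action_type: str       # e.g. "terminate_contract", "change_payment_terms"
        counterparty_id: str   # supplier or customer identifier
        value_gbp: float       # estimated financial impact

    def needs_human_approval(action: ProposedAction) -> bool:
        """Return True if the agent must stop and escalate to a named person."""
        if action.counterparty_id in CRITICAL_SUPPLIERS:
            return True
        if action.action_type in {"terminate_contract", "change_payment_terms"}:
            return True
        return action.value_gbp > MAX_AUTONOMOUS_VALUE_GBP

    # Example: the agent proposes terminating a small but critical supplier.
    proposal = ProposedAction("terminate_contract", "security-patch-vendor", 1_200)
    if needs_human_approval(proposal):
        print(f"Escalate {proposal.action_type} on {proposal.counterparty_id} for human review")
    else:
        print("Action is within autonomous limits")

The point of the sketch is simply that the constraint lives outside the agent: the system proposes, and a named person signs off on anything that touches a critical relationship.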


Agencies like Avella certainly have a vested interest in highlighting this threat, but that doesn’t necessarily mean they are overstating it. Only a few months ago, a hacker inserted destructive commands into Amazon’s AI coding agent with a simple prompt instructing it to restore the system to a near-factory state and delete files and cloud resources. The hacker insisted that the stunt was to expose Amazon’s “AI cybersecurity theatre” and was astonished when their account was granted system administrator access. As Pepper points out, the attack wasn’t as devastating as it could have been because there was an error in the rogue code.


Disney wasn’t as lucky in 2024, losing 1.1 terabytes of company data after an employee downloaded an AI art generation program onto their computer without realising that it included malicious code that gave its creator, Ryan Mitchell Kramer, unauthorised access to the media giant’s systems, computers and messaging platform. Kramer, who posed as a Russian hacktivist to threaten the employee, pleaded guilty in court and is awaiting sentence.


Agentic AI may have created such threats, Pepper argues, but that doesn’t necessarily mean that it can resolve them by itself. The suppliers’ pitch about these systems requiring hardly any human oversight has been somewhat oversold: “Companies need to create a ‘red team’ which monitors intentions, not just outcomes, and can scrutinise decisions and decide whether a system’s privileged-user status should be suspended or revoked. They also need to build a clear AI asset inventory that contains every model, every data source, and every plugin or tool in use, providing a full map of what the system depends on and who built it”.
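
As a rough illustration of what such an inventory might look like in practice, the Python sketch below records each model, data source and plugin with an accountable owner and its dependencies, so unmapped components stand out in an audit. The asset names, fields and owners are invented for illustration; they are not a standard schema or Avella’s tooling.

    from dataclasses import dataclass, field
    from typing import List

    # Hypothetical AI asset inventory: every model, data source and plugin
    # in use, with an accountable owner and a dependency list.
    # All names and fields are illustrative assumptions.

    @dataclass
    class AIAsset:
        name: str              # e.g. "procurement-agent"
        asset_type: str        # "model", "data_source", "plugin" or "tool"
        provider: str          # who built or supplies it
        owner: str             # accountable person or team in the business
        depends_on: List[str] = field(default_factory=list)

    inventory = [
        AIAsset("procurement-agent", "model", "ExampleVendor", "Head of Procurement",
                depends_on=["erp-connector", "supplier-price-feed"]),
        AIAsset("erp-connector", "plugin", "ExampleVendor", "IT Operations"),
        AIAsset("supplier-price-feed", "data_source", "Third-party API", "Data Governance"),
    ]

    def audit(assets: List[AIAsset]) -> None:
        """List each asset and flag dependencies missing from the inventory."""
        known = {a.name for a in assets}
        for asset in assets:
            missing = [d for d in asset.depends_on if d not in known]
            status = f"MISSING dependencies: {missing}" if missing else "all dependencies mapped"
            print(f"{asset.name} ({asset.asset_type}, owner: {asset.owner}) -> {status}")

    audit(inventory)

However the inventory is actually stored, the value lies in having one place where an unmapped dependency or an asset with no named owner becomes immediately visible.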


As Arthur C. Clarke, author of the novel 2001: A Space Odyssey, memorably observed: “Any sufficiently advanced technology is indistinguishable from magic.” That will certainly apply to agentic AI, which, even at this relatively new stage, encompasses both OpenAI’s latest ChatGPT release and IBM’s Watson solution for business integration. The relentless improvement in agentic AI encourages executives to dwell on the technology itself, but the best CEOs will, Pepper argues, consider the human factor, resist mission creep and keep asking: “Is it doing what we wanted it to do?”
