From securing code – to securing behaviour

With AI agents, we’re now moving quickly from pilots to production across businesses and sectors of society. And unlike task-specific automations, AI agents plan, decide, and act across multiple steps and systems on behalf of users and teams. This is not just a technical upgrade. It’s a new security paradigm: We are no longer securing code - we need to secure behaviour.

March 19, 2026

With more organisations adopting AI-first strategies without thinking through how to reinforce their security, businesses are creating cybersecurity risks in two dimensions at once: increased vulnerability and slower recovery.

In fact, this is quite the opposite of the intentions behind the NIS2 Directive that many organisations across Europe are now struggling to implement…

A recent study conducted by Sapio Research on behalf of the tech company Fastly showed that AI-first organisations (naturally) have a much higher vulnerability rate, but also take an average of 80 days longer to recover after data breaches. This recovery gap translates directly into lost revenue, extended downtime, and prolonged trust damage, with an estimated 135% higher incident costs for AI-first businesses compared to more traditional organisations.

The main reason for these increased costs is that nobody knows who’s responsible when things go wrong! Traditional accountability is rapidly deteriorating because development teams now include humans working alongside self-thinking, self-learning, and self-acting AI agents.

There are discussions around the security implications of so-called shadow AI, where employees start using AI tools without policies or guidelines in place. But even approved AI tools can become a security risk when they are granted extensive automated permissions. When AI tools become privileged parts of organisational infrastructure, they create new and increased risks - some of which may have been lurking beneath the surface for a while, as employees have already taken it upon themselves to use AI tools for work-related tasks.
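One way to limit the blast radius of extensive automated permissions is to grant each agent only an explicit allowlist of tools. The sketch below is a minimal, hypothetical illustration of that least-privilege idea - the role names, tool names, and `invoke_tool` function are made up for this example, not taken from any specific agent framework:

```python
# Minimal least-privilege sketch: each agent role gets an explicit
# allowlist of tools, and anything not granted is refused outright.
# All role and tool names here are illustrative assumptions.

ALLOWED_TOOLS = {
    "support-agent": {"search_kb", "draft_reply"},   # low-risk, read-mostly tasks
    "ops-agent": {"search_kb", "restart_service"},   # narrowly scoped actions
}

def invoke_tool(agent_role: str, tool_name: str, payload: dict) -> str:
    """Refuse any tool call that is not explicitly granted to this role."""
    granted = ALLOWED_TOOLS.get(agent_role, set())
    if tool_name not in granted:
        raise PermissionError(
            f"{agent_role!r} is not allowed to call {tool_name!r}"
        )
    # In a real system, dispatch to the actual tool implementation here.
    return f"{tool_name} executed for {agent_role}"
```

A deny-by-default check like this means a compromised or confused agent cannot quietly reach tools it was never meant to touch.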

The way forward: Security by design

AI is not just here to stay. It’s here to make things move fast. And looking at AI’s potential for innovation and increased productivity, there is no going back. Both organisations and, above all, employees have already grown accustomed to the efficiency gains brought about by AI. So we need to focus on a more secure way forward.

As we enter the agentic AI phase, security by design has moved from an aspirational goal to a survival requirement. The new OWASP Top 10 framework for Agentic Applications - which, given the fast pace of AI development, is more of a compass than a static framework - offers two pieces of good advice as a starting point:

- Avoid unnecessary autonomy - deploying agentic behaviour where it is not needed expands the attack surface without adding value.  

- Strong observability becomes non-negotiable - without clear visibility into what agents are doing, why they are doing it, and which tools they are invoking, minor issues can quietly grow into system-wide failures.
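The observability advice above can be sketched in a few lines: record what an agent did, why it did it, and which tool it invoked, as structured events that can be shipped to a log pipeline or SIEM. This is a minimal, hypothetical illustration - the event fields and `log_agent_event` function are assumptions for this example, not part of the OWASP framework or any specific library:

```python
# Minimal observability sketch: emit one structured JSON event per
# agent action, capturing the who / what / which tool / why.
import json
import time

def log_agent_event(agent_id: str, action: str, tool: str, reason: str) -> str:
    event = {
        "ts": time.time(),   # when the action happened
        "agent": agent_id,   # which agent acted
        "action": action,    # what kind of action it took
        "tool": tool,        # which tool it invoked
        "reason": reason,    # the agent's stated rationale
    }
    line = json.dumps(event)
    print(line)  # in practice: forward to your log pipeline / SIEM
    return line
```

Structured events like these make it possible to reconstruct an agent’s decision path after the fact, which is exactly the visibility the framework calls non-negotiable.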

From minor issues to system-wide failures. Let that sink in for a while…

Leadership in every organisation needs to understand that agentic AI amplifies existing vulnerabilities and introduces entirely new ones, because in the agentic systems now being implemented, decisions are dynamic and execution paths unpredictable.

A more AI-secure way forward depends on all organisations applying security-by-design thinking in their implementation of AI. And not just by securing code. We need to secure behaviour throughout our organisations.

Does your organisation need to improve AI security? Fill out the form and we’ll contact you about our Security by Design course.