
Most organisations today are no longer debating whether AI belongs in cybersecurity. It has already become a natural part of modern security tooling – from threat detection to endpoint protection to security operations centres. The real question is how to work with this embedded intelligence in day-to-day operations, and whether security teams can actually realise its benefits.
Security departments have long been understaffed, the volume of security incidents continues to grow, and the pressure to respond quickly and accurately is constantly increasing. AI therefore enters the scene naturally as a tool intended to help manage the workload and operational pace. In practice, however, the mere presence of AI in security tooling does not solve the shortage of skilled people. Without clearly defined roles for AI and humans, the result is more often a redistribution of workload rather than a reduction.
Today, AI in cybersecurity operates simultaneously on two levels. On the defensive side, it accelerates data analysis, supports event correlation, and provides experts with better visibility into what is happening within the infrastructure. At the same time, the very same technology lowers the barrier to entry for attackers – enabling more convincing phishing campaigns, more effective social engineering, and faster target profiling. This duality increases pressure on security teams and places even greater emphasis on human judgement.
AI delivers the highest return where the challenge lies in volume, repetition, and the need to navigate data quickly. Typical use cases include triage and prioritisation of security events in Security Operations Centres (SOCs), log correlation across tools, anomaly detection in network traffic and identities, and accelerating the initial analysis of malware or suspicious attachments. In practice, this means AI can quickly suggest whether an incident is worth escalating, provide context, and propose an initial hypothesis. This is precisely the kind of work that drains analysts of time and energy while being largely mechanical in nature.
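To make the anomaly-detection use case concrete, here is a minimal sketch that ranks events for triage using scikit-learn's IsolationForest. The feature set and sample values are invented for illustration – a real SOC pipeline operates on far richer telemetry and tuned models.

```python
# Minimal sketch: anomaly scoring of events for analyst triage.
# Features and values are illustrative assumptions, not a real pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-event features: hour of day, bytes transferred,
# distinct hosts contacted, failed logins in the last hour.
events = np.array([
    [9, 1_200, 3, 0],
    [10, 900, 2, 1],
    [11, 1_500, 4, 0],
    [3, 250_000, 40, 12],  # the outlier an analyst should see first
])

model = IsolationForest(contamination="auto", random_state=42)
model.fit(events)

# decision_function: lower score = more anomalous, so sorting ascending
# puts the most suspicious events at the top of the analyst queue.
scores = model.decision_function(events)
for rank, idx in enumerate(np.argsort(scores), start=1):
    print(f"#{rank}: event {idx} score={scores[idx]:.3f}")
```

The point of the sketch is the output shape: AI produces a ranked queue with context, and the analyst decides what in that queue is worth escalating.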
A second area where AI makes sense is as a security copilot for professionals – assisting with the configuration of security controls, generating queries for SIEM (Security Information and Event Management) platforms, summarising incidents into readable reports for management, or drafting initial versions of playbooks and documentation. AI boosts productivity and shortens the time from detection to decision.
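As a sketch of this copilot pattern, the following keeps the model strictly in a proposing role and a human in the approving role. The `ask_model` function and the canned SPL-style query are placeholders standing in for a real LLM integration, not actual detection content.

```python
# Copilot pattern for SIEM query drafting: the model only proposes;
# an analyst approves before anything touches production systems.

def ask_model(prompt: str) -> str:
    # Placeholder: swap in a real LLM call behind your API gateway.
    # The returned query is a canned SPL-style example.
    return "index=auth action=failure | stats count by src_ip | where count > 20"

def draft_siem_query(need: str) -> str:
    prompt = ("Draft a SIEM query for this investigation need. "
              f"Return only the query.\n\nNeed: {need}")
    return ask_model(prompt)

def review_and_run(query: str) -> None:
    print("Proposed query:\n ", query)
    if input("Execute against the SIEM? [y/N] ").strip().lower() != "y":
        print("Discarded - the analyst keeps the final say.")
        return
    # siem_client.execute(query)  # only a human-approved query runs

review_and_run(draft_siem_query("brute-force attempts against SSH"))
```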
A third, highly practical use case is protecting end users from phishing and fraud. By correlating multiple data points – content, sender identity, reputation, communication context, sudden behavioural changes, message timing, and similar factors – AI can reduce false positives and highlight genuinely suspicious messages.
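A toy illustration of this signal-correlation idea is a weighted score over several independent phishing indicators. The signals, weights, and threshold below are invented for the example – production filters learn them from labelled mail data – but the structure shows why correlating multiple weak signals beats any single rule.

```python
# Illustrative only: combining independent phishing signals into one
# score. Signals, weights and threshold are assumptions for the sketch.

SIGNAL_WEIGHTS = {
    "sender_domain_lookalike": 0.35,  # e.g. paypa1.com vs paypal.com
    "first_contact": 0.15,            # no prior thread with this sender
    "urgent_language": 0.20,          # "act now", "account suspended"
    "link_reputation_bad": 0.30,
}

def phishing_score(signals: dict[str, bool]) -> float:
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

msg = {"sender_domain_lookalike": True, "urgent_language": True,
       "first_contact": True, "link_reputation_bad": False}

score = phishing_score(msg)
# Quarantine only above a tuned threshold; below it, warn rather than
# block - this is how multi-signal scoring keeps false positives down.
print(f"score={score:.2f} ->", "quarantine" if score >= 0.6 else "warn-only")
```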

The limitations of AI in cybersecurity are largely the same as the limitations of AI in general – it can hallucinate, appear confident even when reaching incorrect conclusions, and it is sensitive to the quality of input data. However, a security decision is not simply a matter of correlating data. It is a qualified judgement call – what to take offline, what to isolate, how to minimise the business impact, how to engage with a supplier, when and how to escalate. These decisions require deeper understanding, are often legally and reputationally sensitive, and cannot be safely delegated to an AI model.
In practice, the most effective approach is to design the system so that AI handles fast, high-volume, and supportive tasks, while decision-making and accountability remain with humans. AI works best as an investigator’s assistant, not as the investigator themselves.
A well-functioning SOC model typically looks like this: AI assists with data normalisation and correlation, initial alert prioritisation, incident summarisation, and suggesting possible courses of action. The analyst decides what constitutes an actual incident, chooses the response, and bears responsibility for the outcome. Senior roles – the IR lead, threat hunter, and security architect – define detection logic, set policies, maintain playbooks, and validate that automation is not causing harm. Where automated blocking is used, it is advisable to restrict it to narrowly defined, thoroughly tested scenarios with clear safeguards and the ability to roll back quickly.
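In code, the "narrow, tested, reversible" principle for automated blocking might look roughly like this. The scenario names, expiry window, and rollback mechanism are assumptions made for the sketch, not a recommended policy – the approved list belongs to the senior roles described above.

```python
# Sketch of narrowly scoped, reversible automated blocking.
# Scenario names and expiry are illustrative assumptions.
from datetime import datetime, timedelta, timezone

# Only scenarios vetted and tested by the IR lead are eligible.
APPROVED_SCENARIOS = {"known_c2_ip", "confirmed_malware_hash"}
AUTO_EXPIRY = timedelta(hours=4)  # every automated block self-expires

def auto_block(scenario: str, indicator: str) -> dict | None:
    if scenario not in APPROVED_SCENARIOS:
        # Anything outside the tested scenarios goes to a human queue.
        print(f"{scenario}: escalated to analyst, no automated action")
        return None
    block = {
        "indicator": indicator,
        "scenario": scenario,
        "expires": datetime.now(timezone.utc) + AUTO_EXPIRY,
    }
    print(f"blocked {indicator} until {block['expires']:%H:%M} UTC")
    return block  # retained so an analyst can roll it back immediately

auto_block("known_c2_ip", "203.0.113.7")
auto_block("unusual_login_pattern", "user:jdoe")  # escalates instead
```

Self-expiring blocks and an explicit escalation path are what keep automation from quietly causing the harm the senior roles are there to prevent.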
For generative AI applications, chatbots, or internal assistants, the division of responsibilities is similar – the model can suggest responses and support the operator, but a human oversight layer and continuous monitoring of outputs are essential.
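A minimal sketch of such an oversight layer, assuming a simple phrase blocklist and standard-library logging; a real deployment would use proper policy engines and audit pipelines, but the shape is the same – every output is logged, and anything suspect is held for a human.

```python
# Minimal oversight gate for a generative assistant: every output is
# logged for continuous monitoring, and flagged outputs wait for a
# human operator. The blocklist is a placeholder assumption.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-oversight")

BLOCKED_PHRASES = ("password", "disable monitoring")  # illustrative

def gate(model_output: str) -> str | None:
    log.info("model output: %r", model_output)  # audit trail
    if any(p in model_output.lower() for p in BLOCKED_PHRASES):
        log.warning("output held for human review")
        return None  # the operator decides whether to release it
    return model_output
```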
While in other IT domains generative AI is genuinely taking over parts of the workload, in cybersecurity the role of the expert is not diminishing – it is just becoming more refined. Security professionals remain the “brain” of the defence: they set the rules, define detection logic, make response decisions, and are accountable for the outcome.
AI can help relieve them of routine work, but it cannot assume accountability. This is precisely why organisations must recognise that the value of experienced professionals increases with AI adoption – not the other way around. Ignoring this reality leads to a paradoxical situation: technology is meant to increase efficiency, yet in practice it accelerates the departure of key experts from the industry.
From a business and HR perspective, a misleading assumption persists in many organisations – that advances in AI should deliver significant headcount savings. A common plan is to reduce headcount and replace junior roles entirely with AI tooling. Yet junior positions are where future senior experts are grown, and automating routine tasks is better treated as an opportunity to develop team members’ competencies and to reshape and broaden their skill sets.
AI is a powerful tool in cybersecurity when used realistically. It helps where it removes routine work from people, and it fails where it is expected to replace judgement. Organisations that want to benefit from AI in the long term should not only ask what to automate, but also how to protect their experts from overload and burnout. The near future of cybersecurity will be neither purely human nor fully AI-driven. It will be hybrid. And the ability to correctly divide responsibilities between AI and human experts will determine whether technology delivers greater efficiency and responsiveness – or introduces unwanted complexity.