20 February 2023

The ChatGPT AI chatbot could be a gamechanger in cybersecurity, experts say

From surgically debugging programming code to instantly writing entire blocks of functional code and helping stop cybercriminals, OpenAI’s newly launched and hugely popular ChatGPT AI chatbot is changing the game, and its capabilities seem virtually limitless. And not just in IT.

It has only been around since 30 November last year, but in just a few months it has been discovered by millions of people around the world. We are talking about an artificial intelligence platform that can answer general questions and help with all kinds of problems: it can write letters, poems, and articles, and even write and debug programming code.

How the ChatGPT AI robot works

This conversational chatbot was developed by OpenAI, an organisation backed also by the well-known visionary Elon Musk, who has been involved in AI for years. ChatGPT is designed to interact with people in an engaging way and answer their questions in natural language, which has made it an instant hit among professionals and the general public alike. It was trained on huge amounts of text, most of it sourced from the internet, but the chatbot is not connected to the internet in real time, which means it won’t tell you the result of yesterday’s Sparta vs Pardubice game. It keeps the conversation with the user in context, so it can tailor its responses to the situation at hand. In this way, everyone can learn something from it.

Experts even suggest that the AI chatbot has the potential to replace Google search in the future. “Another very promising feature is its ability to write programming code in any programming language the user selects. This helps developers work on and debug their code, and it helps experts secure their systems,” points out Petr Kocmich, Global Cyber Security Delivery Manager at Soitron.

How ChatGPT can be used by developers

Today, writing code is not a problem for ChatGPT. What is more, it is completely free. On the other hand – at least for now – it is advisable to avoid having the chatbot generate complete programs, especially code that has to integrate with other code. The platform is still in the early stages of development, so it would be naive for programmers to expect it to do all the work for them. Having said that, coders and developers can still find the tool useful.

They can use it to find bugs in the code they have written, and to fine-tune problematic code they have spent long hours on. ChatGPT can help spot a bug or a potential problem and offer a possible solution to end those sleepless nights. That saves hours of debugging work, and the chatbot can even help write source code for testing the entire IT infrastructure.
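For illustration only, the sketch below shows how a developer might ask for this kind of review programmatically rather than through the chat window. It assumes the openai Python package (pre-1.0 interface) and a valid API key; the model name, the prompt wording, and the buggy average function are invented for this example and are not part of anything described in this article.

```python
# Minimal, hypothetical sketch: asking an OpenAI model to review a buggy function.
# Assumes the `openai` Python package (pre-1.0 interface) and a valid API key.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

buggy_snippet = """
def average(values):
    # Bug: raises ZeroDivisionError when `values` is empty
    return sum(values) / len(values)
"""

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model name; availability may differ
    messages=[
        {"role": "system", "content": "You are a careful code reviewer."},
        {
            "role": "user",
            "content": f"Find the bug in this function and suggest a fix:\n{buggy_snippet}",
        },
    ],
)

print(response.choices[0].message.content)
```

The same review could, of course, be done simply by pasting the snippet into the ChatGPT web interface and asking the question in plain language.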

There are some risks

Without much exaggeration, it could be said that ChatGPT can turn anyone into a cybercriminal, making it easier to carry out a ransomware, phishing, or malware attack. It may seem that you only need to ask the AI robot to “generate the code for a ransomware attack” and then wait for the result. But, as Kocmich points out, it’s fortunately not that easy: “Conversations are regularly checked by AI trainers, and responses to this type of query, as well as other potentially harmful queries, are restricted by ChatGPT. Actually, the chatbot responds by saying that it does not support any illegal activities.”

On the other hand, even if it evaluates a question as potentially harmful and therefore refuses to give an answer, this does not necessarily mean that people can’t get to the answer some other way. “The problem with these safeguards is that they rely on the AI recognizing that the user is trying to generate malicious code; however, the true intent may be hidden by rephrasing the question or breaking it into multiple steps,” says Kocmich. Moreover, nobody can guarantee that some other, similar AI chatbot would refuse to answer such a question.

What to think about ChatGPT

As is often the case, there are two sides to every coin. While AI bots can be exploited by cybercriminals, they can also be used to defend against them. In the meantime, coders could gradually turn into “poets”: they would simply tell the AI chatbot that they need a piece of code that does this and doesn’t do that, or describe the requirement as a use case, and then wait for the bot to generate the code.

“Already, ChatGPT is being used by security teams around the world for defensive purposes such as code testing, reducing the potential for cyber-attacks by increasing the current level of security in organizations, and training – such as for increasing security awareness,” says Kocmich, adding in the same breath that we should always bear in mind that no tool or software is inherently bad until it is misused.
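As a hypothetical illustration of that defensive use, the sketch below shows the kind of issue such a code review commonly flags – a SQL query assembled directly from user input – next to the parameterised version a reviewer would typically recommend. The snippet uses only Python’s standard sqlite3 module and is invented for this article, not taken from any real audit.

```python
# Hypothetical before/after from a security-focused code review.
# Uses only the standard library (sqlite3) so the example is self-contained.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice"

# Vulnerable: user input is concatenated into the SQL statement,
# so crafted input could change the meaning of the query (SQL injection).
rows = conn.execute(
    "SELECT role FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safer: a parameterised query keeps the input as data, not as SQL.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(rows)  # [('admin',)]
```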
