4 September 2023

Do you use ChatGPT? Then be careful what you discuss with bots.

While cybersecurity experts were warning of looming security issues with chatbots such as ChatGPT, malicious actors struck from an unexpected direction. Instead of hacking the platforms directly, they have begun stealing users’ credentials on a massive scale, right from under their noses, and selling them on the darknet. This puts users at considerable risk, because the chatbot’s default setting stores the entire communication history.

Group-IB, a global cybersecurity company based in Singapore, has identified 101,134 compromised ChatGPT credentials in the logs of info-stealing malware popular with malicious actors over the past year. “This major leak raises serious concerns about the security of information stored within generative artificial intelligence tools, such as ChatGPT, and highlights the potential risks associated with their use,” said Petr Kocmich, Global Cyber Security Delivery Manager at Soitron.

Weaponized info-stealing malware

Most of the attacks were attributed to the Raccoon, Vidar, and RedLine info-stealers. These work like any other common malware: they steal information from the target computer after the user, often unknowingly, downloads and installs the malware disguised as a desired application or file. This type of malware is easy to use and is sold as a subscription-based service, which makes it a popular choice among attackers, even amateur ones.

Once activated, the malware collects credentials stored in browsers, bank card details, cryptocurrency wallets, cookies, and browsing history, and sends them to the attacker. Because ChatGPT stores all of a user’s conversations by default, stolen login credentials allow anyone with access to the account to read those conversations.

Users should therefore not enter any personal information that could identify them, such as their full name, address, date of birth, or national identification (birth) number. Under no circumstances should they enter credentials (usernames and passwords), financial information (account or card numbers), or health information. They should also keep work matters confidential, i.e. not discuss any internal company information. “Users are often unaware that their ChatGPT accounts actually contain a large amount of sensitive information that is desirable to cybercriminals,” warns Kocmich. He therefore suggests disabling the chat history-storing feature unless it is absolutely necessary: the more data an account contains, the more attractive it is to cybercriminals. Kocmich also recommends carefully considering what information you discuss with cloud chatbots and other services.

The solution is to take precautions

The risk is comparable to attackers breaching the protection of the ChatGPT system itself. If they gain access to the chat history, they can find sensitive information such as corporate network credentials or personal data. In short, everything the victim has ever entered into ChatGPT.

“In response to this security threat, it is recommended that, in addition to considering disabling chat history storage, users change their passwords regularly and use security measures such as two-factor authentication,” says Kocmich. These recommendations should be applied to all Internet services, especially those where unauthorised access can cause damage.
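
To illustrate the second factor Kocmich mentions, here is a minimal sketch of how the time-based one-time passwords (TOTP) behind most authenticator apps are generated, following RFC 6238. The Base32 secret shown is a hypothetical placeholder; a real service issues its own secret during 2FA enrolment.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period            # current 30-second time window
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

if __name__ == "__main__":
    # "JBSWY3DPEHPK3PXP" is a placeholder demo secret, not a real credential.
    print(totp("JBSWY3DPEHPK3PXP"))
```

A login server verifying such a code recomputes it for the current time window (and usually the adjacent ones, to tolerate clock drift) and compares. Because the code changes every 30 seconds, a stolen password alone is no longer enough to take over the account.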

Vigilance must come first

This incident underscores the urgent need to improve security practices in a world increasingly reliant on artificial intelligence and digital interactions. As cybercriminals develop ever more novel tactics, public awareness of cyber risks and how to mitigate them becomes increasingly important. “Regardless of the tools and techniques you use, be constantly vigilant and apply known security principles and best practices to avoid becoming an easy target,” Kocmich concludes.
