Companies need to have ChatGPT policies: what should they include?

Recently, applications built on large language models (LLMs), such as OpenAI’s ChatGPT, Microsoft’s Bing AI, and Google’s Bard, have been growing in popularity. These tools are fast, easy to use, and available to anyone, so it is not surprising that employees of companies large and small have started to use them. There is nothing wrong with that, but one important thing should not be overlooked.

ChatGPT and similar machine learning-based applications are gaining traction in corporate environments. They are used for a variety of purposes:

  • content creation – they can draft a high-quality presentation and write a surprisingly good speech, blog post, email, or comment.
  • creative idea generation – within seconds, they can generate a list of possible questions or answers on a given topic and help you come up with a title for an article, presentation, or business plan.
  • checking and revising texts – they correct grammatical errors and can shorten or expand information according to the user’s wishes, change the style to a more formal or colloquial one, and generally improve the quality of a text.
  • information search – like Google or Wikipedia, they are used to search for information.
  • programming – they are becoming a common tool for writing and reviewing code.

“LLM-based systems are already contributing to improving corporate content, helping employees with various tasks, and even participating in decision-making processes,” says Martin Lohnert, a cybersecurity specialist at Soitron. However, the adoption of these disruptive technologies brings risks that users and organizations, caught up in the initial excitement, are often unaware of.

The risks of using ChatGPT in a corporate environment

The immediate benefits of LLM-based tools are so great that curiosity and excitement often outweigh caution. Nonetheless, there are several risks associated with the use of ChatGPT in companies:

Protection of personal and sensitive data

When using ChatGPT in a corporate environment, personal or confidential data may be inadvertently shared. Users often enter this data into the tool without knowing that it is shared with a third party. A case has already been reported where a bug in ChatGPT allowed users to see other users’ data, such as chat history.

Intellectual property

LLM training is based on the processing of large amounts of diverse data of unknown origin, which may include copyrighted and proprietary material. Using any outputs based on this data may lead to ownership and licensing disputes between the company and the owners of the content that was used to train ChatGPT.

Malicious or vulnerable code

Computer code generated by artificial intelligence (AI) may contain vulnerabilities or malicious components, and reusing such code can propagate those vulnerabilities through corporate systems.

Incorrect and inaccurate outputs

AI tools of the current generation sometimes provide inaccurate or completely incorrect information. There have been cases where the outputs contained distorted, discriminatory, or even illegal content.

Ethical and reputational risks

Using and sharing incorrect ChatGPT outputs in corporate communication can lead to ethical and reputational risks for the company.

The need for a ChatGPT policy

Given these risks, it is essential to define the rules on how employees can (and should) use ChatGPT when doing their job. “A corporate policy should serve as a compass to guide the company and its employees through the maze of AI systems ethically, responsibly, and in compliance with laws and regulations,” says Lohnert.

When defining a corporate policy, it is first necessary to determine what technologies it should cover. Should the policy apply specifically to ChatGPT or to generative AI tools in general? Does it also cover third-party tools that may incorporate AI elements or even the development of similar solutions?

What a ChatGPT policy should include

A ChatGPT policy should begin with a commitment to privacy and security when working with similar tools, and it should set boundaries by clearly defining acceptable and unacceptable uses of the technology.

It should define uses that are permitted in the organization without restriction. “This can include various types of marketing activities, such as reviewing materials for public use and generating ideas or initial material for further development,” says Lohnert. In doing so, it is important to carefully consider the legal aspects of possible intellectual property infringement and be cautious about the known pitfalls of inaccuracy and misinformation.

The second group of rules the policy should include covers scenarios where use is allowed only with additional authorization. Typically, these are cases where the output from ChatGPT needs to be assessed by an expert before it can be used (e.g. computer code).

The third category covers scenarios where use is forbidden. This should include all other uses, especially any case where users would enter sensitive data (e.g. trade secrets, personal data, technical information, or proprietary code) into ChatGPT.

A good servant but a bad master

An LLM use policy should be “tailored” to each company after a thorough assessment of the associated risks, threats, and impacts. “This will allow your company to quickly harness the potential of the new AI-based tools, while formulating a strategy to integrate them into the existing corporate environment,” concludes Lohnert.