Managing the Risk of Large Language Models Like ChatGPT

Author: Aaron Pinnick
Type: Article
Topics: Cybersecurity, Compliance, Artificial Intelligence (AI)

An Overview of Large Language Models 

Large language models (LLMs) like OpenAI’s ChatGPT and Google’s Bard are becoming increasingly popular tools among individuals and firms looking to take advantage of the power and efficiency they offer for language-based tasks. These tools are trained on extremely large amounts of text from a variety of sources, including information entered by users into the tool [1], and then respond to user prompts with human-like output. Because these models learn from an enormous volume of data and contain billions of parameters, they can be used for a wide range of tasks, from drafting a simple email to writing complex programming code.

However, as the March 20, 2023, leak of ChatGPT chat logs demonstrates, the retention of user inputs by these tools creates potential privacy and security risks for firms. And given the novelty of and excitement around these tools, employees are less likely to be aware of and think through the potential risks before using them for business purposes.
 
To mitigate these risks for firms, cybersecurity leaders should take several steps, including:

  1. Assess the Risk LLMs Create 
  2. Update the Firm’s Acceptable Use Policy 
  3. Provide Employees with Training and Communications on LLMs 

Assess the Risk LLMs Create 

Before a firm takes a stance on whether or when LLMs will be permitted for business purposes, it should understand the risk these tools pose to the organization. While this assessment will vary from firm to firm, the risk centers primarily on how employees choose to use LLMs and the information they enter into them.

Firms should consider the following potential risks when deciding on their stance on the use of LLMs:

  • Privacy Risk – As the recent leak of ChatGPT logs demonstrates, the most common risk LLMs present to firms is that employees will enter sensitive information into the tool (e.g., client names and other client information) and that information is then exposed to the public. This risk will be high for most firms, as the novelty of LLMs means employees are likely experimenting with the tools and may not exercise the necessary caution when entering information. If this information is exposed, it may create reputational harm for the firm, as well as regulatory risk for companies in certain industries or jurisdictions. And even absent a leak, uploading certain types of data (e.g., protected health information) into an unapproved third-party tool could be considered a privacy violation in some jurisdictions.
  • Intellectual Property Risk – Since LLMs are designed to learn from the inputs users provide, any proprietary or non-public information included in a prompt may be stored by the tool and integrated into future responses provided to other users. Even if proprietary information isn’t directly leaked, the LLM could be prompted to respond as if it were an employee at a certain company and, based on what it has learned through past interactions with that company’s employees, provide non-public information back to a user. Through this process, individuals could gain insight into the strategic direction of competitors or learn non-public details about the products and services of companies whose employees use an LLM.
  • Third-Party Risk – Since a core feature of LLMs is their ability to generate large amounts of text quickly, individuals may try to use an LLM as a shortcut in creating client deliverables. Firms should confirm with key vendors whether LLMs are used to create any work product or advice the firm receives. If a third party is using LLMs, the firm should understand what company information is entered into the LLM and how deliverables are screened for quality and accuracy before delivery. Likewise, employees using LLMs in their work for clients may be violating the letter or spirit of the firm’s agreements with those clients.
  • Risks Related to the Quality of the Output – LLMs are often used to help generate ideas or first drafts of documents or code. But despite their impressive performance, LLMs make mistakes, and providers such as OpenAI warn that ChatGPT may produce incorrect information. This risk can be mitigated by having a subject-matter expert review materials created by an LLM to confirm their accuracy. But without sufficient oversight or controls around how LLM-generated content is used in a final work product, incorrect information may be shared with internal or external stakeholders, leading to flawed decisions and compliance issues.

It is important to note that LLMs also pose a broader risk that cybersecurity executives should consider: cybercriminals can easily use these tools to create compelling dialogue, phishing email language, and code that improves the effectiveness of cyberattacks. Cybersecurity leaders should be aware of this threat and ensure that the firm’s policies, procedures, and employee training take it into account.

Update the Firm’s Acceptable Use Policy 

Firms should review and update their existing Acceptable Use Policies (AUPs) to be sure they specify when and how employees will be permitted to use LLMs on company devices and for business purposes.  

Firms may take several different approaches to building an AUP for LLMs, based on the company’s tolerance for the risks these tools present and the opportunities they offer. These options include:

  • A Total Ban on the Use of LLMs – The most conservative approach to LLMs is to simply block sites like ChatGPT and Bard on company devices and update the firm’s AUP to make it clear that the use of these tools is not acceptable for employees for any reason.

    While this approach may be appropriate for firms that handle highly confidential client or company data, it may be difficult to maintain as the number of available LLMs grows quickly. It will also become harder as LLMs are incorporated into other products, such as office productivity tools like Microsoft Teams. For most firms, the effort of keeping pace with this growth is likely not worth it, especially since a total ban also means forgoing the benefits LLMs offer.
  • Restricted Use of LLMs – Firms that are willing to accept some risk from LLMs in exchange for the potential efficiency gains can allow the use of tools like ChatGPT under certain conditions. These could include: 
    • LLMs can be used for business purposes only if no sensitive, proprietary, or confidential information is included in LLM prompts.
    • LLMs can be used for business purposes only with approval from specific individuals (e.g., the CISO, business unit head, GC). 
    • LLMs can only be used for certain low-risk business activities (e.g., for help writing marketing copy about features and services that are publicly available on the firm’s website).
    • LLMs cannot be used to create client-facing work products, or to generate guidance for clients.
    • LLM use is allowed for business purposes, but records must be kept of the prompts entered and the outputs received (a minimal logging sketch follows this list).

      For most firms, some combination of the above clauses and restrictions should help mitigate the risks that LLMs pose to the firm, without stifling the innovative potential of the tools. 
  • Reasonableness Standard towards LLMs – Firms that see the greatest potential in LLMs and are willing to accept the highest level of risk may allow their employees to use their best judgement when working with these tools.

    Firms that take this approach can take a page from their existing training and policies on social media usage to help their employees build good judgement around what information is and isn’t appropriate for LLMs. Employees should be reminded that information entered into LLMs should not be assumed to be private or secure, and nothing that would cause reputational harm to the employee, the firm, or the firm’s clients should be entered into an LLM. 
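
To illustrate the record-keeping condition above, the sketch below shows one way a firm could log LLM prompts and outputs when employees use an approved client. It is a minimal Python example, not a prescribed implementation: the audit-log location and the llm_client callable are hypothetical placeholders for whatever tool the firm actually uses.

    import json
    from datetime import datetime, timezone
    from pathlib import Path

    # Hypothetical audit-log location; adjust to the firm's own logging standards.
    AUDIT_LOG = Path("llm_audit_log.jsonl")

    def logged_llm_call(llm_client, prompt: str, user: str) -> str:
        """Send a prompt to an LLM and keep a record of the exchange.

        llm_client is assumed to be any callable that accepts a prompt string
        and returns the model's response as a string; substitute the firm's
        approved client here.
        """
        response = llm_client(prompt)
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "prompt": prompt,
            "response": response,
        }
        # Append one JSON record per call so prompts and outputs can be audited later.
        with AUDIT_LOG.open("a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")
        return response

In practice the records would likely feed the firm’s existing monitoring or e-discovery processes rather than a local file, but the principle is the same: every prompt and response is captured at the point of use.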

For a sample AUP on employee use of LLMs, please click here.


Provide Employees with Training and Communications on LLMs 

Since the core risk LLMs pose to firms is rooted in employee behavior, it is critical that firms provide their employees with clear guidance on when or if these tools are allowed to be used for business purposes. And, since these tools are a hot topic, it is likely that employees have already begun experimenting with them at work or on their personal devices, so cybersecurity leaders shouldn’t wait to begin making these updates. 

Firms should take the following steps to raise employee awareness on the risks of LLMs: 

  • Don’t Wait to Communicate – Even if the firm hasn’t settled on a final AUP for LLMs, it is critical that employees think carefully about what information they enter into an LLM. Senior leaders should immediately begin notifying employees of the risks these tools pose and reminding them of basic standards for using them (e.g., never enter client information into an LLM).
  • Update Training to Reflect the AUP – Once the firm has settled on its standards for acceptable use of LLMs, the firm’s cybersecurity training should be updated to include guidance on how employees can follow the AUP. This will include ensuring employees are aware of the policy, providing them with clear examples of what is appropriate and what isn’t, and ensuring that employees understand the risks associated with violating the AUP. 
  • Reinforce the LLM Policy – As with all behavioral risks, employees will need to be reminded of the company’s AUP on LLMs. Cybersecurity leaders should begin integrating reminders about appropriate and inappropriate uses of LLMs into their employee communications calendar to help keep the risk front of mind. Adding an interstitial page that employees must click through to reach LLM sites on the web creates another opportunity for policy reminders (a minimal sketch of such a page follows this list).
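
As one illustration of the interstitial approach mentioned in the last item above, the sketch below shows a minimal click-through reminder page, assuming the firm’s web proxy or gateway can redirect requests for LLM sites to an internal page. The wording, route, and destination list are hypothetical and would need to fit the firm’s own infrastructure and policy.

    from flask import Flask, redirect, request

    app = Flask(__name__)

    # Hypothetical allow-list of LLM sites the proxy redirects to this page.
    ALLOWED_DESTINATIONS = {
        "https://chat.openai.com",
        "https://bard.google.com",
    }

    REMINDER_PAGE = """
    <h1>Reminder: Acceptable Use of LLMs</h1>
    <p>Do not enter client names or any confidential or proprietary information
    into this tool. See the firm's Acceptable Use Policy for details.</p>
    <form method="post">
      <input type="hidden" name="destination" value="{destination}">
      <button type="submit">I understand, continue</button>
    </form>
    """

    @app.route("/llm-reminder", methods=["GET", "POST"])
    def llm_reminder():
        if request.method == "POST":
            destination = request.form.get("destination", "")
            # Only forward to known LLM sites so the page cannot act as an open redirect.
            if destination in ALLOWED_DESTINATIONS:
                return redirect(destination)
            return redirect("/llm-reminder")
        # The proxy is assumed to pass the originally requested site as a query parameter.
        destination = request.args.get("destination", "https://chat.openai.com")
        if destination not in ALLOWED_DESTINATIONS:
            destination = "https://chat.openai.com"
        return REMINDER_PAGE.format(destination=destination)

    if __name__ == "__main__":
        app.run()

The page itself does not enforce the AUP; it simply ensures that every visit to an LLM site starts with a policy reminder and an explicit acknowledgement from the employee.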

Conclusion 

Tools like ChatGPT and Bard present firms with a unique opportunity to create efficiencies in their workforce. When used properly, they can automate a wide range of time-consuming and labor-intensive writing tasks, freeing up employees to focus on higher-value work. But as with any new technology, misuse by employees creates additional risks for firms.

The good news for cybersecurity leaders is that they likely have experience managing this type of employee-centered risk, and there likely isn’t a need to develop any radical new approach to dealing with it. As with other behavioral risks, cybersecurity leaders should promptly assess the risk that LLMs pose to their firm and develop policies, training, and communications to guide employee behavior. The rapidly evolving nature of LLMs may mean that guidance needs to be reviewed and updated more frequently, but the program’s approach to this risk should be straightforward.

Tune in to our Webcast

Join us Tuesday, May 23, at 12 p.m. ET for a lively and interactive webcast all about the cyber and compliance risks of ChatGPT and other large language models.

Register here

How we help

We can help your firm develop, implement, and maintain the required information security program to meet regulatory requirements and industry best practices, including: 

  • Support and advice to assess an organization’s cybersecurity risk, identify cybersecurity program gaps, and draft and execute against a mitigation roadmap.
  • Policy development, business continuity planning, and impact analysis complete with robust policies, plans, and procedures to better protect your company from data breaches and efficiently recover from a cyber incident or significant business disruption. 

For questions, or to find out how we can help you meet industry best practices, contact us here.

 


1. On April 25, 2023, OpenAI announced that users could disable ChatGPT’s chat history, which prevents user-entered text from being used to train the model. See the Data Controls FAQ in the OpenAI Help Center.