Managing the Cyber and Compliance Risks of ChatGPT and Other Large Language Models
Recent advances in large language models (LLMs), like OpenAI’s ChatGPT, have become some of the hottest topics of discussion for information security, technology, and compliance executives, as well as general employee populations. These tools offer an unparalleled opportunity for firms to enhance their productivity. With the ability to generate vast amounts of text and respond to natural language prompts with incredible speed, LLMs can revolutionize how businesses operate. However, they also present real privacy and security risks if employees use them improperly.
Join Mike Pappacena, Partner with ACA Aponix; Greg Slayton, Director with ACA Aponix; and Michael Abbriano, Managing Director at ACA, on May 23rd from 12:00 – 1:00 PM (ET) for a discussion of the risks and opportunities for firms using LLMs, and for a chance to ask questions about these tools.
Topics of discussion include:
- The key risks and mitigation techniques firms are using around LLMs (e.g., controls, policies, training).
- How firms are integrating LLMs into their workflows and validating the output of these tools for accuracy.
- Regulatory expectations and oversight for firms that choose to use these tools.
Speakers:
- Raj Bakhru, Chief Strategy Officer, ACA Group
- Greg Slayton, Director, ACA Aponix
- Michael Abbriano, Managing Director, ACA