One Year Later: Managing Security in the Age of ChatGPT

December 5, 2023

After exploding into the public consciousness in November 2022, ChatGPT has sparked interest and investment in generative AI and Large Language Models (LLMs) at record-breaking speed and scale. Enterprises are discovering the many benefits of AI, such as automating content creation, enhancing data analysis and improving customer experience.


They are also discovering that those same powerful capabilities can pose significant challenges and risks, including security, privacy and ethical issues. Governance and oversight of AI have become top concerns for leaders.1 Some companies have used the excitement around ChatGPT as an opportunity to build out a more comprehensive AI governance program that addresses the risks and benefits of generative AI tools while also enabling the secure adoption of AI. Many more will start this process in the coming year.2


Generative AI may appear foreign to enterprise IT and security teams at first, but fear not: Generative AI is no different from any other software.3 Organizations should treat Generative AI business applications just like any other modern application, and all the security practices and tools that secure modern applications should be used to secure and optimize the new use cases Generative AI enables.


Characteristics of modern applications include:


  • Cloud-hosted and scalable
  • Modular and loosely coupled
  • Continuously deployed


Generative AI usage (also called inference) is accessed via APIs, just like other modern applications. The high composability of these tools and APIs has enabled the rapid spread of LLMs this year. AI is often deployed on Kubernetes, in the cloud or on-premises, alongside other microservices. Private data sources are key to augmenting LLMs with “ground truth” that mitigates the errors known as “hallucinations.” These data sources are often distributed across hybrid or multi-cloud environments.
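The grounding pattern described above can be sketched in a few lines. This is a minimal, hypothetical retrieval-augmented generation (RAG) flow: the document store, the keyword-overlap scoring, and the field names are all illustrative placeholders (real systems use vector embeddings and a managed data store), and the assembled prompt would be sent to an LLM inference API.

```python
# Minimal RAG sketch: ground an LLM prompt in private documents before
# calling an inference API. Documents and scoring are illustrative only.

DOCUMENTS = {
    "refund-policy": "Refunds are issued within 14 days of purchase.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    """Naive keyword-overlap ranking; production systems use embeddings."""
    q_words = set(question.lower().split())
    scored = sorted(
        DOCUMENTS.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(question: str) -> str:
    """Prepend retrieved 'ground truth' so the model answers from it."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_grounded_prompt("How long do refunds take?")
```

Because the prompt is assembled from private data at request time, every hop in this flow is an API call crossing the environments the article describes, which is why the proxy-based controls below apply directly.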


This means that enterprise IT and security teams can use existing proxy-based security solutions such as WAF, DDoS Mitigation, API Security, API Gateway and Ingress services for Kubernetes. These teams can also use existing tools to secure app-to-app connectivity across hybrid and multi-cloud environments.


Deploying a proxy layer in front of both applications and LLMs can mitigate Denial of Service (DoS) attacks and data leakage, and it provides central enforcement points for security and optimization before end users reach Generative AI business applications. These controls protect against attackers as well as careless run-ups in expensive API calls.
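One common enforcement mechanism at such a proxy layer is a token-bucket rate limiter, which caps both attack traffic and accidental run-ups in costly LLM API calls. The sketch below is a generic illustration, not taken from any specific product; the rate and capacity values are arbitrary examples.

```python
import time

class TokenBucket:
    """Per-client token bucket: tokens refill at `rate` per second up to
    `capacity`; each request spends one token, excess requests are rejected."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=5)
results = [bucket.allow() for _ in range(10)]  # burst of 5 allowed, then throttled
```

A proxy would keep one bucket per client or API key, so a single runaway integration cannot exhaust a shared LLM budget.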


APIs give all firms access to LLM inference without the need to maintain expensive AI training and serving platforms. This democratizes access to these benefits beyond massive technology companies, but it also makes API security a natural focus for companies leveraging Generative AI. It is a great opportunity for enterprise IT and security teams to review their approach to API security and mitigate the risks associated with Generative AI.


While many businesses understand the importance of improving API security, they often struggle to understand exactly how to do so. Here are five API security musts that businesses can work to implement:


  1. AI: Automations driven by AI models can make the difference in pre-empting a problem. While AI is heavily hyped and buzzed about in 2023, the technology has been essential to cybersecurity for years.
  2. API Discovery: Businesses can’t secure what they can’t see. For various reasons, the number of unknown and unmanaged API endpoints is quite large in most environments.
  3. Schema Enforcement: Schemas exist for good reason: they allow businesses to control the expected inputs and outputs to and from APIs. When there is a departure, or drift, from a schema, it can be for a few different reasons. Perhaps the developers introduced new functionality that did not go through proper security channels; perhaps vulnerabilities were inadvertently introduced into the API; or perhaps the API has been tampered with somehow. Regardless of the reason, enforcing schemas is a great way for businesses to control access to APIs and the data and back-end processes they expose, and to improve API security overall.
  4. Exposure of Sensitive Data: Remaining competitive in today’s market requires exposing sensitive data via APIs. Obviously, this exposure should be done in a measured way, as required for business purposes and within acceptable risk. Flagging exposure of sensitive, private and/or confidential data is another important component to an API security approach.
  5. Layer 7 DDoS Protection: APIs live and breathe at layer 7 of the OSI model. While many businesses have DDoS protection at layers 3 and 4 of the OSI model, they may be ill-prepared for a DDoS attack at layer 7. Having the capability to detect and respond to a layer 7 DDoS attack is another important part of an overall API security strategy.
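API discovery (item 2) often starts from traffic that already exists: access logs. The sketch below compares endpoints seen in logs against a known inventory to surface unmanaged APIs. The log format, endpoint paths, and inventory are hypothetical examples.

```python
# API discovery sketch: find endpoints in access logs that are not in
# the managed inventory. All paths and log lines are illustrative.

KNOWN_ENDPOINTS = {"/v1/chat", "/v1/embeddings"}

LOG_LINES = [
    "POST /v1/chat 200",
    "POST /v1/embeddings 200",
    "POST /v1/legacy-completions 200",  # present in traffic, absent from inventory
]

def discover_unknown(log_lines: list[str], known: set[str] = KNOWN_ENDPOINTS) -> list[str]:
    """Return endpoints observed in traffic but missing from the inventory."""
    seen = {line.split()[1] for line in log_lines}
    return sorted(seen - known)
```

Running `discover_unknown(LOG_LINES)` flags the legacy endpoint, turning "can't secure what they can't see" into a concrete, repeatable check.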
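Schema enforcement (item 3) can be illustrated with a simple validator that rejects drift in either direction: missing or mistyped fields, and unexpected extra fields. The field names and schema format here are illustrative; production gateways typically enforce OpenAPI/JSON Schema definitions instead.

```python
# Schema-enforcement sketch: flag any departure from the declared request
# schema. Field names and types are hypothetical examples.

EXPECTED_SCHEMA = {
    "prompt": str,
    "max_tokens": int,
}

def validate(body: dict, schema: dict = EXPECTED_SCHEMA) -> list[str]:
    """Return a list of violations; an empty list means the body conforms."""
    errors = []
    for field, expected_type in schema.items():
        if field not in body:
            errors.append(f"missing field: {field}")
        elif not isinstance(body[field], expected_type):
            errors.append(f"wrong type for field: {field}")
    # Unknown fields are drift too -- often a sign of unreviewed
    # functionality or tampering, as the article notes.
    for field in body:
        if field not in schema:
            errors.append(f"unexpected field: {field}")
    return errors
```

A gateway applying this check can block the request outright or log the drift for review, depending on risk tolerance.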
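Flagging sensitive data exposure (item 4) is often implemented as pattern matching on response bodies at the proxy. The patterns below are deliberately simplified examples, not a complete PII taxonomy, and real deployments combine such rules with data classification tooling.

```python
import re

# Sensitive-data flagging sketch: report which categories of sensitive
# data appear in an outbound response. Patterns are simplified examples.

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the categories of sensitive data detected in `text`."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

flag_sensitive("Contact alice@example.com, SSN 123-45-6789")
```

Flagged responses can then be blocked, redacted, or routed to review, keeping exposure "measured and within acceptable risk" as the article puts it.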
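Layer 7 DDoS detection (item 5) differs from the layer 3/4 case because each request is individually valid HTTP; detection is about per-client request rates. The sliding-window sketch below illustrates the idea; the window size and threshold are arbitrary example values.

```python
import time
from collections import deque

class SlidingWindowDetector:
    """Track per-client request timestamps and flag clients whose request
    rate within the window exceeds a threshold -- a basic L7 flood signal."""

    def __init__(self, window_s: float = 1.0, max_requests: int = 100):
        self.window_s = window_s
        self.max_requests = max_requests
        self.history: dict[str, deque] = {}

    def record(self, client: str, now: float | None = None) -> bool:
        """Record one request; return True if the client looks abusive."""
        now = time.monotonic() if now is None else now
        q = self.history.setdefault(client, deque())
        q.append(now)
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window_s:
            q.popleft()
        return len(q) > self.max_requests

det = SlidingWindowDetector(window_s=1.0, max_requests=3)
flags = [det.record("10.0.0.1", now=t) for t in (0.0, 0.1, 0.2, 0.3, 0.4)]
```

In practice this logic lives in the DDoS mitigation or WAF tier, where flagged clients can be challenged or rate-limited rather than simply dropped.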


APIs are at the heart of the AI revolution taking place across the world. While there is a business need to move quickly and nimbly on APIs, security need not be sacrificed to do so.


With the surge of Generative AI-related projects within organizations, this is an opportunity to revisit the security capabilities your organization already has and to mitigate the risks associated with the use of Generative AI. Generative AI is not magic, and existing proxy-based security solutions can go a long way toward securing your Generative AI business applications.



Randy Lariar
Practice Director - Big Data & Analytics | Optiv
Randy leads Optiv’s Big Data and Analytics practice, a part of the Cyber Digital Transformation business unit. He helps large firms to build teams, lead programs, and solve problems at the intersection of technology, data, analytics, operations, and strategy.

Randy leads a cross-functional team of data engineers, data scientists, and cybersecurity professionals who advise and implement value-aligned solutions for large-scale and fast-moving data environments. His clients include top firms managing their cyber-related data as well as organizations seeking to unlock new insights and automations.
Yuichi Miyazaki
Director of Business Development | F5
Yuichi Miyazaki is a visionary leader and a cybersecurity expert who serves as the director of business development at F5, and currently leads F5's AI ecosystem development. With over two decades of experience in the tech industry, he has a proven track record of creating and executing successful strategies that drive growth and innovation for his customers.
Josh Goldfarb
Security and Fraud Architect | F5
Josh Goldfarb is a security and fraud architect at F5. He applies his analytical methodology to help enterprises build and enhance their network traffic analysis, security operations and incident response capabilities to improve their information security postures.

Optiv Security: Secure greatness.®

Optiv is the cyber advisory and solutions leader, delivering strategic and technical expertise to nearly 6,000 companies across every major industry. We partner with organizations to advise, deploy and operate complete cybersecurity programs from strategy and managed security services to risk, integration and technology solutions. With clients at the center of our unmatched ecosystem of people, products, partners and programs, we accelerate business progress like no other company can. At Optiv, we manage cyber risk so you can secure your full potential. For more information, visit