Cybersecurity in the AI Era

May 9, 2023

This is the second blog in a four-part series on AI and cybersecurity. Catch up on the series by reading the first blog - AI and the Art of Avoiding Cyberattacks.

 

As artificial intelligence (AI) continues to grab headlines and appear on every company’s product roadmap, its implications on cybersecurity are becoming increasingly significant. Demonstrations of AI performing automation tasks with human-like capabilities have excited investors, boards and executives of companies of all sizes. Less discussed, but not less impressive, is the fact that AI is equipping cybercriminals with more powerful tools to scale their attacks and deliver more damage. The landscape of cyber threats is evolving, driven by the force of AI. This blog post explores these emerging challenges and suggests ways to fortify our defenses.

 

 

AI-Powered Social Engineering: Trust No One

The detection of spam and phishing emails is one of the most common uses of machine learning. Since as early as 1998, we have seen dozens of publicly available datasets for training various models on spam. Researchers, technology companies and internet service providers have implemented many rules-based and supervised machine learning-based detections. Even so, these systems frequently have false positives (blocking of legitimate emails) and false negatives (spam/phishing content gets through).

 

Many phishing emails have historically had some common markers of fraud. The text of the message would betray that the writer wasn’t a native speaker, or the grammar would be too perfect. An email from a friend or trusted colleague could be detected as fake by the context. With the advent of AI and large language models, it has become immensely easier to create convincing phishing emails. A small amount of scripting can allow scammers to create multiple variations of phishing emails and easily test which ones are most effective against their target persona. A prompt-driven approach can even create customized phishing emails, so marking an individual email as spam/phishing may not be enough to notify and block all variants.

 

According to some research, there is already a significant rise in social engineering attacks attributable to large language models. This will likely continue to increase as various open-source models fall into the hands of scammers. And it’s not just emails: the same sophistication of AI that is enabling deep fakes and voice replication is impacting social engineering by voice calls and text messages. AI's capability to generate realistic-sounding voices has led to an increase in scams—particularly targeting older individuals—impersonating loved ones or authorities to extort money.

 

The detection of deep fakes and LLM-generated text lags behind the models to create the content. Unfortunately, there aren’t many good options for detecting these kinds of deception on their own. User and entity behavior monitoring is an important approach to find unusual activities in your environment that could be due to AI.

 

 

Hacking Co-pilot: Accelerating Criminal Behavior

AI has shown a lot of promise for assisting with code development and error troubleshooting. Language models trained on extensive documentation of how technology works are excellent at providing recommendations for how to use that technology. They’re also very useful for translating between coding languages or describing what a patch of code does. Developers and engineers around the world are experiencing the acceleration of working with computers thanks to large language models. Unfortunately, so are attackers.

 

Even though organizations like OpenAI are doing their best to prevent their AI models from giving out hacking instructions, hackers have numerous tools and strategies to circumvent these controls. Many prompts have been shown to trick AI models into ignoring safety features to access the underlying knowledge in unauthorized ways. These techniques are sufficient to allow some bad behavior, but the real threat will likely come from non-commercial models.

 

Criminals and hobbyists have plenty of open-source tools to train large language models. General task performance in these models will not be as impressive as it is with the gigantic commercial models that OpenAI, Google, and Meta have built, but it doesn’t have to be. Language models can be built just for use with hacking and accelerating criminal tasks. Further, the effort required to modify an existing model is significantly less than building one from scratch. In late February 2023, Meta’s Llama model was leaked. Since then, the generally capable large language model has been modified and trained in several ways. Stamford researchers have released their “Alpaca” variation of the model, which has similar performance to OpenAI’s GPT-3.5 and requires about $600 to train.

 

Many companies are following suit and are using the approach demonstrated by Alpaca to build internal large language models custom trained for their purposes. Criminals are likely doing the same. We expect that there will soon be evidence of models assisting and automating hacking tasks in ever more sophisticated ways. This will decrease the barrier to entry for cybercrime and make existing criminals more efficient.

 

Moreover, researchers have found that AI can be used to generate polymorphic code—a type of code that alters itself every time it runs, making it harder to detect. This code may evade multiple lines of endpoint detection defense by using a language model to drive its mutation. Many cyber detections are driven by pattern matching based on known malware. Code that can use written language to define its capabilities may become a common and effective attacker.

 

 

AI as a New Attack Surface

Social engineering and hacker copilots are examples of existing criminal techniques enhanced by AI. The description wouldn’t be complete without including the addition to the attack surface that new AI will provide. Organizations are rushing to integrate AI into their systems, yet many will inadvertently expose themselves to new types of risk. AI models that are exposed to the internet or placed within internal networks can be attacked in numerous ways. Prompt injections can cause the model to reveal sensitive information or behave in unintended ways. Supply chain attacks can occur on new additions to the company technology environment, such as the GPU hardware, training data, ML software or a corruption of the model itself. Models can be spammed, forced to shut down, corrupted to not be effective, used to steal data or included in broader cyberattacks.

 

Microsoft has announced AI functionality in much of its Office suite. Google is piloting the same in their software. Plugins for browsers and extensions of existing software promise to integrate AI to touch everything we do. Software companies worldwide are building AI into their product roadmaps. Each new model represents new risk and potential exposure.

 

 

Preparing for the AI-Driven Threat Landscape

Recognizing these emerging threats, it is crucial for businesses to stay informed about the latest developments in AI and cybersecurity. Organizations should conduct a thorough review of their cyber readiness for AI-enabled attackers. They should also work to include AI considerations in all parts of their risk and controls processes. Robust policies and controls should be developed in coordination with departments across data governance, privacy, risk, legal, IT and business stakeholders.

 

AI models should be monitored. Every prompt and response should be logged for review and “threat hunting.” A monitoring architecture also enables the creation of automated rules and alerting that can create controls around AI’s use within the firm. Likewise, the application logs of AI-enabled technology should be tracked. This enables a broad range of use cases beyond cybersecurity, including IT observability, user experience, regulatory compliance and product development. Organizations should recognize that they can leverage shared capacities that already exist to solve unique AI-related challenges.

 

Removing AI from an environment will become as impractical as removing employees’ web and email access. A large extension of preparing for AI threats will include end-user education. This should include trainings about what kinds of information should be shared with public AI models and guidance around best practices to avoid blindly accepting untrue statements. It's essential to cultivate an environment of healthy skepticism about AI’s use and equip employees with the skills to recognize and respond to potential threats.

 

Organizations should be sure to keep cybersecurity processes and systems up to date. As AI evolves and new threats emerge, our defenses must evolve, too. This includes regular risk assessments, ongoing reevaluation of potential vulnerabilities, applying updates and software patches, monitoring and fortifying network security, and having robust recovery plans. If Development Security Operations wasn’t already a part of the environment, the widespread use of AI to write code is a call to action. Since large language models will not include information vulnerabilities discovered after they were created, they can easily write risky code. A separate process and tooling should be used to monitor all code, which can go into production or interact with other systems.

 

As much as AI can pose a threat, it can also be a powerful tool for cyber defense. In the next part of this series, we will delve into the real and potential uses of AI in cybersecurity. This forthcoming blog post will address exciting developments that will boost cyber teams and allow organizations to evolve in response to the changing threat landscape.

 

AI is continually changing the way we think about cybersecurity. If you want to discuss what AI means to your business, we are here to help. Learn about Optiv’s big data and analytics services and reach out to have a conversation.

Randy Lariar
Practice Director - Big Data & Analytics | Optiv
Randy leads Optiv’s Big Data and Analytics practice, a part of the Cyber Digital Transformation business unit. He helps large firms to build teams, lead programs, and solve problems at the intersection of technology, data, analytics, operations, and strategy.

Randy leads a cross-functional team of data engineers, data scientists, and cybersecurity professionals who advise and implement value-aligned solutions for large-scale and fast-moving data environments. His clients include top firms managing their cyber-related data as well as organizations seeking to unlock new insights and automations.

Optiv Security: Secure greatness.®

Optiv is the cyber advisory and solutions leader, delivering strategic and technical expertise to nearly 6,000 companies across every major industry. We partner with organizations to advise, deploy and operate complete cybersecurity programs from strategy and managed security services to risk, integration and technology solutions. With clients at the center of our unmatched ecosystem of people, products, partners and programs, we accelerate business progress like no other company can. At Optiv, we manage cyber risk so you can secure your full potential. For more information, visit www.optiv.com.