Answering Key Questions About Embracing AI in Cybersecurity

October 12, 2023

As we witness a growing number of cyberattacks and data breaches, the demand for advanced cybersecurity solutions is becoming critical. Artificial intelligence (AI) has emerged as a powerful contender to help solve pressing cybersecurity problems. Let’s explore the benefits, challenges and potential risks of AI in cybersecurity.

Are we in an AI revolution?

Innovations in AI have propelled us into an AI revolution, leading to significant advancements in natural language processing, computer vision and decision-making capabilities. AI systems are becoming more general and human-competitive in a wide range of tasks. For example, recent breakthroughs in large language models (LLMs) and generative AI, such as ChatGPT and GPT-4, can write articles, generate code, create drawings and even pass a bar exam. It is becoming clear that these advances will have a profound impact on our society, including the potential to revolutionize the world of cybersecurity.

What are the benefits of using AI to solve cybersecurity problems?

Cybersecurity, in many instances, resembles the task of searching for needles in haystacks. With the implementation of AI, the process can become more efficient and scalable, as AI excels at identifying patterns and conducting data analysis on a large scale. AI offers various advantages in addressing cybersecurity challenges, including:

  • Detection of unknown, zero-day threats and anomalous behavior patterns, complementing heuristic and signature-based approaches (a minimal sketch of this kind of anomaly detection follows this list)
  • Automatic classification and discovery of sensitive data and enterprise digital assets
  • Simplification of complex policy configurations and management tasks, reducing the workload for cybersecurity professionals
  • Identification of truly suspicious users and devices by efficiently analyzing large volumes of alerts and logs
  • Enrichment of security incidents with additional intelligence and recommended response actions
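
To make the first item above more concrete, here is a minimal sketch of unsupervised anomaly detection over log-derived features, using scikit-learn's IsolationForest on synthetic data. The feature names, numbers and thresholds are illustrative assumptions for this post, not a description of Netskope's actual models.

```python
# Minimal anomaly-detection sketch over log-derived features (synthetic data).
# The features and numbers are placeholders, not real telemetry or a real model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend each row summarizes one user-hour of activity:
# [login_count, bytes_uploaded_mb, distinct_destinations]
normal = rng.normal(loc=[5.0, 20.0, 3.0], scale=[2.0, 10.0, 1.0], size=(500, 3))
suspicious = np.array([
    [40.0, 900.0, 60.0],   # login burst plus a large upload
    [2.0, 5.0, 150.0],     # fan-out to an unusual number of destinations
])
events = np.vstack([normal, suspicious])

# Fit an unsupervised model on the bulk of traffic and flag the outliers.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(events)  # -1 = anomaly, 1 = normal

anomalies = events[labels == -1]
print(f"Flagged {len(anomalies)} of {len(events)} events for analyst review")
```

The point is not the specific algorithm: any model that learns what "normal" looks like from large volumes of logs can surface the small number of events worth a human analyst's time.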

These benefits collectively lead to a more proactive and efficient approach to maintaining security and reducing cyberattack risks. With the help of AI-powered tools, security professionals will become much more productive in identifying bad actors and conducting threat investigations.

What are the challenges and risks of using AI?

Cybersecurity has its own challenges when it comes to the adoption of AI and machine learning technology.

  • False positives and negatives: AI may sometimes incorrectly classify threats or anomalous behavior, leading to either false alarms or missed threats. The costs of false positives and false negatives can be significant, presenting considerable challenges for AI models (see the short calculation after this list). General-purpose AI, without customized training and tuning, may not meet the required accuracy standards.
  • Privacy concerns: AI systems need access to large amounts of data to train and improve their algorithms, which raises questions around data privacy and protection. In addition, the use of AI in cybersecurity often involves the processing of sensitive and personal information, which must be handled with care to ensure compliance with data privacy regulations. The recent incident involving Samsung workers accidentally leaking trade secrets via ChatGPT serves as a reminder of the importance of data privacy in the context of AI cybersecurity. At Netskope, we take these concerns seriously and prioritize the responsible use of data in all our AI applications. We don’t use customers’ data for AI training unless they give us explicit permission.
  • Explainability and interpretability: In cybersecurity, it is crucial to understand how AI makes its decisions to ensure that the outcomes produced are reliable and consistent. However, some AI models can be highly complex and difficult to interpret, which makes it hard for security teams to understand how the AI is arriving at its conclusions. This challenge is compounded in the case of generative AI, which can produce highly complex and intricate patterns that may be difficult for humans to interpret and understand.
  • Vulnerability to adversarial attacks: Hackers may attempt to exploit AI models by crafting adversarial examples that cause the model to misclassify inputs, compromising its effectiveness. This risk is especially pronounced in the age of generative AI like ChatGPT/GPT-4, which can produce highly realistic synthetic data that may be difficult for AI security systems to distinguish from legitimate data.
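
The cost of false positives deserves special emphasis. The short calculation below, using entirely hypothetical numbers, illustrates the base-rate effect behind the first bullet: when real attacks are rare, even a detector with seemingly good accuracy can bury analysts in false alarms.

```python
# Base-rate illustration for false positives. All numbers are hypothetical,
# chosen only to make the arithmetic easy to follow.
daily_events = 1_000_000      # events scanned per day
attack_rate = 0.0001          # 1 in 10,000 events is actually malicious
true_positive_rate = 0.99     # detector catches 99% of real attacks
false_positive_rate = 0.01    # and mislabels 1% of benign events

attacks = daily_events * attack_rate          # 100 real attacks
benign = daily_events - attacks               # 999,900 benign events

true_alerts = attacks * true_positive_rate    # ~99 real detections
false_alerts = benign * false_positive_rate   # ~9,999 false alarms
precision = true_alerts / (true_alerts + false_alerts)

print(f"Alerts per day: {true_alerts + false_alerts:,.0f}")
print(f"Share of alerts that are real: {precision:.1%}")  # roughly 1%
```

This is why customized training, tuning and careful thresholding matter: without them, the volume of false alarms can erase much of the productivity gain AI promises.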

While AI can certainly bring significant benefits to cybersecurity, it is essential to recognize and address the potential challenges and risks associated with its use, especially in the age of generative AI like ChatGPT/GPT-4. By acknowledging and addressing these challenges, security teams can ensure that AI is used responsibly and effectively in the fight against cyber threats.

Are hackers using AI to their advantage?

I have no doubt that some cyber criminals are also exploiting AI for their benefit. As illustrated in this blog post, they can use AI technologies to improve their attack strategies and develop more sophisticated malware. This includes using AI to accelerate vulnerability exploitation, create self-propagating malware and automate the extraction of valuable information. In addition, attackers can use a tool like ChatGPT to sharpen their social engineering, helping them write convincing text for phishing emails that redirect victims to malicious websites or lure them into downloading attached malware.

Consequently, organizations must stay vigilant and adopt advanced AI-powered solutions to stay ahead of the ever-evolving cyber threat landscape.

There has been a lot of discussion lately about possible risks to humankind due to AI. Is the AI Labs team committed to acting responsibly?

To ensure safety and security, we need to work together as a society — AI developers, businesses, governments and individuals — to make AI systems more accurate, transparent, interpretable and reliable while minimizing privacy and security risks.

Netskope AI Labs is committed to the responsible use of artificial intelligence and machine learning. We work with peers, academia, thought leaders and governments alike to safely direct AI efforts in a way that will benefit and not cause harm to our customers, partners, employees and their families. We take precautions and maintain a posture of transparency in our efforts.

Summary

We are currently witnessing a golden age for AI research and development, as AI technologies continue to improve and gain unprecedented power with each passing day. We are thrilled about the potential of AI in the realm of cybersecurity. Organizations should embrace AI and invest in comprehensive security frameworks that include AI solutions while addressing the various challenges and potential risks, ensuring a secure adoption of AI in their cyber defense arsenal.

To learn more about how we use AI to solve different cybersecurity problems at Netskope, visit Netskope AI Labs.

Yihua Liao, PhD
Head of AI Labs | Netskope
Dr. Yihua Liao is the Head of AI Labs at Netskope. His team develops cutting-edge AI/ML technology to tackle many challenging problems in cloud security, including data loss prevention, malware and threat protection, and user/entity behavior analytics. Previously, he led data science teams at Uber and Facebook.
