Partnering to Achieve Low-Risk AI

May 10, 2023

Introduction

In this blog post, we will explore how security organizations can successfully partner with development organizations to capture the power of AI-based systems within an enterprise while mitigating the risks.

The Power of AI

Let's first level set on the fact that artificial intelligence (AI) and machine learning (ML) are powerful. Use of publicly available tools such as ChatGPT has exploded over the last few quarters. Why? Because they are useful! People are using such tools for everything from writing college essays, to writing code, to solving complex engineering problems. The increased use of AI engines puts us on the precipice of incredible breakthroughs. A recent study published in JAMA Internal Medicine found that evaluators preferred responses from an AI chatbot over those from physicians in nearly 80% of cases. This is but one example of how AI technology can vastly improve operational efficiencies and, potentially, patient outcomes.

The Risk of AI

Despite the benefits of AI/ML, we must also consider the risks involved. Let's zero in on machine learning, in which data is ingested to train the tool; once ingested, the data refines the tool and its operation. Some recent examples of these risks being exploited include:

Data Disclosure to a Public AI – Consider the recent disclosure that, on numerous occasions, Samsung developers used ChatGPT to review confidential code. In doing so, that code was ingested into the ChatGPT dataset and could subsequently be retrieved.

Data Poisoning to a Public AI – Researchers at the Chinese firm Tencent demonstrated a technique that used specially crafted inputs to trick a Tesla vehicle's onboard AI into steering into oncoming traffic.

It's important to understand that AI as it exists today is ultimately an input/output (IO) system: what goes in may eventually come out.
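To make the poisoning risk concrete, here is a minimal sketch using a toy nearest-centroid classifier and invented data (both are purely illustrative, not any production system): an attacker who can slip a handful of mislabeled points into the training set can flip the model's prediction on a point it previously classified correctly.

```python
# Toy demonstration of a data-poisoning (label-flipping) attack.
# All data and class names are invented for illustration.

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(samples):
    """samples: list of ((x, y), label). Returns one centroid per class."""
    by_class = {}
    for point, label in samples:
        by_class.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_class.items()}

def predict(model, point):
    """Assign the class whose centroid is closest (squared distance)."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(model, key=lambda label: dist2(model[label], point))

# Clean training data: "safe" clusters near (0, 0), "unsafe" near (10, 10).
clean = [((0, 0), "safe"), ((1, 1), "safe"), ((0, 1), "safe"),
         ((10, 10), "unsafe"), ((9, 10), "unsafe"), ((10, 9), "unsafe")]

# The attacker poisons the set by mislabeling points from the unsafe cluster.
poisoned = clean + [((9, 9), "safe"), ((8, 9), "safe"), ((9, 8), "safe")]

clean_model = train(clean)
poisoned_model = train(poisoned)

probe = (6, 6)  # closer to the "unsafe" cluster than to the clean "safe" one
print(predict(clean_model, probe))     # -> unsafe
print(predict(poisoned_model, probe))  # -> safe: the poisoning worked
```

Three mislabeled points out of nine were enough to drag the "safe" centroid toward the attacker's target region; real poisoning attacks follow the same logic at scale.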
Let's pause for a minute on the research done by Tencent. If a vehicle's onboard AI can be tricked into performing arbitrary actions through specially crafted inputs to its sensors, imagine the ramifications. Picture a bad actor injecting malicious images onto a freeway sign on a major U.S. interstate highway with a high prevalence of AI-driven vehicles.

Because of these risks, OWASP (the Open Web Application Security Project) has developed the Machine Learning Top 10 list, which provides a current look at the risks as they exist today. As this is the first release of the ML Top 10, the entries are expected to change and evolve over time.

ML01:2023 Adversarial Attack
ML02:2023 Data Poisoning Attack
ML03:2023 Model Inversion Attack
ML04:2023 Membership Inference Attack
ML05:2023 Model Stealing
ML06:2023 Corrupted Packages
ML07:2023 Transfer Learning Attack
ML08:2023 Model Skewing
ML09:2023 Output Integrity Attack
ML10:2023 Neural Net Reprogramming

Partnering Between Security and Developers

So how do security and development teams – two teams that are often at odds – work together to achieve the promised benefits of AI/ML while minimizing the risks? Let's break down some recommended strategies.

Awareness and education – Few people actually intend to cause a major breach at their organization. As security practitioners, we need to say "N-O" less and "K-N-O-W" more. Educate developers on supply chain security best practices. For example, avoid using pre-built models from untrusted sources: they may contain backdoors, malicious code (which can execute the moment the model is loaded into memory), or biases that give attackers an advantage. Many development education programs, whether bootcamps or collegiate degrees, often minimize security within their curriculum.
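One simple way to operationalize the "untrusted pre-built models" advice is to pin the cryptographic digests of approved model artifacts and refuse to deserialize anything else. The sketch below is illustrative, not prescribed tooling; the file contents and allowlist are invented for the demo.

```python
# Hedged sketch: verify a model artifact against a pinned SHA-256 digest
# BEFORE deserializing it, since deserialization is where embedded
# malicious code could execute.

import hashlib
import os
import tempfile

def sha256_of(path, chunk_size=65536):
    """Stream the file so large weight files need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_model_if_trusted(path, allowlist):
    """Refuse any artifact whose digest is not on the pinned allowlist."""
    actual = sha256_of(path)
    if actual not in allowlist:
        raise ValueError(f"untrusted model artifact: {actual}")
    # Only past this point would the file be handed to a real loader.
    return path

# Demo with a stand-in "weights" file (invented content).
with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as f:
    f.write(b"pretend these are model weights")
    model_path = f.name

allowlist = {sha256_of(model_path)}  # digest pinned at review time
assert load_model_if_trusted(model_path, allowlist) == model_path

# Tampering changes the digest, so loading is refused.
with open(model_path, "ab") as f:
    f.write(b"attacker payload")
try:
    load_model_if_trusted(model_path, allowlist)
    raise AssertionError("tampered model was accepted")
except ValueError:
    print("tampered artifact rejected")

os.remove(model_path)
```

The same gate fits naturally into a CI/CD pipeline, where the allowlist is maintained under change control rather than alongside the application code.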
While they typically include some superficial coverage – perhaps the OWASP Top 10 – it is also important to make this information meaningful for developers by showing them the consequences of any given vulnerability. If developers can see why a security control is important, they will be much more likely to partner in implementing it. Developers are often primarily focused on pushing new features to production, so it is key that they understand the consequences of missing controls in governance, portfolio management and other components external to the DevOps pipeline. Also critical is targeted education on specific controls or tools that minimize disruption to the development process.

Threat Modeling – Threat modeling and architecture reviews of AI/ML systems ensure that the system as a whole is hardened and resilient against adversarial attacks. These reviews are also key to understanding the flow of data into and out of AI/ML systems, allowing an organization to minimize its risk by limiting or anonymizing the data flowing into external AI-based systems.

Code reviews of AI/ML applications are valuable when considering the set of attacks specific to AI/ML systems. Pen testing and red teaming of critical AI/ML systems help uncover key vulnerabilities. Training on adversarial learning techniques allows security professionals to think like the adversary and, likewise, allows AI/ML systems to self-defend.

Conclusion

By considering the risks and taking a common-sense, security-first approach, an organization can unleash the power of AI while minimizing the risks.

Interested in learning more about AI? Check out this blog post by Randy Lariar, "AI and the Art of Avoiding Cyberattacks."
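To close with something concrete: the "limit or anonymize" recommendation above can be prototyped cheaply as a gateway step that scrubs obvious identifiers from a prompt before it leaves the organization for an external AI service. The patterns and placeholders below are illustrative only, not a complete PII taxonomy or a production redaction engine.

```python
# Hedged sketch: redact recognizable identifiers from outbound prompts.
# Patterns are illustrative; a real deployment would use a vetted PII
# detection service and a much broader taxonomy.

import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),   # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),       # U.S. SSN format
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),     # card-like digit runs
]

def redact(text):
    """Replace recognizable identifiers before the text is sent upstream."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = ("Summarize this ticket: user jane.doe@example.com reports her "
          "SSN 123-45-6789 appeared in a log file.")
print(redact(prompt))  # identifiers replaced with [EMAIL] and [SSN]
```

A scrubber like this sits naturally at the same chokepoint where an organization logs and rate-limits calls to external AI APIs, so nothing reaches the external model unreviewed.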
By: Nick Hawley
Practice Manager | Optiv

Nick Hawley is a senior security leader with more than 20 years of experience spanning multiple industry segments, both in industry and in services. He has worked across multiple modalities within the security space, including Security Operations, Network Security, Application Security, Threat Modeling and Secure SDLC. Over this time, Nick has developed into a strong people leader and thought leader.

Optiv Security: Secure greatness.®
Optiv is the cyber advisory and solutions leader, delivering strategic and technical expertise to nearly 6,000 companies across every major industry. We partner with organizations to advise, deploy and operate complete cybersecurity programs from strategy and managed security services to risk, integration and technology solutions. With clients at the center of our unmatched ecosystem of people, products, partners and programs, we accelerate business progress like no other company can. At Optiv, we manage cyber risk so you can secure your full potential. For more information, visit www.optiv.com.