Bugcrowd recently released the seventh edition of our annual flagship report, Inside the Mind of a Hacker. This report explores trends in ethical hacking, the motivations behind these hackers, and how organizations are leveraging the hacking community to elevate their security posture. This year’s edition takes a special look at the ways cybersecurity is changing as a result of the mainstream adoption of generative AI. As a part of this exploration, we interviewed David Fairman, CIO and CSO of Netskope. We’ve included a sneak peak of that interview in this blog post. Download the report here to learn more about how hackers are using AI technologies to increase the value of their work.
Tell us a little bit about yourself and Netskope.
I have over 20 years of security experience in a range of disciplines from fraud and financial crime to business continuity to operational risk. I’ve worked for, and consulted to, several large financial institutions and Fortune 500 companies across the globe, been recognized as one of the top CISOs to know, am a published author, an adjunct professor, and was involved in founding several industry alliances with the aim of making it safer to do business in the digital world.
For the past three years, I’ve been Chief Information Officer and Chief Security Officer for the Asia Pacific region at Netskope. Netskope is a global SASE leader helping organizations apply zero trust principles to protect data and modernize their security and network infrastructure. Netskope has been a Bugcrowd customer for over a year.
How are generative AI applications revolutionizing the way organizations operate, and what are the potential cybersecurity risks associated with their use?
AI has been around for many years, so there are a number of risks associated with AI. AI is transforming business through hyper-automation, identifying new business models and trends, speeding up decision making, and increasing customer satisfaction.
Prior to late 2022, AI required specialized skill sets and vast amounts of training data; consequently, it was not used in the mainstream. The launch of ChatGPT made generative AI accessible to the masses. The barrier to entry has lowered, which means the adoption and use of this powerful technology is being taken up at a rapid pace. This means the risks that are associated with AI can have a large impact, more so than ever before.
There are a number of risks that need to be considered, including data poisoning, prompt injection, and model inference—and these are just a few of the technical risks. There are also responsible AI elements that need to be considered, such as bias and fairness, security and privacy, robustness and traceability.
What are the possible ways sensitive data can be inadvertently exposed through generative AI applications, and how can organizations mitigate these risks?
Generative AI uses prompts to take inputs from a user and produce an output based on its logic and learning. Users can input sensitive data, such as personal information and proprietary source code into the large language model (LLM). This information could then be accessed or produced as output for other uses of the LLM. Users should be cognizant of the fact that any data they input into an LLM will be treated as public data.
Many organizations are asking—should we permit our employees to use generative AI applications like ChatGPT or Bard? The answer is yes, but only with the right modern data protection controls in place.
What impact does the use of generative AI have on threat attribution, and could it blur the lines between adversaries, making it challenging for organizations or governments to respond effectively?
There are two sides to this question. On one hand, defenders will be able to use AI to perform threat attribution (and threat intelligence more broadly) to speed up the process, better defend their organizations, and respond more effectively than ever before.
Conversely, threat actors will be using this to their advantage to increase their capability to attack—at a scale and velocity never seen before. We, the defenders, need to lean into how we can leverage this to transform our defensive capabilities.
Could generative AI applications lead to the development of “self-healing systems,” and if so, how might this change the way organizations approach cybersecurity?
I think this has to be the case. I’ve said this for a long time—we need to find ways to operate at machine speed. When we talk about ‘mean time-to-detect’ and ‘mean time-to-contain,’we’re reliant on human beings in the process, which can slow it down significantly. We know that time is critical when it comes to defending an organization—the faster, more efficiently we do this, the better we will protect our companies and customers. Self-healing systems will be one piece in this jigsaw puzzle.
As generative AI becomes more prevalent in cybersecurity, how do you think the role of security professionals will evolve, and what implications does a future with more human-machine collaboration have for informed decision making in cybersecurity?
I think cyber practitioners increasingly become the ‘trainers’ of AI—using their cyber expertise to train models to perform cyber analysis at pace and at scale. There will always be a need to have a human in the loop in some respect, whether that be in the training of the model, the monitoring and supervision of the model (to ensure that it is behaving the way it is expected and is not being manipulated), or in the generation of new models.