Generative AI: Cybersecurity Friend And Foe


It’s clear that artificial intelligence has moved beyond being merely a curiosity of the future. Generative AI tools like OpenAI’s ChatGPT chatbot, the DALL-E 2 image generator, and the CarynAI and Replika virtual companions are being adopted by everyone from lonely people engaged in virtual romantic relationships to people creating aspirational photos for their social media profile pictures. On the business front, CEOs envision generative AI transforming their companies in areas as varied as data analytics, customer service, travel arrangements, marketing, and writing code.

In the world of cybersecurity, AI is creating just as much of a buzz. The RSA Conference, the largest cybersecurity conference in the world, was held in San Francisco in April and included perspectives on the risks and benefits of AI from government officials at the U.S. Cybersecurity and Infrastructure Security Agency (CISA), the National Security Agency (NSA), the National Aeronautics and Space Administration (NASA), and others. At the same conference, Google announced its new AI-powered Google Cloud Security AI Workbench, which takes advantage of advancements in large language models (LLMs). Also at RSA, SentinelOne announced an AI-powered cybersecurity threat detection platform with enterprise-wide autonomous response built on these same advancements, while Veracode announced Veracode Fix, which uses generative AI to recommend fixes for security flaws in code.

Tomer Weingarten is co-founder and CEO of SentinelOne, a leading cybersecurity company that counts Hitachi, Samsung, and Politico among its clients. He explains that generative AI can help tackle the two biggest problems in cybersecurity today: complexity, since few people understand the methodology and tools needed to defend against and counter cyberattacks; and the talent shortage created by the field’s high barrier to entry, which demands a very high level of proficiency.

“AI is super scalable to address all of these issues, and we’ve demonstrated the ability to move away from needing to use complex query languages, complex operations, and reverse engineering to now allow even an entry-level analyst to use a generative AI algorithm that can run automatically behind the scenes to translate in English or other languages to provide insights, and apply an automated action to remediate the issues it surfaces,” Weingarten said. “It’s completely transformative to how you do cybersecurity by taking away the complexity and allowing every analyst to be a super analyst. It’s almost like you’re giving them superpowers to do what they would normally take up to a few days to now do in seconds. It’s a true force multiplier.”

The other big problem in cybersecurity that generative AI tackles, according to Weingarten, is the fact that the cybersecurity industry was built with discrete, siloed products, each designed to tackle a specific aspect of cyber defense. “The true disruption of AI in cybersecurity comes from aggregating all that data into one central repository,” he said. “And then when you apply your AI algorithms on top of that data lake, you can start seeing compounded correlations between all those different elements that go into cybersecurity defense today. In these data intensive problems, AI allows you to become incredibly proficient at finding a needle in a haystack.”

Brian Roche, Chief Product Officer of application security company Veracode, explains the malicious side of AI in cybersecurity. “Hackers are using AI to automate attacks, evade detection systems, and even create malware that can mutate in real-time,” Roche said. “What’s more, Dark Web forums are already full of discussions on how to use generative AI platforms, like ChatGPT, to execute spearphishing and social engineering attacks.”

Roche asserts that AI solutions with a deep-learning model in natural language processing could take a preventive approach to cybersecurity, namely, by sharing suggested fixes for security flaws as developers are writing code. “This would reduce the need for developers to fix these flaws manually somewhere down the software development lifecycle of their application, saving time and resources. When trained on a curated dataset, this type of AI-powered solution would not replace developers, but merely allow them to focus on creating more secure code and leave the tedious yet highly important task of scanning and remediating flaws to automation,” Roche said.

Yet Roche cautions, “organizations need to be careful before committing to an AI solution, as an ill-trained AI model can do as much damage as none at all. AI models are only as good as the data that powers them – the better the data, the more accurate the results.”

Generative AI can allow malicious code to morph, creating a greater threat as it evades detection and traditional cybersecurity defenses. Cybersecurity defenses need to innovate and evolve a step ahead of cybercriminals if they are to remain effective.

To this, Weingarten notes that his theory about the benefits generative AI brings to cybersecurity, removing complexity and addressing the talent shortage, cuts both ways. Generative AI can help nation-state adversaries become more scalable and advanced, and it can also lower the barrier to entry for hackers. “AI can help entry-level attackers and adversaries gain capabilities previously reserved only for government grade attackers. Generative AI will supercharge the attack landscape,” Weingarten said. He adds that generative AI can also be used to create a fake video of a national leader providing information that supports an adversary’s nefarious objective, leaving viewers unable to know what is real, what is fake, and whom to trust.

The term “open source” refers to code that is publicly accessible, and which the owner allows anyone the ability to view, use, modify, share, or distribute. It can be argued that open source promotes faster development through collaboration and sharing. As reported by Business Insider writer Hasan Chowdhury, Google senior software engineer Luke Sernau “said open-source engineers were doing things with $100 that ‘we struggle with’ at $10 million, ‘doing so in weeks, not months,’” stating in a recently leaked Google memo that the open source faction is “lapping” Google, OpenAI, and other main technology companies when it comes to generative AI.

Weingarten feels that both open-source and proprietary code have a place. “But at the end of the day, open source and the transparency that comes with it, especially with such a foundational technology, is an imperative ingredient,” he said. “Particularly for more tech-savvy companies, we will leverage open-source algorithms because they can be more predictable for us, we understand how they work, we can train them to what we need.”

Reuben Maher, Chief Operating Officer of cybersecurity and analytics firm Skybrid Solutions, is pragmatic about a holistic cyber approach that incorporates both generative AI and open source. “The convergence of open source code and robust generative AI capabilities has powerful potential in the enterprise cybersecurity domain to provide organizations with strong – and increasingly intelligent – defenses against evolving threats,” said Maher. “On the one hand, generative AI’s ability to predict threats, automate tasks, and enhance threat intelligence is enhanced by the transparency and community support provided by open-source frameworks. It enables much faster enterprise-wide detection and response to vulnerabilities.”

“On the other hand,” continues Maher, “it’s a fine balance. Strong generative AI models can produce false positives and false negatives, which makes the decision-making process opaque. Open source code, despite its transparency and cost effectiveness, can leave the system exposed to attackers who might exploit discovered vulnerabilities until community support catches up.” Maher concludes, “These factors require a careful approach and, ultimately, the strategic application of these technologies could be a linchpin in securing your business in our increasingly connected digital world.”

So what’s the answer? Generative AI is here to stay and offers both risks and rewards to cybersecurity.

Maher suggests that an intelligent response on the cyber threat hunting side, at least proportional to that of the bad actors, is increasingly necessary to maintain pace. “Incorporating LLMs will become more and more common as open source players rapidly build much more sophisticated models that surpass the capabilities of global behemoths like Google, Microsoft, and OpenAI,” Maher said. “Leaders in generative AI cyber solutions will need to increase automation around more transactional tasks while limiting false positives and negatives – all while maintaining the trust of their users due to the increasing data privacy concerns around large volumes of sensitive or personal information.”

Weingarten notes that generative AI has made its widespread debut at a time when geopolitical tensions are high. “Adding a supercharged ingredient like AI to a boiling pot of unstable stew could really create further havoc, so guidelines for responsible use of generative AI are needed. Government regulation is probably the most important factor in all of this, and while there have been some attempts in Europe, we haven’t done this in earnest in the U.S.”

Maher concludes, “Although I understand the ‘pause for ethics’ that some global AI leaders wanted leading technology nations to put on developing generative AI capabilities, I disagree with that strategy as the criminals aren’t bound by our ethics. We simply can’t let criminals using LLMs lead the innovation in this area, resulting in everyone else playing catchup. The bad actors aren’t going to pause while we figure things out – so why should we? The stakes are too high!”

The conversation has been edited and condensed for clarity.
