Five Key Takeaways from the 2024 RSA Conference

Author: ISACA Now
Date Published: 28 May 2024
Read Time: 6 minutes

Editor’s note: Several ISACA members attended the 2024 RSA Conference earlier this month in San Francisco, California, and shared their top takeaways from the conference on industry trends in security, especially given the accelerated AI risk landscape, with ISACA Now. See their insights below. For more AI resources from ISACA, including information on new ISACA training courses in AI, visit www.isaca.org/ai. For information on upcoming ISACA conferences, including the 2024 GRC Conference and ISACA Conference Europe, visit www.isaca.org/training-and-events/conferences.

Rob Clyde, Past ISACA Board Chair

Even more vendors than last year are touting how they use artificial intelligence in their products. For example, I saw various products that use generative AI to offer a plain-English dialogue for asking questions about security events or posture, use AI to analyze security events and security posture, or automatically detect files and emails containing sensitive data. In some cases, the companies are AI-native and have developed their cybersecurity solutions from scratch using AI. In most cases, AI is a bolt-on with differing degrees of integration and success. In some cases, it is clear that the claimed use of AI is mostly lip service.

Organizations are looking to move beyond incident management alone and toward cyber resilience: the goal is that even in the face of a persistent ransomware attack, the organization's operations remain up and running. This would alleviate many of the incidents we have seen in which organizations were down for days or weeks at a time. Even when an organization has its data and systems backed up, recovery often takes too long, and victims sometimes pay the ransom in an effort to return to normal operations faster.

Varun Prasad, Senior Manager, BDO USA and ISACA Emerging Trends Working Group Member

The recently concluded RSA Conference in San Francisco had a landmark year. As one would expect, this was the year AI took center stage, serving as the focus of virtually every presentation and panel discussion, as well as the products on display at the expo. The conversations centered on two themes: the use of AI in cyber-defense techniques to improve our cybersecurity posture, and AI governance, risk and security.

One of my biggest takeaways from a panel discussion on AI was the importance of not getting caught up in micro-level risks, such as AI use policy or specific LLM-focused risks, but rather looking at the big picture and tackling the risks posed by the development and use of AI at a broader macro level. The key enterprise-level AI risks can be categorized into three buckets: data risks (data management for training and fine-tuning), AI development risks (algorithmic risks, development and deployment of models) and AI operations risks (monitoring for accuracy, bias or drift). This concept really resonated with me: it is crucial for an organization to identify the top risks in these areas, develop a framework, and implement the relevant processes and controls to help meet the organization's AI-related objectives and enhance the security and trustworthiness of its AI systems.

Additionally, there were other sessions on traditional topics such as cloud security, container security and application security. The main message was that, given the current geopolitical climate, we are seeing an increase in threat actors. It is therefore important to adopt an "assume breach" mentality, aggressively manage the attack surface, patch systems and stay cognizant of basic social engineering tactics.

Mea Clift, Principal Executive Advisor, Cyber Risk Engineering, Liberty Mutual Insurance

Likely the biggest takeaway from RSA was the ubiquity of "AI." From offsite meetings about emerging AI technology and discussions around governance and best practices, to the many vendors on the expo floor highlighting how their systems use AI capabilities, it was everywhere. But do we have a full understanding or description of AI? What used to be the stuff of science fiction is now a label applied to everything from statistics and machine learning to neural networks and large language models. Will tooling and technology eventually differentiate among these? And how can we protect these specific types of systems in their infancy, when different types of models carry different requirements, and what compromises will affect each of them specifically?

Materiality is the new word on the street regarding incidents and regulatory compliance. What is material, and what isn't? We seem to be moving away from pure risk quantification in methodologies like FAIR toward a broader understanding of the impact of a cyber event. This is a good thing: it recognizes that while an incident may not initially seem material, there could still be impacts years down the road. Lawyers and cyber leaders alike discussed the implications of the SEC requirements, and conversations were prevalent about cyber leaders' concerns over the personal legal ramifications of cyber incidents, the long-term repercussions of a compromise, and the reality that boards need to take a vested interest in including CISOs in D&O insurance coverage, as well as at the board table. With situations looming that could lead to incarceration or crippling fines for individuals involved in cyber incidents, these discussions were everywhere at the conference, and offsite as well.

Another thing to note was the shrinking of some vendor spaces at the event and the increase in offsite vendor events. Marketing budgets for cyber products seem to be shrinking, which could signal shrinking cyber budgets overall or simply a contraction among cyber service providers, something that has been expected for a while now.

Gaelle Koanda, CISA, CISM

My takeaway from the RSA Conference was multifaceted and deeply insightful. I was thoroughly impressed by the quality of the topics and panelists. The sessions, especially the one on privacy past and present, were incredibly informative, highlighting that data collection has been an issue for decades, making privacy a longstanding concern. The discussion of the CIA triad, dating back to 1987, underscored the critical need to address these privacy issues.

The prevalence of AI in discussions was particularly striking, with AI-related topics making up 85% of the sessions. This underscored the growing significance of AI and the urgent need to understand and mitigate its potential threats to privacy. As a newcomer to RSA, this focus on AI was eye-opening and emphasized the importance of staying informed about advancements in AI technology.

Additionally, I was delighted by the diverse representation at the conference, with significant participation from people of color and women. Organizations like WiCyS and Cyversity are making notable strides in promoting diversity and inclusion within the tech industry. 

Wickey Wang, ISACA Emerging Trends Working Group Member

The emerging trend of AI brings great efficiencies while also introducing new security challenges (advanced ransomware, deepfakes, targeted attacks, bots, etc.). The AI in this wave has three focus areas: text-to-text, text-to-image and text-to-video. You can see many combinations when the text-to-text focus crosses over each of the key areas of security and compliance, such as IAM and security operations, and we also see some companies leveraging multiple detection models to cover all perspectives. From the perspective of AI layers, most products operate at the AI application layer, while a few also address cybersecurity and compliance risks at the data infrastructure and model layers. Connect with me on LinkedIn and we can discuss further.
