Adapting Cybersecurity Teams: New Skills for Managing Legacy Systems

Cybersecurity teams require additional skillsets to cope with the rising adoption of generative artificial intelligence (AI) and machine learning. The evolving threat landscape and the need to safeguard the widening attack surface, including legacy systems, further complicates this need.As it is, cybersecurity teams are already struggling to hire enough talent. While the number of cybersecurity professionals in Asia-Pacific grew 11.8% year-on-year to just under 1 million in 2023, the region still needs an additional 2.67 million to; adequately secure digital assets. This demand for talent is driven by the expanding adoption of AI into more processes, driving the need for cloud computing. Globally, the cybersecurity workforce gap currently stands at nearly 4 million and is expected to almost double to hit full capacity.¬†While Singapore has seen a decrease in its cybersecurity workforce gap by 34%, ISC2 projects that another 4,000 professionals are needed to sufficiently protect digital assets in the country. According to the organization, 92% of cybersecurity professionals believe their organization has skills gaps in at least one area, including technical skills such as penetration testing and zero trust implementation. Cloud security and AI and machine learning top the list of skills that companies lack at 35% and 32% respectively.The emergence of generative AI has further complicated the need for cybersecurity professionals to acquire new skills. Cyberattacks are expected to increase in number as they’ve done for years, with the threats themselves remaining the same. It is crucial for security leaders to incorporate prompt engineering training for their team to better understand how generative AI prompts function. Additionally, there is an urgent need for penetration testers and red teams to include prompt-driven engagements in their assessment of solutions powered by generative AI and large language models. Researchers have been able to circumvent ethical restrictions and prompt AI tools to write malware, raising concerns about the potential for AI algorithms to be targeted and manipulated by hackers.Also, a wider attack surface, inadequate cybersecurity budget, and the need for more proficiency with API security, are critical challenges that organizations need to address. To meet these requirements, organizations must find the right training and upskilling resources while simplifying tech stacks to ease security management. Misconfiguration and falling behind security patches are among the most common mistakes that can lead to breaches, underscoring the urgency for organizations to support AI use with safeguards and risk management policies. Furthermore, with generative AI being deployed in a security operations center (SOC), cybersecurity professionals can benefit from chatbots providing quick insights into security incidents, lowering the entry level for cybersecurity skills.