AI and the changing cyberthreat landscape make data management crucial in 2024
• The cyberthreat landscape will likely be impacted by AI in 2024.
• The question is whether the cyberthreat landscape will be improved or made worse.
• The landscape could become a practical AI stalemate.
AI can impact the cyberthreat landscape in both positive and negative ways. As the technology's presence in organizations grows, a balanced and proactive approach is needed to ensure the security and resilience of digital systems and society.
There is no denying that AI can be a powerful ally or a formidable adversary, depending on how it is used and by whom. As AI becomes more ubiquitous and influential, organizations need to be aware of its potential and limitations and develop ethical and responsible ways to use it for good.
There are also economic and geopolitical challenges that can affect how businesses use AI. With regulations expected to be a key turning point for AI next year, Dr Joye Purser, CISSP, field chief information security officer at Veritas Technologies, believes businesses can expect to see more threat actors feeding disinformation into AI and machine learning technologies, causing such tools to misbehave, mislead, and become disruptive.
“This is an area that needs great thought into how to provide protection. We also need to adapt our own human behaviors to the vagaries of AI. For now, every output needs human verification – we must ask ourselves, ‘does this look right?’, until we get to the point when we know the output is accurate and can be trusted. Threat actors are also exploiting AI to create more sophisticated forms of attack. Given AI’s dual nature as a force for both good and bad, the question going forward will be whether organizations’ AI protection can outpace hackers’ AI attacks,” commented Dr Purser.
The changing cyberthreat landscape
According to Dr Purser, the cyberthreat landscape continues to intensify, with the second half of 2023 witnessing increases in both the volume and sophistication of cyberattacks. She foresees novel attack pathways developing and predicts that socially engineered intrusions will grow in number.
“As I travel around the world, I observe that the most significant threats are those that are linked to the location of the victim. Often these attacks will originate from nearby countries with political tensions towards the victim’s nation, exploiting the digital realm to further their national and military objectives,” said Dr Purser.
There has been a tremendous rise in highly organized, well-funded cyber-gangs launching successful attacks across a wide variety of victims. Although each gang's attack signature may differ, the motivations are broadly either to steal money or to operate as hacktivists promoting political or philosophical views. More specific aims include generating funds for the gang's operations, undermining authority, or conducting mercenary operations in cyberspace.
“I expect we will see greater collaboration among more autocratic nations, enabling them to increase the sophistication and volume of attacks. We will also see the increased targeting of developing nations. These nations will accept the trade-off of cost-effective, advanced technology for communications like 5G and ports infrastructure with the high risk of future control of those systems by autocracies that strive to strictly control their citizens,” she added.
But, while the threat actors are becoming more organized, sophisticated, and better funded, Dr Purser also pointed out that the good actors are improving their skills when it comes to cyber resiliency, covering defense, response, and recovery.
In the future, she believes there will be increased collaboration not just between countries but also between businesses and governments, enabling a much more robust defense against cyberthreats. This will require organizations to be more open and communicative – whether they operate in the public or private domain – to improve cybersecurity.
A new era of cybercrime and protection
Andy Ng, vice president and managing director of Asia South and Pacific Region at Veritas Technologies says that in 2024, the first end-to-end AI-powered robo-ransomware attack will usher in a new era of cybercrime pain for organizations, and a brand new cyberthreat landscape to navigate.
Nearly two-thirds of organizations experienced a successful ransomware attack over the past two years in which an attacker gained access to their systems. While startling in its own right, this is even more troubling when paired with recent developments in AI. Ng pointed out that tools like WormGPT make it easy for attackers to improve their social engineering with AI-generated phishing emails that are much more convincing than those we’ve previously learned to spot.
“In 2024, cybercriminals will put AI into full play with the first end-to-end AI-driven autonomous ransomware attacks. Beginning with robocall-like automation, eventually, AI will be put to work in identifying targets, executing breaches, extorting victims and then depositing ransoms into attackers’ accounts, all with alarming efficiency and little human interaction,” said Ng.
While adaptive data protection can autonomously fight hackers, Ng believes that given AI’s dual nature as a force for both good and bad, the question going forward will be whether organizations’ AI-powered protection can evolve ahead of hackers’ AI-powered attacks.
“In the current hybrid work model, the growing data sprawl means more vulnerabilities with greater attack surface. Part of that evolution in 2024 will be the emergence of AI-driven adaptive data protection. AI tools will be able to constantly monitor for changes in behavioral patterns to see if users might have been compromised. If the AI tool detects unusual activity, it can respond autonomously to increase the level of protection. For example, initiating more regular backups, sending them to different optimized targets and overall creating a safer environment in defense against bad actors,” explained Ng.
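The adaptive behavior Ng describes can be illustrated with a minimal sketch. Nothing here reflects Veritas's actual implementation; the class name, the "files touched per hour" metric, the z-score threshold, and the backup intervals are all hypothetical choices made for illustration. The idea is simply: learn a baseline of normal user activity, and when an observation deviates sharply, autonomously tighten the backup schedule.

```python
from dataclasses import dataclass, field
from statistics import mean, pstdev

@dataclass
class AdaptiveProtector:
    """Hypothetical sketch of AI-driven adaptive data protection:
    watch a per-user activity metric (here, files touched per hour),
    learn a rolling baseline, and tighten the backup cadence
    autonomously when behavior looks compromised."""
    history: list = field(default_factory=list)
    backup_interval_hours: int = 24  # normal cadence

    def observe(self, files_touched: int) -> str:
        # Baseline is the last 50 readings taken BEFORE this one.
        baseline = self.history[-50:]
        self.history.append(files_touched)
        if len(baseline) < 10:           # not enough data to judge yet
            return "learning"
        mu = mean(baseline)
        sigma = pstdev(baseline) or 1.0  # avoid division by zero
        z = (files_touched - mu) / sigma
        if z > 3:                        # unusual spike in activity
            # Respond autonomously: back up far more often.
            self.backup_interval_hours = 1
            return "anomaly"
        return "normal"
```

In use, twenty quiet readings establish the baseline; a sudden spike (say, mass file modification consistent with ransomware encryption) trips the detector and drops the backup interval from 24 hours to 1.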
Ng also mentioned the need to have guardrails for generative AI use cases. As generative AI also carries heavy risks, especially with data privacy concerns, organizations that fail to put proper guardrails in place to stop employees from potentially breaching existing privacy regulations through the inappropriate use of generative AI tools are playing a dangerous game with potential detrimental impacts.
“Over the past 12 months, the average organization that experienced a data breach resulting in regulatory noncompliance shelled out more than US$336,000 in fines. Right now, most regulatory bodies are focused on how existing data privacy laws apply to generative AI, but as the technology continues to evolve, expect generative AI-specific legislation in 2024 that applies rules directly to these tools and the data used to train them,” he said.
Preparing for the worst
Given the potential challenges from AI and cyberthreats in 2024, Ng also highlighted three other areas that can impact organizations.
According to Ng, the share of data stored in the cloud versus on-premises has steadily grown, with an estimated 57% of data now in the cloud and 43% on-premises. That growth has come both from mature companies with on-premises foundations shifting to the cloud, and from newer companies building their infrastructure in the cloud from the ground up.
“IDC reports that around 70 to 80% of companies are repatriating at least some data back from the public cloud each year. But both categories of organizations are learning that, for all its benefits, the cloud is not ideally suited for all applications and data. Data security, scalability and the need to comply with the plethora of data sovereignty regulations across different jurisdictions are the key considerations for cloud repatriation. This is leading many companies that made the jump to the cloud to partially repatriate their data and cloud-native companies to supplement their cloud infrastructure with on-premises computing and storage resources. As a result, we’ll see hybrid cloud equilibrium in 2024 — for every organization that makes the move to the cloud, another will build an on-premises data center,” said Ng.
Ng also highlighted that organizations might start experiencing the repercussions of not hiring CISOs in 2023. This may impact many organizations and could be catastrophic for some.
“The role of chief information security officer (CISO) is often viewed as a poisoned chalice—a lofty position, but one that very often comes with heavy consequences. Recent headlines have highlighted several CISOs who were ultimately held responsible for security breaches, facing employment termination and even litigation. It is no surprise that many organizations struggled to fill vacant CISO roles in 2023. At the same time, data security is the top risk facing organizations globally today—outranking even economic uncertainty and competition—and the risk is rising,” Ng commented.
With phishing and ransomware continuing to be the two major cyberthreats faced by companies, in 2024, the consequences of vacant CISO roles will exact a heavy toll as cybercrime, such as ever-evolving ransomware threats, continues to target unprepared organizations. Ng added that the situation is so critical that 15% of executives and IT leaders think their organizations may not even survive to the end of 2024.
“The potential catastrophic outcomes associated with security breaches should provide an impetus for organizations to hire CISOs — all before it’s too late,” he added.
Lastly, Ng stated that the tool sprawl will force a “one in, one out” approach to enterprise security.
Estimates put the average enterprise security toolset at 60 to 80 distinct solutions, with some enterprises running as many as 140. Too much of a good thing is a bad thing: enterprise security tool sprawl leads to a lack of integration, alert fatigue, and management complexity. The end result is a weakened security posture, the exact opposite of what was intended.
“Recognizing this paradox, in 2024, many enterprises will hit their maximum capacity, forcing either a ‘one in, one out’ mindset to their enterprise security toolsets or consolidating to more comprehensive integrated solutions that bring together data protection, data governance, and data security capabilities,” Ng concluded.