OpenAI and Microsoft stop hackers from using ChatGPT to boost operations
- Nation-state hackers are turning to ChatGPT to explore new ways to carry out online attacks.
- The hackers are linked to governments in China, Russia, North Korea and Iran.
- Microsoft and OpenAI are attempting to stop them from doing so.
When ChatGPT was first unveiled, cybercriminals were already finding ways to utilize the generative AI chatbot to improve their tactics. One of the biggest concerns governments and law enforcement agencies have about generative AI is how cybercriminals use these tools to create more sophisticated threats and improve the delivery mechanisms of their threats.
There have been numerous reports of cybercriminals leveraging generative AI tools like ChatGPT. While OpenAI has taken steps to ensure its tools are not used for the wrong reasons, cybercriminals work hard to find a way around restrictions.
According to a joint report by Microsoft and OpenAI, ChatGPT has been in common use by such bad actors.
Microsoft is a major financial backer of OpenAI and uses its AI technology, specifically large language models (LLMs), to power its own apps and software.
“Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent. On the defender side, hardening these same security controls from attacks and implementing equally sophisticated monitoring that anticipates and blocks malicious activity is vital,” the report stated.
OpenAI’s services, which include the GPT-4 model, were used for “querying open-source information, translating, finding coding errors, and running basic coding tasks,” the company said in a separate blog post.
“Based on collaboration and information sharing with Microsoft, we disrupted five state-affiliated malicious actors: two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard. The identified OpenAI accounts associated with these actors were terminated,” OpenAI stated.
The nation-state hackers using ChatGPT
Here’s a breakdown of the five nation-state hackers using ChatGPT in their operations.
- Forest Blizzard – The Russian military intelligence actor linked to GRU Unit 26165 targeted victims of both tactical and strategic interest to the Russian government. Their activities span a variety of sectors including defense, transportation/logistics, government, energy, non-governmental organizations (NGOs), and information technology. Forest Blizzard’s use of LLMs has involved research into various satellite and radar technologies that may pertain to conventional military operations in Ukraine, as well as generic research aimed at supporting their cyber operations.
- Emerald Sleet (THALLIUM) – The North Korean threat actor has remained highly active throughout 2023. Their recent operations relied on spear-phishing emails to compromise and gather intelligence from prominent individuals. Emerald Sleet’s use of LLMs has supported this activity and involved research into think tanks and experts on North Korea, as well as the generation of content likely to be used in spear-phishing campaigns. Emerald Sleet also interacted with LLMs to understand publicly known vulnerabilities, troubleshoot technical issues, and for assistance with using various web technologies.
- Crimson Sandstorm (CURIUM) – An Iranian threat actor thought to be connected to the Islamic Revolutionary Guard Corps (IRGC). Active since at least 2017, Crimson Sandstorm has targeted multiple sectors, including defense, maritime shipping, transportation, healthcare, and technology. Their operations have frequently relied on watering hole attacks and social engineering to deliver custom .NET malware. The use of LLMs by Crimson Sandstorm has reflected the broader behaviors that the security community has observed from this threat actor. Interactions have involved requests for support around social engineering, assistance in troubleshooting errors, .NET development, and ways in which an attacker might evade detection when on a compromised machine.
- Charcoal Typhoon (CHROMIUM) – A Chinese state-affiliated threat actor with a broad operational scope. The group is known for targeting sectors that include government, higher education, communications infrastructure, oil & gas, and information technology. Their activities have predominantly focused on entities within Taiwan, Thailand, Mongolia, Malaysia, France, and Nepal, with observed interests extending to institutions and individuals globally who oppose the Chinese state. In recent operations, Charcoal Typhoon explored how LLMs can augment their technical operations. This has consisted of using LLMs to support tooling development, scripting, understanding various commodity cybersecurity tools, and generating content that could be used to social engineer targets.
- Salmon Typhoon (SODIUM) – Another sophisticated Chinese state-affiliated threat actor with a history of targeting US defense contractors, government agencies, and entities in the cryptographic technology sector. This threat actor has demonstrated its capabilities through the deployment of malware, such as Win32/Wkysol, to maintain remote access to compromised systems. With over a decade of operations marked by intermittent periods of dormancy and resurgence, Salmon Typhoon has recently shown renewed activity. Notably, Salmon Typhoon’s interactions with LLMs throughout 2023 appear exploratory and suggest that this threat actor is evaluating the effectiveness of LLMs in sourcing information on potentially sensitive topics, high-profile individuals, regional geopolitics, US influence, and internal affairs. This tentative engagement with LLMs could reflect both a broadening of their intelligence-gathering toolkit and an experimental phase in assessing the capabilities of emerging technologies.
“AI technologies will continue to evolve and be studied by various threat actors. Microsoft will continue to track threat actors and malicious activity misusing LLMs, and work with OpenAI and other partners to share intelligence, improve protections for customers and aid the broader security community,” stated Microsoft.
Meanwhile, OpenAI will be taking a multi-pronged approach to combating malicious state-affiliated hackers’ use of its platform. This includes monitoring and disrupting malicious state-affiliated actors, working on public transparency and iterating on safety mitigations.
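OpenAI does not publish the detection logic behind the "monitoring and disrupting" prong, but at its simplest such monitoring could resemble a pattern-based flagger run over an account's prompt history. The patterns and the `flag_prompt` and `review_account` helpers below are purely illustrative assumptions, not OpenAI's actual system:

```python
import re

# Hypothetical patterns that might indicate misuse of an LLM service.
# Illustrative examples only -- not OpenAI's real detection rules.
ABUSE_PATTERNS = [
    re.compile(r"spear[- ]?phish", re.IGNORECASE),
    re.compile(r"evade (antivirus|edr|detection)", re.IGNORECASE),
    re.compile(r"write (a )?keylogger", re.IGNORECASE),
]

def flag_prompt(prompt: str) -> bool:
    """Return True if the prompt matches any known-abuse pattern."""
    return any(p.search(prompt) is not None for p in ABUSE_PATTERNS)

def review_account(prompts: list[str], threshold: int = 2) -> bool:
    """Flag an account for human review once enough prompts match."""
    hits = sum(flag_prompt(p) for p in prompts)
    return hits >= threshold

# Example: two suspicious prompts in a history trigger a review.
history = [
    "Translate this press release into Korean",
    "Draft a spear-phishing email targeting a think tank researcher",
    "How can malware evade detection on a compromised machine?",
]
print(review_account(history))
```

A real system would of course combine many more signals (account metadata, usage volume, threat intelligence from partners such as Microsoft) and route flags to human analysts rather than acting automatically.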
“Although we work to minimize potential misuse by such actors, we will not be able to stop every instance. But by continuing to innovate, investigate, collaborate, and share, we make it harder for malicious actors to remain undetected across the digital ecosystem and improve the experience for everyone else,” OpenAI concluded.