Two of the world’s largest AI companies are warning the public that cybercriminals are currently using AI tools and large language models (LLMs), such as OpenAI’s ChatGPT, to boost their attacks.
Furthermore, Microsoft and OpenAI said they’ve witnessed state-affiliated actors from Russia, North Korea, China and Iran trying to leverage LLMs like ChatGPT to find targets and improve their cyberattacks.
“Our analysis of the current use of LLM technology by threat actors revealed behaviors consistent with attackers using AI as another productivity tool on the offensive landscape,” said Microsoft in a new AI security report released Wednesday in partnership with OpenAI.
OpenAI said in a blog post Wednesday that it builds generative AI and AI tools that improve lives and help solve complex challenges “but we know that malicious actors will sometimes try to abuse our tools to harm others, including in furtherance of cyber operations.”
Here are the five most eye-popping results and statements in the new AI cybersecurity report from Microsoft Threat Intelligence and OpenAI.
No. 1: North Korea, China, Russia And Iran Are Leveraging OpenAI
In partnership with Microsoft, OpenAI said it has disrupted five state-affiliated malicious actors.
This included two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated threat actor known as Forest Blizzard.
“These actors generally sought to use OpenAI services for querying open-source information, translating, finding coding errors and running basic coding tasks,” said OpenAI.
For example, China’s Charcoal Typhoon used OpenAI services to research various companies and cybersecurity tools, debug code and generate scripts, and create content likely for use in phishing campaigns.
Another example is Iran’s Crimson Sandstorm, which used OpenAI for scripting support related to app and web development, generating content likely for spear-phishing campaigns, and researching common ways malware could evade detection.
No. 2: Microsoft And OpenAI Have Not Yet Observed ‘Unique’ AI Attacks
Although cybercriminals are leveraging Microsoft’s and OpenAI’s artificial intelligence technology to enhance their attacks, the two companies said they haven’t witnessed a unique AI-enabled attack.
“Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors’ usage of AI,” said Microsoft.
Furthermore, the AI providers said they’re committed to using generative AI to thwart attackers.
“We are also deeply committed to using generative AI to disrupt threat actors and leverage the power of new tools, including Microsoft Copilot for Security, to elevate defenders everywhere,” Microsoft said.
No. 3: Microsoft Unveils New AI Principles
Microsoft Wednesday unveiled new principles that will shape its AI policy and actions to mitigate the risk of its AI tools and APIs being used by cybercriminals.
Microsoft said it will “disrupt” the activities of cyberattackers by “disabling the accounts used, terminating services or limiting access to resources.”
When a threat actor uses another service provider’s AI, AI APIs, services or systems, Microsoft will promptly notify that provider and share relevant data so it can take action.
In addition, Microsoft said it will collaborate with other stakeholders to regularly exchange information about detected threat actors’ use of AI. “This collaboration aims to promote collective, consistent and effective responses to ecosystemwide risks,” Microsoft said.
No. 4: GPT-4 Offers Only ‘Limited’ Capabilities For Hackers
GPT-4 is OpenAI’s newest LLM, which the company said can solve difficult problems with greater accuracy.
OpenAI said the findings in its report are consistent with previous assessments conducted in partnership with external cybersecurity experts, which found “that GPT-4 offers only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI powered tools.”
GPT-4 stands for Generative Pre-trained Transformer 4.
No. 5: OpenAI Is Now Taking A ‘Multipronged Approach’ To AI Safety
In response to these threats, OpenAI said it is taking a “multipronged approach” to combating malicious state-affiliated actors who use its AI products and services.
One key aspect is monitoring and disrupting state-affiliated threats. “Upon detection, OpenAI takes appropriate action to disrupt their activities, such as disabling their accounts, terminating services or limiting access to resources,” the company said.
Another approach is collaborating with the rest of the AI ecosystem. “OpenAI collaborates with industry partners and other stakeholders to regularly exchange information about malicious state-affiliated actors’ detected use of AI,” the company said.
The other two approaches are public transparency and iterating on safety mitigations.
“We take lessons learned from these actors’ abuse and use them to inform our iterative approach to safety,” said OpenAI. “Understanding how the most sophisticated malicious actors seek to use our systems for harm gives us a signal into practices that may become more widespread in the future and allows us to continuously evolve our safeguards.”