The U.S. will have to decide how openly it wants to allow public access to artificial intelligence (AI), potentially impacting overall data protection policies, after Microsoft revealed state actors from rival nations used the tech to train their operatives.
‘We’re either going to have to decide whether we’re going to keep these things open and easy to access for everybody, which means for bad and good actors, or we’re going to take a different tack,’ Phil Siegel, founder of the AI non-profit Center for Advanced Preparedness and Threat Response Simulation, told Fox News Digital.
OpenAI, in a blog post published Wednesday, identified five state-affiliated ‘malicious’ actors: Chinese-affiliated Charcoal Typhoon and Salmon Typhoon, Iranian-affiliated Crimson Sandstorm, North Korean-affiliated Emerald Sleet and Russian-affiliated Forest Blizzard.
The post claimed the groups used OpenAI services to ‘query open-source information, translate, find coding errors, and run basic coding tasks.’ The two Chinese-affiliated groups, for example, allegedly translated technical papers, debugged code, generated scripts and looked at how to hide processes in different electronic systems.
In response, OpenAI proposed a multipronged approach to combating such malicious use of the company’s tools, including ‘monitoring and disrupting’ malicious actors with new technology to identify and cut off their activities, greater cooperation with other AI platforms to catch malicious activity, and improved public transparency.
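OpenAI has not published how such monitoring works under the hood, but in broad strokes a provider-side ‘monitor and disrupt’ loop could resemble the hypothetical Python sketch below. Every name in it, from the indicator weights to the ABUSE_THRESHOLD cutoff and the disable_account action, is an illustrative assumption, not anything OpenAI has described.

```python
# Hypothetical sketch of a provider-side "monitor and disrupt" loop.
# All names (AccountActivity, ABUSE_THRESHOLD, disable_account) are
# illustrative assumptions -- OpenAI has not published its implementation.
from dataclasses import dataclass

# Indicator weights loosely based on the activity OpenAI described:
# translating technical papers, generating scripts, and researching
# ways to hide processes on a target system.
THREAT_INDICATORS = {
    "process_hiding_query": 0.9,
    "bulk_script_generation": 0.5,
    "recon_translation": 0.3,
}
ABUSE_THRESHOLD = 1.0  # assumed cutoff, not a real OpenAI parameter

@dataclass
class AccountActivity:
    account_id: str
    events: list[str]  # tagged behaviors observed for this account

def flag_score(activity: AccountActivity) -> float:
    """Sum indicator weights for every suspicious event on the account."""
    return sum(THREAT_INDICATORS.get(e, 0.0) for e in activity.events)

def disable_account(account_id: str) -> None:
    """Stand-in for the provider's real enforcement action."""
    print(f"terminating access for {account_id}")

def monitor_and_disrupt(accounts: list[AccountActivity]) -> None:
    for activity in accounts:
        if flag_score(activity) >= ABUSE_THRESHOLD:
            disable_account(activity.account_id)

# Example: an account mixing reconnaissance translation with
# process-hiding queries crosses the assumed threshold; one that only
# generates scripts does not.
monitor_and_disrupt([
    AccountActivity("acct-1", ["recon_translation", "process_hiding_query"]),
    AccountActivity("acct-2", ["bulk_script_generation"]),
])
```

Scoring at the account level, rather than blocking individual prompts, loosely matches the account-level cutoffs the companies described.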
‘As is the case with many other ecosystems, there are a handful of malicious actors that require sustained attention so that everyone else can continue to enjoy the benefits,’ OpenAI wrote. ‘Although we work to minimize potential misuse by such actors, we will not be able to stop every instance.’
‘By continuing to innovate, investigate, collaborate, and share, we make it harder for malicious actors to remain undetected across the digital ecosystem and improve the experience for everyone else,’ the company insisted.
Siegel argued that these gestures, while well-meaning, ultimately will not prove effective because the infrastructure needed to give them real impact does not yet exist.
‘We’re going to have to decide whether this is a fully open system … or are we going to have it be like the banking system where there’s all these gates in the system that stop these things from happening,’ Siegel said.
‘I am skeptical because the banks have a whole set of infrastructure and regulations behind them to make these things happen … and we don’t have that yet,’ he explained. ‘We’re thinking about it and working on it, but until that stuff is in place – this isn’t Microsoft’s fault or OpenAI’s fault or Google’s fault.’
‘We just have to move quickly to make sure that this stuff gets put in place so that they can know how they’re going to implement these types of things,’ he added.
Microsoft, in a separate blog post, argued for additional measures – namely, ‘notification’ of other AI service providers to flag relevant activity and data so they can immediately act on the same users and processes.
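Microsoft’s post does not specify a format for these notifications, but the idea is a structured alert one provider can send another. A minimal sketch, assuming an invented JSON schema and a hypothetical build_notification helper, might look like this:

```python
# Hypothetical sketch of the kind of cross-provider "notification"
# Microsoft describes: a structured alert another AI service could use
# to act on the same accounts. The field names and build_notification
# helper are assumptions; Microsoft has not published a schema.
import json
from datetime import datetime, timezone

def build_notification(actor: str, indicators: list[dict]) -> str:
    """Package indicators tied to a tracked actor into a shareable alert."""
    payload = {
        "reporting_provider": "example-ai-provider",  # assumed identifier
        "threat_actor": actor,                        # e.g. "Forest Blizzard"
        "observed_at": datetime.now(timezone.utc).isoformat(),
        "indicators": indicators,
    }
    return json.dumps(payload, indent=2)

# Example: flagging account hashes and behavior seen from one actor so a
# peer provider can check for the same users and processes on its side.
alert = build_notification(
    "Forest Blizzard",
    [
        {"type": "account_email_hash", "value": "placeholder-hash"},
        {"type": "behavior", "value": "bulk translation of exploit writeups"},
    ],
)
print(alert)
```

Any consistently fielded, machine-readable format would do; the point is that a receiving provider can match the indicators against its own user base without manual back-and-forth.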
Through ‘complementary defenses,’ Microsoft and OpenAI pledged to protect valuable AI systems and, with assistance from MITRE, to develop countermeasures in the ‘evolving landscape of AI-powered cyber operations.’
‘The threat ecosystem over the last several years has revealed a consistent theme of threat actors following trends in technology in parallel with their defender counterparts,’ Microsoft acknowledged.
Siegel suggested that the processes described would only account for some of the activity the malicious actors pursued – again, because current systems cannot catch the full array of activity – since the hackers can use spycraft and even ‘other forms of technology’ to achieve their goals.
‘There’s just work that has to be done, and I’m skeptical that Microsoft and OpenAI can go and do that on their own without help from the government or other agencies that have already worked on technologies like that,’ Siegel said.
The Department of Homeland Security did not respond to a Fox News Digital request for comment by the time of publication.