Recently, National Economic Council Director Kevin Hassett revealed that the White House is contemplating an executive order that would regulate and evaluate AI models much as the Food and Drug Administration evaluates new drugs and food products.
This is a good idea that deserves serious consideration. Here is why.
Frontier models are automating complex, multistep cyberattacks at ‘machine speed.’
There are several major concerns with AI cybersecurity that haven’t been fully addressed.
There is the use of AI to attack cyber assets (adversarial AI), and there are attacks on AI tools themselves, such as chatbots and voicebots, which AI can carry out with remarkable speed and ingenuity (AI security).
There is the use of AI in phishing attacks, and there are deepfakes. All of these pose grave threats to American businesses and the federal government, with the potential to affect financial information, privacy, personal data, trade secrets, and national security.
The CEO of CrowdStrike recently sounded the alarm on this issue.
“We’re seeing an explosion of new threat actors that may not have all the superior skills to figure this out, but they can use generative AI to advance their attacks very quickly and to make them scalable. There’s going to be a greater proliferation of adversaries than we’ve ever seen. And that is just going to grow, probably exponentially.”
A recent report by the National Counterintelligence and Security Center highlighted findings from the AI Security Institute showing that frontier models are automating complex, multistep cyberattacks at “machine speed.”
Some models already match the pace of human experts at a fraction of the cost, and other models and systems outpace humans entirely. The threat is accelerating as both attackers’ expertise and the models’ capabilities expand; Anthropic, for example, recently announced that its latest models can find vulnerabilities even in “well-tested” systems.
Another report by ReliaQuest described how a new malware strain called “DeepLoad” can use AI-enabled obfuscation to bypass traditional static defenses in enterprise environments.
These kinds of reports are useful, but it is difficult for us mere humans to keep up with the new daily threats. We need a machine-readable database, much like the computer virus databases that have existed for decades.
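To make the idea concrete, here is a minimal sketch of what one entry in such a machine-readable threat database might look like. The field names, the category labels, and the “AIT-2025-0001” identifier scheme are all hypothetical, loosely modeled on CVE/NVD-style vulnerability records; they are illustrations, not a proposed standard.

```python
# Hypothetical sketch: a machine-readable AI-threat record that any
# vendor's tooling could ingest automatically. Field names and the
# "AIT-" identifier scheme are invented for illustration.
import json
from dataclasses import dataclass, asdict, field

@dataclass
class AIThreatRecord:
    threat_id: str                 # hypothetical CVE-style identifier
    category: str                  # e.g. "prompt-injection", "model-evasion"
    description: str
    affected_systems: list         # classes of systems known to be affected
    mitigations: list = field(default_factory=list)

record = AIThreatRecord(
    threat_id="AIT-2025-0001",
    category="prompt-injection",
    description="Indirect prompt injection via retrieved documents.",
    affected_systems=["LLM chatbots with retrieval augmentation"],
    mitigations=["Isolate untrusted content from system instructions"],
)

# Serialize to JSON so defenses can consume new threats at machine speed,
# the way antivirus engines consume signature updates today.
payload = json.dumps(asdict(record), indent=2)
print(payload)
```

The point of the structured format is that mitigation tooling, not just human analysts, can subscribe to the feed and act on new entries as they are published.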
The great variety of threats invented daily is extremely concerning. While the Open Worldwide Application Security Project (OWASP) AI Top 10 list is a useful start, it falls far short of what today’s systems need to address emerging threats.
Our federal government must prioritize a framework solution immediately.
The technology industry has databases of cyber threats, but we also need to share information on how to mitigate them. This can be deeply technical and require specialized knowledge, not just of large language models but of other complicated technologies like audio signal processing.
The National Institute of Standards and Technology, a non-regulatory federal agency within the Department of Commerce, has been a leader in providing recommendations for responsible AI; however, it needs greater enforcement authority.
Governments are usually slow to update anything, as they should be. Legislative branches are even slower. Congress should not be writing detailed technical metrics and methodologies for cybersecurity.
One solution: Congress could empower a regulatory agency to monitor and enforce AI safety standards. The FDA offers a rough analogy. It protects public health by ensuring the safety and security of food, drugs, biological products, and medical devices, regulating products by reviewing research and conducting inspections.
Congress should address the need for an AI cybersecurity framework by statutorily tasking NIST with creating and maintaining a centralized AI cybersecurity threat database to which all software vendors can (and should) submit new threats.
While NIST would be a great place to centralize communications of the resources, it is the private sector that will provide most of the intelligence around what the threats are and how to mitigate them.
After all, NIST is already mandated to provide similar resources as part of the Secure Software Development Framework under federal cybersecurity policy and Executive Order 14028, and through the National Vulnerability Database.
We need a framework that not only keeps up with attacks, but is ahead of the antagonists in the AI war, no matter who they are or what their intentions may be. A NIST-led national framework would ensure that Americans, businesses, and the federal government can be protected from the lightning-fast, ever-advancing cybersecurity threats.
This article was originally published by RealClearPolitics and made available via RealClearWire.