AI researchers discuss risks and potential regulations — putting the brakes on compute hardware is one suggested approach


Researchers from OpenAI and several universities have banded together to release a 104-page PDF that encourages regulating AI compute by regulating the hardware itself, including the potential use of kill switches where AI is being put to malicious purposes. The PDF was originally published online by the University of Cambridge in a Valentine's Day post.

The PDF, titled "Computing Power and the Governance of Artificial Intelligence," discusses how compute (i.e., GPU power) is leveraged for AI workloads. It then observes that because the AI hardware supply chain is highly concentrated among just a few vendors, applying regulations to that hardware should be a lot easier.

In the "Risks of Compute Governance and Possible Mitigations" section, the researchers detail the potential downsides that come with regulating compute before recommending mitigations. We'll summarize some key points from that section below.

  • Threats to personal privacy — Increased monitoring and reporting of AI hardware could expose private information, for example through "required reporting from cloud providers on customer usage".
  • Opportunities for leakage of sensitive strategic and commercial information — Building on the point above, sharing detailed information with policymakers (including matters that would normally be protected by an NDA) makes leaks of sensitive information more likely. Any such policies, the paper argues, "must therefore be carefully scoped and implemented with information security in mind".
  • Negative economic impacts — This is one of the better-known and more controversial aspects of AI: its potential to destabilize the labor markets that make use of it. As the paper notes, past research suggests the digital economy accounts for roughly 10% of United States GDP.
  • Risks from centralization and concentration of power — As the paper observes, AI compute regulation also carries considerable political weight: "With increased government control over AI-relevant compute, powerful actors—including corporations—may try to wield the power of the state for their own ends."
  • Low-compute specialized models with dangerous capabilities — Finally, while current export controls already restrict high-end AI hardware (like the RTX 4090) from reaching markets such as China, those limits won't do much against malicious actors who can work within them. Even protein folding (which could be used to engineer pathogens) can be performed with a low-compute model. The section concludes, "Regulation of such low-compute models will require other policy approaches."

As far as potential solutions go, the paper proposes a fairly wide range of approaches, along with the concerns each one raises. One such solution is a global registry of AI chips, each with a unique identifier, which could help curb smuggling and illegitimate use.
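
To make the registry idea concrete, here is a minimal sketch in Python of what chain-of-custody tracking for uniquely identified chips might look like. Everything here (the ChipRegistry class, its fields, and its methods) is our own illustration; the paper proposes the concept, not an implementation.

```python
from dataclasses import dataclass, field


@dataclass
class ChipRecord:
    chip_id: str                 # unique identifier assigned at manufacture
    vendor: str                  # originating manufacturer
    owner: str                   # current registered owner
    transfer_log: list[str] = field(default_factory=list)


class ChipRegistry:
    """Hypothetical global ledger of AI accelerators and their custody chains."""

    def __init__(self) -> None:
        self._records: dict[str, ChipRecord] = {}

    def register(self, chip_id: str, vendor: str, owner: str) -> None:
        """Record a chip at manufacture time; duplicate IDs are rejected."""
        if chip_id in self._records:
            raise ValueError(f"chip {chip_id} is already registered")
        self._records[chip_id] = ChipRecord(chip_id, vendor, owner)

    def transfer(self, chip_id: str, new_owner: str) -> None:
        """Log a change of ownership; an unknown ID is a smuggling signal."""
        record = self._records.get(chip_id)
        if record is None:
            raise KeyError(f"chip {chip_id} not found in registry")
        record.transfer_log.append(f"{record.owner} -> {new_owner}")
        record.owner = new_owner


registry = ChipRegistry()
registry.register("GPU-0001", "ExampleCorp", "CloudProviderA")
registry.transfer("GPU-0001", "ResearchLabB")   # custody chain is now auditable
```

A chip that never appears in such a registry, or whose recorded owner doesn't match the party actually holding it, would give auditors a concrete signal of smuggling or illegitimate use.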

"Kill switches," which could be used to remotely deactivate AI hardware being used for malicious purposes, are also discussed as a possible solution within the paper. Though, solutions like this also pose their own risk, since a cybercriminal gaining control of that kill switch could use it to disable legitimate users. Also, it assumes the AI hardware will be accessible to outside entities, which may not be true.

As the technology and policy around artificial intelligence continue to evolve, time will tell just how power over this supposed new frontier ends up consolidating. Quite a few AI experts, including OpenAI researchers, are evidently hoping more of that power lands in the hands of regulators, given the dangers of the alternative.

  • usertests
    AI for big corporations, but not for the peasants.

    These people are living in fear. Don't let them gimp consumer hardware.