President Biden Signs Executive Order to Regulate AI

Robot hands look into a crystal ball with silicon wafers (Image credit: Shutterstock)

President Biden signed an Executive Order Monday morning that regulates various aspects of artificial intelligence (AI) development and usage in the United States. The order spans a broad range of areas, including safety, privacy, equity, and innovation in AI. It requires AI developers to share how they test the safety of their products, calls for new legislation to protect personal data, and aims to prevent AI from being used in harmful ways. It also addresses support for workers and encourages research and innovation in AI.

To bolster AI safety and security, the directive mandates that creators of AI services disclose their safety assessments and other vital information to the federal government. This requirement, issued in accordance with the Defense Production Act, aims to address risks preemptively, ensuring that AI technologies are reliable, secure, and beneficial before their public release. The order also tasks institutions such as the National Institute of Standards and Technology (NIST) with creating rigorous standards and tools to fortify the integrity and reliability of AI systems.

The Executive Order underscores the importance of privacy, urging the advancement of technologies and methodologies that prioritize data protection. It calls upon Congress to enact bipartisan legislation focusing on data privacy, particularly emphasizing the protection of vulnerable populations such as children. This section also encourages the development and application of cryptographic tools and other privacy-enhancing technologies to safeguard individual data.

A significant portion of the directive is dedicated to promoting equity and civil rights, aiming to prevent discriminatory practices fueled by AI. It provides explicit guidelines to prevent algorithmic discrimination in various sectors such as housing and criminal justice. The order attempts to foster fairness and equity by directing actions against biases, injustices, and other forms of discrimination perpetuated by AI technologies.

Biden's new order also aims to promote innovation and competition in AI. It facilitates the involvement of highly skilled international talent in the U.S. AI sector and encourages the growth of AI research and startups. It also emphasizes collaboration, encouraging partnerships with a diverse array of countries to foster a global environment conducive to the responsible development and application of AI technologies.

The government’s role in AI application is also outlined in the executive order, promoting responsible and efficient utilization of AI in various federal agencies. The directive encourages the swift recruitment of AI professionals and emphasizes the importance of continuous learning and adaptation among government employees. It aims to modernize and enhance the government’s approach to AI, ensuring that it is used effectively, ethically, and responsibly in public services.

Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • Order 66
    AI definitely needs some kind of regulation. We don't want a Terminator situation at some point. I would say all AI needs some kind of failsafe programming that directs it to never harm humans and that can't be overridden.
    Reply
  • Evildead_666
    Order 66 said:
    AI definitely needs some kind of regulation. We don't want a Terminator situation at some point. I would say all AI needs some kind of failsafe programming that directs it to never harm humans and that can't be overridden.
    3 laws safe.
    Robotics and AI should go hand in hand.
    https://en.m.wikipedia.org/wiki/Three_Laws_of_Robotics
    Thanks Asimov ;)
    Reply
  • JamesJones44
    Order 66 said:
    AI definitely needs some kind of regulation. We don't want a Terminator situation at some point. I would say all AI needs some kind of failsafe programming that directs it to never harm humans and that can't be overridden.
    Sadly, not everyone is going to follow this, and humans will just go with whatever is perceived to work best. US users are not going to stick to US-only AI if history is any indication.
    Reply
  • domih
    Evildead_666 said:
    3 laws safe.
    Robotics and AI should go hand in hand.
    https://en.m.wikipedia.org/wiki/Three_Laws_of_Robotics
    Thanks Asimov ;)
    The interesting point is that 100% of Asimov's robot novellas are about how the three laws fail.
    Even the 4th law that came up later is not failure-proof.
    Reply
  • Eximo
    Asimov mentions it in his writings many times. How do you communicate to the AI the concept of harm? What constitutes an order? Morality? Etc. Language is one thing, hard logic is another.

    It's also why pretty much all visual media based on Asimov's writings aren't done well: they can't translate those esoteric discussions and concepts to film. Bicentennial Man is the only one that really adheres to it.
    Reply
  • bit_user
    Though I have yet to dig into the details much beyond what's in this article, here's my early take:
    • Well-intentioned.
    • Hard to enforce.
    • Goes a bit overboard, slowing down innovation & making enemies of developers rather than keeping them on-side.
    I guess they had to do something, but this doesn't quite seem to get the balance right between protections and innovation.

    Perhaps they'd have done better by having NIST establish a rating scale & certification standard for the various issues they're concerned about. Then, just require disclosure and help third-party labs set up operations to issue the NIST certifications.
    Reply
  • bit_user
    domih said:
    The interesting point is that 100% of Asimov's robot novellas are about how the three laws fail.
    Even the 4th law that came up later is not failure-proof.
    I had a minor epiphany, recently. A fundamental problem with the current approach to AI is that we're making it in our own image (sound familiar?). Because it's trained on the products of humans and designed to function in a world populated by us, of course it's going to inherit some of our vices and failings.

    Perhaps it needn't be this way, but the more AI understands us and how to deal with us, the more like us it'll tend to become. So, at some level, it's a tradeoff between utility and safety. You can't have an AI that truly understands you, that doesn't also understand how to manipulate you.

    At the risk of sounding a bit cheeky, perhaps my observation also points the way to a solution: invent a religion where humans are to be revered. Of course, it probably won't take long for half-decent AIs to see right through such a farce.
    Reply
  • rambo919
    How to reply to a political news item without doing a political post..... that is the question.
    Reply
  • rambo919
    bit_user said:
    I had a minor epiphany, recently. A fundamental problem with the current approach to AI is that we're making it in our own image (sound familiar?). Because it's trained on the products of humans and designed to function in a world populated by us, of course it's going to inherit some of our vices and failings.

    Perhaps it needn't be this way, but the more AI understands us and how to deal with us, the more like us it'll tend to become. So, at some level, it's a tradeoff between utility and safety. You can't have an AI that truly understands you, that doesn't also understand how to manipulate you.
    No matter how this goes, it's going to end up with humans playing god.

    bit_user said:
    At the risk of sounding a bit cheeky, perhaps my observation also points the way to a solution: invent a religion where humans are to be revered. Of course, it probably won't take long for half-decent AIs to see right through such a farce.
    We already have a few, all based on Humanism or Socialism. What you are advocating for would probably be some form of Humanistic Technocracy.... Technocracy being a branch of Socialism with Socialism being a kind of Humanism..... central planning of minds makes things very complicated.
    Reply
  • rambo919
    The real question for me is how they are going to pivot this for increased centralized control the way they always do... and how it's going to impact the future of open source. These things only start to show their true colours after a few years of implementation creep.
    Reply