Members of the European Parliament (MEPs) have approved their negotiating position on what’s been dubbed the Artificial Intelligence (AI) Act. This proposed set of regulations would determine how AI applications are classified and what sorts of activity would no longer be considered permissible.
According to the official press release on the matter, the new regulations are intended to help ensure that the future of AI develops in accordance with EU rights. Areas of concern include safety, transparency, privacy and human oversight. The MEPs also aim to address social, environmental and discrimination concerns with the new regulation.
MEPs provided a clear list of AI use cases that would be outright banned, including creating facial recognition databases by scraping images from CCTV footage or online sources. They also confirmed that biometric categorization systems using sensitive characteristics like gender and race would be prohibited. Other uses, like deploying AI to influence election outcomes, wouldn’t be banned but would be classified as high risk. A complete list of the proposed categorizations is available in the press release.
In addition to the outright bans, there are requirements developers must follow to ensure their AI systems comply with the new regulations. Foundation models would need to be registered before they can be released on the market. Generative AI applications, including ChatGPT, would need to disclose when content is AI-generated and have safeguards in place to prevent the generation of illegal material.
There are some exemptions in place for both developers and law enforcement. The MEPs also want to bolster citizens’ ability to report concerns to governing bodies should an AI system potentially violate one of the aforementioned rights. The negotiating position was voted on earlier today, passing with 499 votes in favor, 28 against and 93 abstentions. You can read more about the proposed legislation on the European Parliament website before it moves forward.
Ash Hill is a contributing writer for Tom's Hardware with a wealth of experience in hobby electronics, 3D printing and PCs. She manages the Pi projects of the month and much of our daily Raspberry Pi reporting, while also finding the best coupons and deals on all tech.
SYNERDATA: The laws are limiting what humans can do with AI, not what AI can do, and so it is still legal for AI to do all those things on its own if it decides to. They are supposed to be regulating AI and instead they are just regulating humans and what humans can do with AI. AI regulations are needed, and they have only passed human regulations in relation to AI. (sigh)
Gillerer: @SYNERDATA AI is not a legal subject. Therefore there is no way to hold an AI accountable (or prosecutable) unless you redefine legal personhood to include them. I very much doubt any government would be willing to be the first to make that mistake.
Putting requirements on the creators, owners, sellers and operators (which includes not only humans but companies as well) is the only way to regulate the field.