UK Parliament: Ethics Must Take Center Stage In AI Development

A report by a House of Lords (the upper chamber of Parliament) committee recognized that AI advancements will not come without risks and that, as such, ethics will need to play a vital role in the development of AI.

AI For The Public’s Benefit

The House of Lords report said that the number one priority in developing AI should be that it is developed for the common good and the benefit of humanity. This seems to mirror UK-based DeepMind’s own first AI principle. However, Google, now DeepMind's sister company under the Alphabet group, may be of a different opinion, as it seeks to help the U.S. government create autonomous drones.

Another principle for an AI code established by the UK report is that AI should never have the autonomous power to “hurt, destroy or deceive human beings.” This also seems contrary to the position of the U.S. government, which is looking to build drones that will decide on their own when and whom to kill.

Another important principle is that AI should not diminish the data and privacy rights of individuals, families, or communities. This principle also seems antithetical to how big tech companies have used AI so far, collecting as much user data as possible. The committee report also warned against the monopolization of data by big tech companies.

Two other principles from the report state that AI should be intelligible and fair, and that citizens should have the right to be educated and to flourish mentally alongside AI.

UK Committee’s Recommendations

To avoid data monopolization by big companies, governments will have to encourage more competition for AI solutions. Additionally, big companies will have to be investigated over how they use data. The UK Parliament also believes that liability in cases of AI harm (such as self-driving car crashes caused by bad software) is not yet properly defined and that new laws may be needed to address this.

The UK Parliament also drew attention to the lack of transparency when AI solutions are used. The UK committee believes that people should know when AI was used to make significant or sensitive decisions.

The House of Lords committee also warned against biases in AI and recommended that AI specialists be recruited from diverse backgrounds.

The committee also said that individuals should have greater control over their data and how it’s used. The way companies collect data will also need to change through legislation, new frameworks, and concepts such as data portability and data trusts.

Lucian Armasu
Lucian Armasu is a Contributing Writer for Tom's Hardware US. He covers software news and the issues surrounding privacy and security.
  • hannibal
    The problem is that if it is possible to make "evil," unethical AI, someone will do it. Because of stupidity, or because the maker wants to harm someone...

    Biased AI can produce more money than an ethical one, and so on...

    What is the method of controlling the making of AIs? An independent UK department? Not likely to happen, because AI is the property of company X or of some intelligence office of a big country that has veto rights.

    We have already seen that ethics is the last thing a company normally thinks about when planning its strategies and working habits.
  • Co BIY
    Human intelligence seems to use a lot of thought on how to break the rules without getting caught. Hard to see how AI will be built to avoid that.
  • therealduckofdeath
    Google and Alphabet being ethical companies? Have you seen the toxic cesspools they call social networks? If it'll increase their profits, they'll make it, evil or not.
  • stdragon
    "Three laws of robotics" is counter to interests of the military industrial complex (regardless of nation).

    Skynet was prophetic, and we will all be judged come time. Meanwhile, I'll enjoy using my 1960 Sunbeam toaster; it will never talk back.
  • georgex45
    The biggest problem is that most people don't know what ethics is, and there is no absolute standard for 'ethics'. A frightening number of politicians and business people are probably included in that, but don't worry, they'll sort it out.