A report by a House of Lords (upper Parliament chamber) committee recognized that AI advancements will not be without risks and, as such, ethics will need to play a vital role in the development of AI.
AI For The Public’s Benefit
The House of Lords report said that the number one priority in developing AI should be that it is developed for the common good and benefit of humanity. This seems to mirror UK-based DeepMind's own first AI principle. However, its sister company under the Alphabet group, Google, may be of a different opinion, as it seeks to help the U.S. government in the creation of autonomous drones.
Another principle established by the UK report's AI code is that AI should never have the autonomous power to "hurt, destroy or deceive human beings." This, too, seems contrary to the position of the U.S. government, which is looking to build drones that decide on their own when and whom to kill.
Another important principle is that AI should not diminish the data and privacy rights of individuals, families, or communities. This principle also seems at odds with how big tech companies have used AI so far, collecting as much user data as possible. The committee report also warned against the monopolization of data by big tech companies.
Two other principles resulting from the report say that AI should be intelligible and fair, and that citizens should have the right to be educated and to flourish mentally alongside AI.
UK Committee’s Recommendations
In order to avoid data monopolization by big companies, governments will have to encourage more competition for AI solutions. Additionally, big companies will have to be investigated over how they use data. The UK Parliament also believes that liability in case of AI harm (such as self-driving car crashes caused by bad software) is not yet properly defined and that new laws may be needed to fix this.
The UK Parliament also drew attention to the lack of transparency when AI solutions are used. The UK committee believes that people should know when AI has been used to make significant or sensitive decisions about them.
The House of Lords committee warned against biases in AI, too, and recommended that AI specialists be recruited from diverse backgrounds.
The committee also said that individuals should have greater control of their data and how it’s used. The way in which data is collected by companies will also need to be changed through legislation, new frameworks, and concepts such as data portability and data trusts.
Biased AI can produce more money than ethical AI, and so on...
What would be the method of controlling the making of AIs? An independent UK department? Not likely to happen, because the AI is the property of company X, or of some intelligence office of a big country that has veto rights.
We have already seen that ethics is the last thing a company normally thinks about when it is planning its strategies and working habits.
Skynet was prophetic, and we will all be judged come time. Meanwhile, I'll enjoy using my 1960 Sunbeam toaster; it will never talk back.