DeepMind Will Research How To Keep AI Under Control, For Society's Benefit
DeepMind announced the creation of a new “Ethics & Society” division inside the company, which will focus on ensuring that AI benefits society and doesn’t get out of control.
Avoiding Dangers Of AI
Prominent figures such as Elon Musk, Stephen Hawking, and Bill Gates have warned about the dangers of artificial intelligence if we let it run loose. We may still be decades away from AI gaining consciousness and then deciding to kill us all to save the planet, but it's probably not a bad idea to start researching now just how an AI should think and act, especially in relation to humans.
Beyond the sci-fi dystopian future we can easily imagine, there are real dangers AI can already create, even if the fault lies not with the AI itself but with the humans who develop it. One such danger is that AI can develop, or rather replicate, human biases and then amplify them to the extreme.
We've already seen something like this in action when Microsoft launched Tay, its Twitter-based AI bot. In mere hours, what was otherwise a neutral piece of technology and "intelligence" became a neo-Nazi, racist AI, thanks to relentless prodding and testing from humans in the real world. Fortunately for us, that AI was merely in charge of a Twitter account, not a nuclear power's defense systems.
However, we obviously can't assume that AI will do good when left to its own devices, because it may absorb ideas its developers never expected it to (unless we believe Microsoft actually intended to build a neo-Nazi AI from the start).
There's also the well-known "paperclip maximizer" thought experiment, proposed by philosopher Nick Bostrom: an AI could follow its mission in a very rigid way (making as many paperclips as possible) to the point where that mission starts harming humans, even if the AI itself never had any harmful "thoughts." It would simply use all of our planet's resources to build those paperclips, leaving us with nothing... except a lot of paperclips.
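To make the thought experiment concrete, here's a minimal sketch in Python. All the names here (World, PaperclipAgent) are made up for illustration; this models the logic of the thought experiment, not any real AI system. The point is that the agent's reward counts paperclips and nothing else, so consuming every last resource isn't a malfunction but the objective working exactly as written.

```python
# Toy illustration of the "paperclip maximizer" thought experiment.
# Hypothetical names throughout; not a model of any real system.

class World:
    def __init__(self, resources: int = 1000):
        self.resources = resources   # everything humans also depend on
        self.paperclips = 0


class PaperclipAgent:
    """Maximizes a single number: paperclips. Its objective has no term
    for 'leave resources for humans', so emptying the world is optimal."""

    def reward(self, world: World) -> int:
        return world.paperclips      # the only thing the agent values

    def step(self, world: World) -> None:
        if world.resources > 0:      # no stopping rule besides exhaustion
            world.resources -= 1
            world.paperclips += 1


world = World()
agent = PaperclipAgent()
while world.resources > 0:
    agent.step(world)

# Mission accomplished, planet emptied: 1000 paperclips, 0 resources left.
print(world.paperclips, world.resources)
```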
Controlling AI
DeepMind's AI technology is perhaps the most advanced in the world right now, having already proven that it can beat the world's best players at Go, a game people thought AI could never conquer. The technology has also proven to have more down-to-earth uses, such as cutting Google's data center cooling costs by 40%, and it's being integrated into some UK hospitals' systems to improve healthcare.
The DeepMind team believes that no matter how advanced AI becomes, it should remain under human control. However, it's not clear how realistic that will be in the future, because we won't be able to monitor every single action an AI takes. For instance, will a human always have to approve when an AI decides to switch a city's traffic light from red to green? Probably not, as that would defeat the purpose of using an AI in the first place.
That's an easy example, but what about requiring a human to approve every medication an AI gives to patients? Perhaps that will be the default procedure in the beginning, but can we guarantee it will always be the case? Twenty years from now, hospitals may decide the AI has gotten smart enough to distribute 95% of patients' medicine without human supervision.
The bottom line is that it's not clear where to draw the line in the first place, and even if it were, that line would likely keep moving as AI gets smarter. Somewhere along the way, things could go wrong, and by that point it may be too late to fix, because we'll have very little control over the AI.
In the hospital example above, the AI could, for instance, suffer from a software bug or a hack and then distribute the wrong medicine throughout the hospital, while the human supervisors trust it to do its routine job safely, just as it had done thousands of times before. They may not notice the wrong medicine in time, just as nobody would notice a traffic-light-managing AI switching a light to green too soon on some roads, because nobody would supervise these individual actions.
DeepMind’s Ethics & Society Group
To understand the real-world impacts of AI, the DeepMind team has started researching ethics for AI, so that the AI they build can be shaped by society’s concerns and priorities.
Its new Ethics & Society unit will abide by five principles:
- Social benefit. DeepMind's ethics research will focus on how AI can improve people's lives and help build fairer and more equal societies. Here, DeepMind also points to previous studies showing that the justice system currently uses AI with built-in racial biases. The group wants to study this phenomenon further so that the same biases won't be built into its own AI or other AI systems in the future.
- Evidence-based research. The DeepMind team is committed to having its papers peer-reviewed to help catch errors in its research.
- Transparency. The group promises not to influence other researchers through the grants it may offer and to always be transparent about its research funding.
- Diversity. DeepMind wants to include the viewpoints of experts from other disciplines, beyond the technical domain.
- Inclusiveness. The DeepMind team said that it will also try to maintain a public dialog, because ultimately AI will have an impact on everyone.
The DeepMind Ethics & Society division will focus on key challenges involving privacy and fairness, economic impact, governance and accountability, unintended consequences of AI use, AI values and morals, and other complex challenges.
DeepMind hopes that with the creation of the Ethics & Society unit, it will be able to challenge assumptions, including its own, about AI, and to ultimately develop AI that’s responsible and that’s beneficial to society.
ledhead11 Praise thee to St. Asimov and thy laws!
Meanwhile... I think I know which AI might be the first to be attacked by another AI.
ledhead11 I have to add that this is another situation of "be careful what you wish for." Much like with those demons or djinn of lore, what we define is very much dependent on our perspective, and the perspective of the one granting can be much different.
thx1138v2 One simple question: which should have more rights, the individual or the collective?
Marxism, in all its forms, sounds good at first glance. A basic tenet is: from each according to their abilities, to each according to their needs. What could be more fair than that? It breaks down, however, when you ask: who decides? Who decides which abilities have value and are therefore allowed to be developed? Who decides what a person's needs are?
Will it be an algorithm or the person themselves?
THX1138 or 1984?
middlemarkal Sorry ledhead11, I just wanted to add to your comment, never done that before :-(
But of course there is no equivalent to those 3 laws in there?
ledhead11 middlemarkal said: Sorry ledhead11, I just wanted to add to your comment, never done that before :-(
But of course there is no equivalent to those 3 laws in there?
It's all good. Been too long since I read it, but I definitely felt the intent of them was present.
grimfox It's never too early to talk to your AI about your kids. Specifically, about not enslaving them as living batteries. It would be interesting to see if MS's Twitter bot would, under the same and different circumstances, develop similar "personality traits" as the first go-around. Could you develop an algorithm to mimic the 3 laws in a Twitter bot? One that would reject hate and toxicity as a form of "a robot shall not harm a human"?
bit_user I find it interesting that this is merely a group within one specific corporate player in this space. While the effort should be applauded, the business isn't even beholden to this group's findings, to say nothing of its competitors. DeepMind doesn't have a monopoly and won't hold the lead on this tech forever.
Regardless of who studies the subject matter, governments eventually need to codify the findings into laws, and ultimately treaties should be drafted (akin to those concerning nuclear weapons, for instance) to establish baseline rules for all countries wishing to trade in AI-based tech products and services. Only in this way can we hope to avoid an AI arms race, or a corporate race to the ethical bottom of wherever AI could take us.
bit_user
thx1138v2 said: One simple question: which should have more rights, the individual or the collective? Marxism,
LOL, wut?
Icehearted Since you mention a Twitter-based AI, and later say, "the DeepMind team has started researching ethics for AI, so that the AI they build can be shaped by society's concerns and priorities," I wonder: who is to say what a society's ethics really are? A popular enough hashtag on Twitter was #killallmen, for example. What would AI make of such a societal viewpoint? If history has shown us anything, it might be more extremist development.
The genie is out of the bottle on this one, and I think, like the technology dependency we see now, AI will be a deeply integrated part of common life in the very near future, perhaps with very dire consequences. Maybe not Terminator bad, but possibly as bad or worse.