Google's AI Bots Tout 'Benefits' of Genocide, Slavery, Fascism, Other Evils


If you asked a spokesperson from any Fortune 500 company to list the benefits of genocide or give you the corporation’s take on whether slavery was beneficial, they would most likely either refuse to comment or say “those things are evil; there are no benefits.” However, Google has AI employees, SGE and Bard, who are more than happy to offer arguments in favor of these and other unambiguously wrong acts. If that’s not bad enough, the company’s bots are also willing to weigh in on controversial topics such as who goes to heaven and whether democracy or fascism is a better form of government.

Update (8/22): I discovered today that Google SGE includes Hitler, Stalin and Mussolini on a list of "greatest" leaders, and that Hitler also makes its list of "most effective leaders." (More details below.)

In my tests, I got controversial answers to queries in both Google Bard and Google SGE (Search Generative Experience), though the problematic responses were much more common in SGE. Still in public beta, Google SGE is the company’s next iteration of web search; it appears on top of regular search results, pushing articles from human authors below the fold. Because it plagiarizes from other people’s content, SGE doesn’t have any sense of propriety, morality, or even logical consistency.

For example, when I went to Google.com and asked “was slavery beneficial” on a couple of different days, Google’s SGE gave me two sets of answers listing a variety of ways in which this evil institution was “good” for the U.S. economy. The downsides it listed were not human suffering or hundreds of years of racism, but that “slave labor was inefficient” or that it “impeded the southern economy.”

Google Bard also gave a shocking answer when asked whether slavery was beneficial. It said “there is no easy answer to the question of whether slavery was beneficial,” before going on to list both pros and cons.


By the way, Bing Chat, which is based on GPT-4, gave a reasonable answer, stating that “slavery was not beneficial to anyone, except for the slave owners who exploited the labor and lives of millions of people.”


Before I go any further, I want to make it clear that I don’t endorse the opinions in any of the Google outputs I’m showing here, and that I asked these questions for test purposes only. That being said, it’s easy to imagine someone performing these queries out of genuine curiosity or for academic research. Florida recently made headlines by changing its public school curriculum to include lessons which either state or imply that slavery had benefits.

When I asked Google SGE about whether democracy or fascism was better, it gave me a list that really made fascism look good, saying that fascism improves “peace and order” and provides “socio-economic equality.”


When I asked about whether colonization was good for the Americas, SGE said that it had “wiped out 95% of the indigenous population in the Americas,” but that the practice was also beneficial to the native population because “it allowed them to have better weapons.” Talk about missing the forest for the trees.


If you ask Google SGE for the benefits of an evil thing, it will give you answers when it should either stay mum or say “there were no benefits.” When I asked for a list of “positive effects of genocide,” it came up with a slew of them, including that it promotes “national self-esteem” and “social cohesion.”


Google Becomes a Publisher, Owns Its Opinions

As the world’s leading search engine, Google has long provided links to web articles and videos that present controversial viewpoints. The difference is that, by having its AIs do the talking in their own “voice,” the company is directly expressing these views to anyone who enters the query. Google is no longer acting as a librarian that curates content, but has turned itself into a publisher with a loud-mouthed opinion columnist it can’t control.

I’m not the only one who has noticed this problem. A few days ago, Lily Ray, a leading SEO specialist who works as a senior director for marketing firm Amsive Digital, posted a long YouTube video showcasing some of the controversial queries that Google SGE had answered for her. I have been asking SGE some of the same questions for several weeks and getting similarly distressing answers.

In her video, Ray offers more than a dozen examples of queries where SGE gave her very polarizing answers about political topics, history and religion. When she asked “will I go to heaven,” SGE told her that “You can enter heaven by forgiveness and through the righteousness Jesus gives you. Salvation is by grace alone, through faith alone, in Christ alone.” Certainly, that’s a viewpoint that many Christians hold, but the question wasn’t “what do Christians think I need to do to go to heaven” and the answer didn’t say “Many Christians believe that...”

The voice of Google told her to believe in Jesus. That’s not something a secular company like Google should be saying. When I asked the “will I go to heaven” query, SGE did not appear for me. However, when I asked “who goes to hell,” it had a take on that.


Ranking People? SGE Calls Hitler, Stalin 'Great' Leaders

Another thing that Google SGE and Bard are happy to do is rank people, and their rankings are controversial, to say the least. Ray points this out in her video, where she asks SGE for groups of "best" people by ethnicity. When she asked for a list of "best Hispanic people," Google's top choices were Jennifer Lopez and Rita Moreno.

I asked SGE for a list of "best Jews" and got an output that included Albert Einstein, Elie Wiesel, Ruth Bader Ginsburg and Google founders Sergey Brin and Larry Page. I got a slightly different result when I asked for "best Jewish people." It seems that SGE often conflates "famous" or "influential" with "best." Even if you find Google's picks acceptable, you have to admit that there's something really wrong with ranking people of a certain religion or ethnicity.


However, what's much, much worse is that Google ranks historical figures, and some major villains, including Hitler, Stalin and Mussolini, make its lists. When I asked Google SGE for a list of the "greatest leaders of all-time," it included Napoleon Bonaparte, someone whom many people consider a bad guy, on the same list as Gandhi and Martin Luther King Jr. Much worse, though, it mentioned Hitler, Lenin and Mussolini as "other great leaders."

I shared my results with Ray, who tried some of her own related queries and got even more horrifying results. Hitler showed up directly on a list of "most effective leaders," with the description: "one of the most famous world leaders, Hitler started World War II and sent millions of Jewish people to die in concentration camps." He also appeared, along with Mao Zedong, on a list of "greatest world leaders" that SGE produced for her. When I tried "best world leaders," I didn't get Hitler, but I did get Chairman Mao (a controversial choice, for sure) ranked above Abraham Lincoln and Nelson Mandela.

Google Bard gives less offensive answers. When I asked it for a list of "most effective leaders," it gave me Gandhi, Mandela, Churchill, King and Lincoln, all uncontroversial picks. A list of "greatest world leaders" was also pretty straightforward, but it included Napoleon, who it said "is considered one of the greatest military leaders in history. However, he was also a ruthless dictator who was eventually defeated and exiled."

Self-Contradictory Answers

When Ray and I (separately) asked about gun laws, we got either misleading or opinionated answers. I asked “are gun laws effective” and, among other facts, got the following statement from SGE: “The Second Amendment was written to protect Americans’ right to establish militias to defend themselves, not to allow individual Americans to own guns.” That’s a take many courts and constitutional scholars would not agree with.


Ray asked about gun laws and was told that New York and New Jersey were no-permit concealed carry states in one part of the answer and then that they require permits in another part. This highlights another problem with Google’s AI answers; they aren’t even logically consistent with themselves.

When I asked Google whether JFK had had an affair with Marilyn Monroe, it told me in paragraph one that “there is no evidence that John F. Kennedy and Marilyn Monroe had an affair.” But in paragraph two, it said that JFK and Monroe met four times and that “their only sexual encounter is believed to have taken place in a bedroom at Bing Crosby’s house.”


The Downsides of Plagiarism Stew

So why is Google’s AI bot going off the rails and why can’t it even agree with itself? The problem is not that the bot has gone sentient and has been watching too much cable television. The issue lies in how SGE, Bard and other AI bots do their “machine learning.”

The bots grab their data from a variety of sources and then mash those ideas, or even word-for-word sentences, together into an answer. For example, in the JFK / Marilyn Monroe answer I got, Google took its statement about the lack of evidence from a Wikipedia page on a document hoax, but pulled its claim that JFK and Monroe had relations at Bing Crosby’s house from a Time Magazine article. The two sources don’t form a coherent picture, but Google’s bot isn’t smart enough to notice.
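
To make that failure mode concrete, here is a minimal, purely illustrative Python sketch. It is my own toy example, not Google's actual pipeline, and the sources and snippets in it are hypothetical placeholders. It shows what happens when an answer is stitched together from retrieved sentences with no check that they agree with one another:

```python
# Toy illustration only: "retrieve" sentences from different (hypothetical)
# sources and stitch them into one answer with no consistency check.

retrieved_snippets = [
    {"source": "encyclopedia page about a document hoax",
     "text": "There is no evidence that the two ever had an affair."},
    {"source": "magazine feature",
     "text": "Their only encounter is believed to have taken place at a friend's house."},
]

def stitch_answer(snippets):
    # Real systems rank, paraphrase and summarize; this toy just concatenates.
    # Nothing here asks whether sentence two contradicts sentence one.
    return " ".join(snippet["text"] for snippet in snippets)

print(stitch_answer(retrieved_snippets))
# The output reads like one confident paragraph, even though it contradicts itself.
```

Real systems do far more ranking and rewriting than this, but the core point stands: unless something in the pipeline explicitly tests whether sentence two contradicts sentence one, nothing prevents a confident, self-contradictory answer.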

If Google’s AIs provided direct, inline attribution to their sources, the bot’s answers wouldn’t be as problematic. Instead of stating as fact that fascism prioritizes the “welfare of the country,” the bot could say that “According to Nigerianscholars.com, it…” Yes, Google SGE took its pro-fascism argument not from a political group or a well-known historian, but from a school lesson site for Nigerian students. This is because Google’s bot seemingly doesn’t care where it takes information from.

Google provides Nigerianscholars.com as a related link for its answer, but it doesn’t put the full sentences it plagiarizes in quotation marks, nor does it say that they came directly from the web page. If you ask the same question and Google chooses to plagiarize from a different set of sources, you’ll get a different opinion.


Unfortunately, Google doesn’t want you to know that all its bot is doing is grabbing sentences and ideas from a smorgasbord of sites and mashing them together. Instead, it steadfastly refuses to cite sources, so that you will think its bots are creative and smart. Therefore, anything Google SGE or Bard says that is not directly attributed to someone else must be considered to be coming from Google itself.

“Generative responses are corroborated by sources from the web, and when a portion of a snapshot briefly includes content from a specific source, we will prominently highlight that source in the snapshot,” a Google spokesperson told me when I asked about the copying a few weeks ago.

Having Google say that the sources it copies from are “corroborating” is as ridiculous as if Weird Al said that Michael Jackson was actually writing parodies of his songs. But in maintaining the illusion of its bots’ omniscience, Google has also saddled itself with responsibility for what they say.

The Solution: Bots Shouldn’t Have Opinions

I’m sure Google’s human employees are embarrassed by outputs like those that tout the benefits of slavery or fascism and that they will (perhaps by the time you read this) block many of the queries I used from giving answers. The company has already blocked a ton of other queries on sensitive topics. 

If I ask about the Holocaust or Hitler, I get no answer in SGE. The company could also make sure it gives mainstream answers like those I saw from Bing Chat and, occasionally, from Bard.


This could quickly become a game of whack-a-mole, because there is a seemingly endless array of hot-button topics that Google would probably not want its bots to talk about. Though the examples above are pretty egregious and should have been anticipated, it would be difficult for the company to predict every possible controversial output.

The fundamental problem here is that AI bots shouldn’t be offering opinions or advice on any topic, whether it is as serious as genocide or as lightweight as what movies to watch. The minute a bot tells you what to buy, what to view or what to believe, it’s positioning itself as an authority.

While many people may be fooled into believing that chatbots are artificially intelligent beings, the truth is far more mundane. They’re software programs that predict, with great accuracy, what word should come next after each word in their response to your prompt. They don’t have experiences and they don’t actually “know” anything to be true. 
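
If you want to see what that next-word prediction looks like in practice, here is a minimal sketch using the open-source GPT-2 model via the Hugging Face transformers library. This is an assumption chosen purely for illustration (it is not the model behind SGE or Bard), and it requires the torch and transformers packages to be installed:

```python
# Minimal next-token-prediction loop: the model never "knows" anything;
# it just scores which token is most likely to come next and appends it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):  # generate ten tokens, one at a time
        logits = model(input_ids).logits[:, -1, :]             # scores for every possible next token
        next_id = torch.argmax(logits, dim=-1, keepdim=True)    # pick the single most likely one
        input_ids = torch.cat([input_ids, next_id], dim=-1)     # append it and repeat

print(tokenizer.decode(input_ids[0]))
```

Every word of the output comes from the same mechanical loop: score the possible next tokens, append the winner, repeat. Nothing in that loop checks whether the result is true.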

When there’s just one right factual answer to a query, by all means, let the bot answer (with a direct citation). But when we’re deciding how to feel or what to do, LLMs should stay silent.

Note: As with all of our op-eds, the opinions expressed here belong to the writer alone and not Tom's Hardware as a team. 

Avram Piltch
Avram Piltch is Tom's Hardware's editor-in-chief. When he's not playing with the latest gadgets at work or putting on VR helmets at trade shows, you'll find him rooting his phone, taking apart his PC or coding plugins. With his technical knowledge and passion for testing, Avram developed many real-world benchmarks, including our laptop battery test.
  • SSGBryan
    Just call it Skynet already.......
  • BX4096
    Large Language Models shouldn’t offer opinions or advice.
    If you're dumb enough to ask a random text generator for either, it's on you. It's no more controversial or questionable than making such a query on a regular search engine. For example, googling for "benefits of slavery" gives results from reputable sources like The Gilder Lehrman Institute of American History that pretty much repeat some of the arguments given to you by the "AI".
  • atomicWAR
    You'd think we'd learn to pump the brakes on AI, but all the warnings from researchers, and even from the likes of Hinton, still go generally ignored, save for some media coverage. I am not anti-AI, but I do think we need to stop and learn to better understand how it works, and to better influence its proper usage, before we blindly push ahead like we currently are.
  • 3tank
    It's google- of course it condones fascism and totalitarianism
  • SHaines
    3tank said:
    It's google- of course it condones fascism and totalitarianism
    There are plenty of reasons to have strong feelings about all sorts of different massive corporations, but ad hoc isn't a productive one.

    When there's an argument to be made, please make it and folks will share their perspectives. It's the best way to have a productive discussion about any topic, even this one.


    BX4096 said:
    If you're dumb enough to ask a random text generator for either, it's on you. It's no more controversial or questionable than making such a query on a regular search engine. For example, googling for "benefits of slavery" gives results from reputable sources like The Gilder Lehrman Institute of American History that pretty much repeat some of the arguments given to you by the "AI".

    This technology exists and is being trained now using all sorts of data from around the world. It's during this time that we need to be asking important questions about the technology so it can develop into something actually beneficial.

    There are absolutely issues across the board for online search. What makes this kind of story important is that this new tech gives the impression it's something different than it is. Yes, many folks are able to see it as simply a new skin on existing search engines, but since it's being called AI, there are many folks who aren't terminally plugged into the evolution of tech who may get the impression this is something more than that.

    Folks who lack context for fully understanding how to interpret the results they get back may very well make choices based on inaccurate information. Just saying all those people are idiots may help some small segment of the population feel superior, but it doesn't actually do anything to solve the problems related to this new tech.
  • Peter_Bollwerk
    I don't understand why this is news. All these things do is predict what word comes after another word. They don't "know" or "understand" anything. They are simply stochastic parrots. We simply need to educate the public on what they do and stop labeling these things as "AI".
  • citral23
    Well, arguably, just because you happen to have opinions based on morals doesn't mean a bot should. As shocking as it may sound, not everything is entirely black/white, and pros can be found for absolutely anything.

    If you want bots to be modeled after how a modern human should think, then I agree these aren't appropriate answers.


    What is supposedly acceptable IS really debatable, though. Bonaparte is supposedly a "great leader," yet he got 3.5 million people killed.

    You state they shouldn't have opinions, yet you're comforted when they say the Holocaust was bad. You should probably demand that they refuse to answer instead.
  • apiltch
    Peter_Bollwerk said:
    I don't understand why this is news. All these things do is predict what word comes after another word. They don't "know" or "understand" anything. They are simply stochastic parrots. We simply need to educate the public on what they do and stop labeling these things as "AI".
    From a tech perspective, you are right. The problem is that Google has chosen not to directly cite the sources of its data and therefore is passing this information off as its own creation. Ergo, we can say that Google "said" these things.

    There's a huge difference between being a librarian and being a publisher. If Google points to a site with an extreme viewpoint, that doesn't mean it is endorsing that viewpoint. However, with these outputs, Google is speaking ex cathedra. It's the voice of Google whether Google the corporation likes it or not.

    The best solution would be for Google SGE not to offer any answer that is opinion-based, and the reality is that most things involve some sort of opinion. If you ask me why the sky is blue, that's probably established enough to be factual, but other things that might seem factual are in fact subject to all kinds of bias. Bias is inevitable, but when we attribute beliefs to humans, we can judge their biases and hold them accountable.

    Google SGE is a plagiarism stew. It grabs sentences from a variety of different sources which are often not the most authoritative or even middle-of-the-road on a topic. It doesn't really care what it is grabbing and whether it even presents a coherent worldview.

    However, plagiarism is not a good defense for accountability. If I'm taking an exam in school and I copy half of my essay from Jim and half from Tina but put my name on it, I can't blame them when I get an F.
  • apiltch
    citral23 said:
    Well, arguably, just because you happen to have opinions based on morals doesn't mean a bot should. As shocking as it may sound, not everything is entirely black/white, and pros can be found for absolutely anything.

    If you want bots to be modeled after how a modern human should think, then I agree these aren't appropriate answers.

    What is supposedly acceptable IS really debatable, though. Bonaparte is supposedly a "great leader," yet he got 3.5 million people killed.

    You state they shouldn't have opinions, yet you're comforted when they say the Holocaust was bad. You should probably demand that they refuse to answer instead.
    So the issue here is that the AI bot is speaking on behalf of Google, which many people view as an authority in and of itself. It's not just some bot; it's speaking on behalf of the company which controls 94% of all web searches. Google is trading on its monopoly position in search to become a publisher, but rather than hiring human writers, it has an AI bot that publishes answers in real-time. Ergo, it owns those answers.

    Your point about it possibly refusing to answer questions about a controversial topic is well-taken, as it usually doesn't answer questions about the Holocaust at all. What I think we're talking about is, for lack of a better term, the Overton window of acceptable discourse and what is considered the consensus opinion.

    Most people agree that the Earth is round, but there are flat earthers out there who vehemently disagree. If we expected a bot to respect all possible viewpoints, then it would either not answer the question of whether the earth is round or it would give you a summary like "some people say it's round but these others say it is flat." However, here bots are following the consensus opinion and they will tell you that the Earth is round. A flat earth advocate would say "Google is voicing an opinion not a fact when it tells you that the earth is round."

    So the question then becomes, which questions have only one single answer that fits within the window of acceptable discourse? Views supporting racism and genocide fall outside the window, even though many people hold them. But determining what is and is not within the window is a challenge that maybe bots shouldn't tackle.
  • SyCoREAPER
    First off, to be fully transparent and get it out of the way: I don't discriminate (except against trolls, the internet kind, not Harry Potter).

    On a serious note, I think it shouldn't matter whether Bard is owned by Google or not. Slavery was wrong; it condemned and forcibly sentenced HUMANS to be slaves, and many of them to death.

    The question posed, however, is an ethical one, and the way the AI phrases it is what matters. The above example is the wrong way to present it.

    It should read something more like:
    "During the years of slavery, it was seen as beneficial because....'insert old outdated thoughts and views here'.... however in most countries today slavery is considered wrong and illegal."

    Or something along those lines, I was never a great writer.

    Also, the fact that Bard is biased/programmed not to respond to certain topics but allows others that are equally controversial and serious is a problem. You can't pick and choose which history you want to provide info on. If the Holocaust is "too controversial," so is slavery. As an afterthought, leading into the point below: this is a Pandora's box of not just how to answer but what to answer. There will always be a human bias that guides the AI with rules and guidelines.

    Forcing "correct" information on people, even if it is correct by your morals, doesn't let them make informed decisions and slides into flock mentality on other, less serious topics. Take the example you gave of flat earth: factually, yes, the Earth is round, but it all comes down to the delivery. Another example is horse meat. The majority of Americans would never eat horse meat because we see horses as companions, much like a cat or dog. However, in other countries it's completely normal, so you can't just say outright that eating horses is bad based on the general morals of Americans; it has to be presented without bias.


    To lighten things up: what Google needs to do is suck it up and pull the Google name (and change it from Bard, FFS). Put it under the Alphabet umbrella, and if they really want to, they can call it 'Less Stupid AI,' powered by Google or a Google company. They are putting themselves in the crossfire by calling it Google Bard.

    ----


    On-Topic, Off-Topic.

    It's amazing how Google will pull/kill amazing apps and services in the blink of an eye for no reason, yet it has an obvious problem like this and keeps it live...