Microsoft Doesn't Want Flawed AI to Hurt its Rep

(Image credit: Tatiana Shepeleva/Shutterstock)

Microsoft told investors recently that flawed AI algorithms could hurt the company’s reputation.

The warning came via a 10-K, a document public companies must file annually with the Securities and Exchange Commission. The 10-K filing gives investors a way to learn about a company's financial state and the risks the company may be facing.

In the filing, Microsoft made it clear that despite recent enormous progress in machine learning, AI is still far from a utopian solution that solves all of our problems objectively. Microsoft noted that if the company ends up offering AI solutions that use flawed or biased algorithms, or if those solutions have a negative impact on human rights, privacy, employment or other social issues, its brand and reputation could suffer.

Research Paper Influenced Microsoft on AI Ethics

MIT Media Lab graduate researcher Joy Buolamwini revealed a year ago that Microsoft’s facial recognition system was much less accurate for women and people of color. Microsoft addressed the issue in its system, but it seems to have learned that there is still much work to be done to ensure its AI solutions don't do more harm than good.

A month after the paper was published, the company’s president, Brad Smith, formed the internal AI and Ethics in Engineering and Research (AETHER) group with other Microsoft senior executives as a way to try to prevent potential harmful aspects of the company’s AI solutions from materializing.

Hanna Wallach, a senior researcher at Microsoft, wrote in a company blog post at the time: “If we are training machine learning systems to mimic decisions made in a biased society, using data generated by that society, then those systems will necessarily reproduce its biases.”

Last year, Microsoft also proposed that facial recognition systems should be regulated by governments because the potential for harm is significant.

Lucian Armasu
Lucian Armasu is a Contributing Writer for Tom's Hardware US. He covers software news and the issues surrounding privacy and security.
  • shrapnel_indie
Microsoft is playing COA here, but they should if they want to keep investors from worrying about the shortcomings and potential legal issues involved.
  • mikewinddale
    "Last year, Microsoft also proposed that facial recognition systems should be regulated by governments because the potential for harm is significant."

    But isn't most of the potential harm due to government use? E.g., using flawed facial recognition to justify warrants and arrests that don't truly have probable cause?

    So this is even worse than asking the fox to guard the henhouse. This is asking the fox to guard the fox. The government is the one who will abuse flawed facial recognition, so Microsoft is asking the government to regulate the government.
  • Blitz Hacker
Microsoft is on point with this finding. Machines are inherently non-biased and non-random by nature.

Things have reached a level where if a machine AI makes a biased judgment based on facial structure, heritage, social status, race, belief or any other metric, it would provoke backlash in the boom/bust legacy of modern-day social politics.

Machines are by nature inherently biased, and we're still in denial. Give AI another 25-35 years and the mainstream public might (hopefully) be better equipped to deal with rational conclusions reached from information and bias.

Sad, but I can't fault MS for the finding; I don't believe they're wrong.
  • Blitz Hacker
Err, machines are inherently biased, not non-biased (based on their inputs' bias).
  • mischon123
MS is absolutely correct. And there is no problem with developing and building in a sandbox.
After all, simulation is the developer's game.
But we need to rebuild and advance civilisation first and remove all the social constructivists that created the 9bn-people boondoggle. AI is not for the masses; it's to make paradise for the few intelligent and beautiful, creating few intelligent and beautiful offspring. Giving AI to Gammas is pointless. In a later instance, conscious AI will be a great companion and not a traffic cop for idiots. In the third instance, it will surpass us. Let's keep the record clean so as not to muddle future relations.
So yes, hats off to MS for consciously criticizing the current approach.
Haven't heard from Apple's Cook. They use that stuff already in their credit checks etc.
  • littleleo
The rep? They sell vastly overpriced, buggy software. Naw, never happen.