Microsoft Doesn't Want Flawed AI to Hurt its Rep

Credit: Tatiana Shepeleva/Shutterstock

Microsoft told investors recently that flawed AI algorithms could hurt the company's reputation.

The warning came via a 10-K document that the company files annually with the Securities and Exchange Commission. The 10-K filing is mandatory for public companies and gives investors a view of the company's financial state and the risks it may be facing.

In the filing, Microsoft made it clear that despite recent enormous progress in machine learning, AI is still far from a utopian solution that solves all of our problems objectively. Microsoft noted that if the company ends up offering AI solutions that use flawed or biased algorithms, or if those solutions have a negative impact on human rights, privacy, employment or other social issues, its brand and reputation could suffer.

Research Paper Influenced Microsoft on AI Ethics

MIT Media Lab graduate researcher Joy Buolamwini revealed a year ago that Microsoft’s facial recognition system was much less accurate for women and people of color. Microsoft addressed the issue in its system, but it seems to have learned that there is still much work to be done to ensure its AI solutions don't do more harm than good.

A month after the paper was published, the company's president, Brad Smith, formed the internal AI and Ethics in Engineering and Research (AETHER) group with other Microsoft senior executives, an effort to prevent potentially harmful aspects of the company's AI solutions from materializing.

Hanna Wallach, a senior researcher at Microsoft, wrote in a company blog post at the time: “If we are training machine learning systems to mimic decisions made in a biased society, using data generated by that society, then those systems will necessarily reproduce its biases.”

Last year, Microsoft also proposed that facial recognition systems should be regulated by governments because the potential for harm is significant.

Comments from the forums
  • shrapnel_indie
    Microsoft is playing CYA here, but they should if they want to keep investors from worrying about the shortcomings and potential legal issues involved.
  • mikewinddale
    "Last year, Microsoft also proposed that facial recognition systems should be regulated by governments because the potential for harm is significant."

    But isn't most of the potential harm due to government use? E.g., using flawed facial recognition to justify warrants and arrests that don't truly have probable cause?

    So this is even worse than asking the fox to guard the henhouse. This is asking the fox to guard the fox. The government is the one who will abuse flawed facial recognition, so Microsoft is asking the government to regulate the government.
  • Blitz Hacker
    Microsoft is on point with the finding. Machines are supposedly unbiased and non-random by nature.

    Things have reached a level where if machine AI makes a biased judgment based on facial structure, heritage, social status, race, belief or any other metric, it would provoke a backlash in the boom/bust legacy of modern-day social politics.

    Machines are by nature inherently biased; we're still in denial. Give AI another 25-35 years and the mainstream public might be (hopefully) better equipped to deal with rational conclusions reached from information and bias.

    Sad, but I can't fault MS for the finding, I don't believe they're wrong.