
Microsoft Doesn't Want Flawed AI to Hurt its Rep


Microsoft told investors recently that flawed AI algorithms could hurt the company’s reputation.

The warning came via a 10-K, the report that the company must file annually with the Securities and Exchange Commission. The 10-K filing is mandatory for public companies and gives investors a view of a company's financial state and the risks it may be facing.

In the filing, Microsoft made it clear that despite recent enormous progress in machine learning, AI is still far from the utopian solution that solves all of our problems objectively. Microsoft noted that if the company ends up offering AI solutions that rely on flawed or biased algorithms, or if those solutions have a negative impact on human rights, privacy, employment or other social issues, its brand and reputation could suffer.

Research Paper Influenced Microsoft on AI Ethics

MIT Media Lab graduate researcher Joy Buolamwini revealed a year ago that Microsoft’s facial recognition system was much less accurate for women and people of color. Microsoft addressed the issue in its system, but it seems to have learned that there is still much work to be done to ensure its AI solutions don't do more harm than good.
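Disparities like the one Buolamwini found tend to surface only when accuracy is broken out by demographic group rather than averaged over a whole benchmark. The short Python sketch below illustrates that kind of disaggregated evaluation; the groups, predictions and numbers are invented for illustration and are not Buolamwini's methodology or Microsoft's results.

    # Hypothetical (predicted_label, true_label, demographic_group) tuples.
    # Overall accuracy here is ~67%, but per-group accuracy tells a
    # different story once the results are disaggregated.
    from collections import defaultdict

    results = [
        ("male", "male", "lighter-skinned men"),
        ("male", "male", "lighter-skinned men"),
        ("female", "female", "lighter-skinned women"),
        ("male", "female", "darker-skinned women"),   # misclassified
        ("male", "female", "darker-skinned women"),   # misclassified
        ("female", "female", "darker-skinned women"),
    ]

    correct = defaultdict(int)
    total = defaultdict(int)
    for predicted, actual, group in results:
        total[group] += 1
        correct[group] += predicted == actual

    for group in total:
        print(f"{group}: {correct[group] / total[group]:.0%} accuracy")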

A month after the paper was published, company president Brad Smith and other senior Microsoft executives formed the internal AI and Ethics in Engineering and Research (AETHER) group to try to keep potentially harmful aspects of the company's AI solutions from materializing.

Hanna Wallach, a senior researcher at Microsoft, wrote in a company blog post at the time: “If we are training machine learning systems to mimic decisions made in a biased society, using data generated by that society, then those systems will necessarily reproduce its biases.”
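Wallach's point can be made concrete with a toy example. The Python sketch below fits a deliberately trivial "model" (a per-group base rate) to hypothetical historical hiring decisions that treated two equally qualified groups differently; the data, feature names and scenario are all invented for illustration, not anything from Microsoft's systems. The fitted model reproduces the unequal treatment it was trained on.

    # Hypothetical (qualification_score, group, historical_decision) records.
    # Group B was hired less often than group A at the same score.
    from collections import defaultdict

    history = [
        (8, "A", 1), (8, "B", 0), (7, "A", 1), (7, "B", 0),
        (5, "A", 0), (5, "B", 0), (9, "A", 1), (9, "B", 1),
    ]

    # "Training": memorize the hire rate per (score, group) bucket.
    hired = defaultdict(list)
    for score, group, decision in history:
        hired[(score, group)].append(decision)

    def predict(score, group):
        past = hired[(score, group)]
        return sum(past) / len(past) >= 0.5 if past else False

    # Equally qualified candidates get unequal predictions, because the
    # training labels encoded the unequal treatment.
    print(predict(8, "A"))  # True
    print(predict(8, "B"))  # False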

Last year, Microsoft also proposed that facial recognition systems should be regulated by governments because the potential for harm is significant.