Could someone use Facebook ads to influence an election? Someone at least appears to have tried during the 2016 presidential election: the company revealed that roughly 470 accounts suspected to be of Russian origin spent $100,000 on ads from June 2015 to May 2017.
Facebook said many of the ads in question didn't specifically mention the presidential election, and instead "appeared to focus on amplifying divisive social and political messages across the ideological spectrum" ranging from "LGBT matters to race issues to immigration to gun rights." Roughly 25% of these ads were geo-targeted, which means the accounts were pushing these issues to specific groups of people in the U.S.
Those figures are just for the ads Facebook has confidently attributed to Russian groups. The company said it found another $50,000 spent on around 2,200 ads that might be politically motivated. (Though it's worth noting this analysis is less concrete because Facebook used broader parameters in its search.) All told, that means up to $150,000 was spent over the course of two years in an attempt to influence Facebook users.
It makes sense for someone hoping to influence an election to buy Facebook ads. The service is nigh ubiquitous, and it's become a primary source of news and discourse for many people. Disguising malicious ads targeted at specific people could sway anyone who's on the fence about a particular issue—and potentially lead them to support whichever candidate aligns with their new perspective.
Part of the reason why this could be so effective can be traced back to Facebook's ad platform. Advertisers have all the tools they need to show something to people based on their location, interests, and countless other factors. This makes for a good business—marketers love nothing more than targeted ads—but it also means the system is ripe for abuse. The company's financial motivations can have serious political ramifications.
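To see why that combination of filters is so powerful, here's a toy sketch of audience targeting. To be clear, this is not Facebook's actual ad API—the user records, field names, and `match_audience` function are all hypothetical—it just illustrates how stacking a location filter on an interest filter narrows an ad's reach to a precise slice of people:

```python
# Illustrative only: a toy model of location + interest ad targeting.
# None of these fields or functions correspond to Facebook's real API.

def match_audience(users, location=None, interests=None):
    """Return the subset of users matching every supplied targeting filter."""
    audience = []
    for user in users:
        if location and user["location"] != location:
            continue  # geo-targeting: drop users outside the target region
        if interests and not interests & user["interests"]:
            continue  # interest targeting: require at least one shared interest
        audience.append(user)
    return audience

users = [
    {"name": "alice", "location": "US", "interests": {"gun rights"}},
    {"name": "bob",   "location": "US", "interests": {"gardening"}},
    {"name": "carol", "location": "FR", "interests": {"gun rights"}},
]

# Geo-target U.S. users already engaged with a divisive topic:
targeted = match_audience(users, location="US", interests={"gun rights"})
```

Only `alice` survives both filters—which is the point: each added criterion shrinks the audience toward exactly the people most likely to respond to a divisive message.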
Here's the silver lining: Facebook said it's upping its efforts to identify these inauthentic accounts and Pages. The company said:
Along with these actions, we are exploring several new improvements to our systems for keeping inauthentic accounts and activity off our platform. For example, we are looking at how we can apply the techniques we developed for detecting fake accounts to better detect inauthentic Pages and the ads they may run. We are also experimenting with changes to help us more efficiently detect and stop inauthentic accounts at the time they are being created.
Those efforts are possible because Facebook has oodles of data (uh, that's the technical term, we swear) and the machine learning chops to properly use it. The company made that clear when it announced in August that it was fighting malicious ad "cloaking," which makes harmful ads seem like they're legitimate, with AI. Other efforts, such as identifying "fake news" or hiding spam links, will also help the company fight campaigns like this.
Not that Facebook is alone in this fight. "We have shared our findings with U.S. authorities investigating these issues," the company said, "and we will continue to work with them as necessary." In the meantime, if you see a promoted link or ad for a hot-button political issue, just remember that it's being shown to you for a reason. Somebody wants to sway public opinion, and Facebook offers the perfect platform to do just that.