Civil Rights Group Wants to Ban Feds From Using Facial Recognition



Fight for the Future, one of the main activist groups that has fought for net neutrality and against mass surveillance in recent years, launched a new campaign today calling for a ban on the use of facial recognition software by the federal government. The group called facial recognition technology “unreliable, biased and a threat to basic rights and safety.”

The FFTF is calling on everyone to contact their representatives in Congress to ask for a ban on facial recognition use by the federal government.

It argues that, like nuclear or biological weapons, facial recognition technology poses a threat to human society and basic rights that far outweighs any potential benefits. The group called out Silicon Valley companies that have requested light regulation of the technology in an attempt to avoid the debate about whether governments and law enforcement should be allowed to use facial recognition at all (Microsoft, though based in Washington state rather than Silicon Valley, and its president Brad Smith have also been quite vocal).

Previous reports have shown that the facial recognition technologies used by law enforcement have a high real-world failure rate when it comes to identifying the right person. This could lead to the harassment, arrest, or even deportation of the wrong people.
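To put the failure-rate concern in perspective, consider the base-rate math behind crowd scanning: innocent faces vastly outnumber watchlisted ones, so even a low false match rate yields mostly wrong alerts. The sketch below uses made-up numbers for illustration, not figures from those reports:

```python
# Back-of-the-envelope sketch of the base-rate problem in crowd scanning.
# Every number here is an illustrative assumption, not data from the reports.

crowd_size = 50_000            # faces scanned at an event
watchlisted_present = 10       # assume 10 watchlisted people actually attend
true_positive_rate = 0.90      # assume the system catches 90% of them
false_match_rate = 0.001       # assume 0.1% of innocent faces get flagged anyway

true_alerts = watchlisted_present * true_positive_rate                # ~9
false_alerts = (crowd_size - watchlisted_present) * false_match_rate  # ~50

precision = true_alerts / (true_alerts + false_alerts)
print(f"share of alerts pointing at the right person: {precision:.0%}")  # ~15%
```

Under these assumptions, roughly five out of six alerts would point at an innocent person, which is how a system with seemingly strong lab accuracy can still misidentify people at scale.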

The activist group also said that law enforcement officers often search facial recognition databases without warrants. Law enforcement agencies have also begun to share this facial recognition data with private companies, including airlines.

The FFTF said facial recognition software tends to be inaccurate, especially with people of color, women and children, putting these groups at higher risk of harassment, wrongful arrest, or worse.
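That kind of disparity is measurable rather than rhetorical: an audit can compute error rates per demographic group instead of relying on a single headline accuracy figure. A minimal sketch, using entirely synthetic trial data rather than results from any real system:

```python
from collections import Counter

# Synthetic audit records: (demographic tag, system said "match", truly same person).
# The tags echo the groups named above; every value here is invented.
trials = [
    ("lighter-skinned men",  False, False),
    ("lighter-skinned men",  False, False),
    ("lighter-skinned men",  True,  True),
    ("darker-skinned women", True,  False),   # innocent pair wrongly flagged
    ("darker-skinned women", False, False),
    ("children",             True,  False),   # another wrongful flag
    ("children",             True,  True),
]

innocent = Counter()  # comparison pairs that are NOT the same person, per group
flagged = Counter()   # of those, how many the system still called a match
for group, said_match, same_person in trials:
    if not same_person:
        innocent[group] += 1
        flagged[group] += said_match  # bool True counts as 1

for group, total in innocent.items():
    print(f"{group}: false match rate {flagged[group] / total:.0%}")
```

A real audit would need thousands of labeled trials per group, but the per-group breakdown is the point: one aggregate accuracy number can hide exactly the disparity being alleged.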

Facial recognition technology is unlike other surveillance technologies, the group warned: 

“It enables automated and ubiquitous monitoring of an entire population, and it is nearly impossible to avoid. If we don’t stop it from spreading, it will be used not to keep us safe, but to control and oppress us—just as it is already being used in authoritarian states," the announcement said. 

The Electronic Frontier Foundation (EFF) has also supported a ban on the federal use of facial recognition technology since earlier this year. The EFF has made many of the same arguments as the FFTF and has also argued that mass deployment of facial recognition by federal and local governments would deter protests and chill free speech in general.

Beyond arguments that the technology doesn’t work properly or that the government will inevitably abuse it, the civil rights groups have also argued that these facial recognition databases are appealing targets for cybercriminals. The same data could be used for harassment, blackmail, identity fraud and many other nefarious purposes.

The U.S. federal government doesn’t have the best track record in this regard. In 2015, its Office of Personnel Management (OPM) suffered one of the largest data breaches in history, which resulted in the leaking of sensitive background checks on millions of federal employees as well as millions of stored fingerprints.

Lucian Armasu
Lucian Armasu is a Contributing Writer for Tom's Hardware US. He covers software news and the issues surrounding privacy and security.
  • 10tacle
    "The FTTF said facial recognition software tends to be inaccurate, especially with people of color, women and children, putting these categories at higher risk of harassment, wrongful arrest, or worse."
    Well that's interesting. So "people of color, women, and children" are more prone to misidentification than white adult males here. I deduce that is the conclusion of the geniuses behind the group complaining here. I actually agree with them that leaving our civil liberties to government programs is asking for trouble, but to say that pattern recognition ONLY has the potential to misidentify those demographics they mention just nulls and voids any credibility they have. Then again I have to remind myself these "civil rights" groups are the first to support and vote for politicians who approve of these ever-increasing 1984 Orwellian technologies.
  • USAFRet
    10tacle said:
    "The FTTF said facial recognition software tends to be inaccurate, especially with people of color, women and children, putting these categories at higher risk of harassment, wrongful arrest, or worse."
    Well that's interesting. So "people of color, women, and children" are more prone to misidentification than white adult males here. I deduce that is the conclusion of the geniuses behind the group complaining here. I actually agree with them that leaving our civil liberties to government programs is asking for trouble, but to say that pattern recognition ONLY has the potential to misidentify those demographics they mention just nulls and voids any credibility they have. Then again I have to remind myself these "civil rights" groups are the first to support and vote for politicians who approve of these ever-increasing 1984 Orwellian technologies.
    There have been proven instances of facial recognition screwing up when presented with non-white faces.

    https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html
    https://www.govtech.com/products/Bias-Still-Haunts-Facial-Recognition-Microsoft-Hopes-to-Change-That.html
    https://www.forbes.com/sites/mzhang/2015/07/01/google-photos-tags-two-african-americans-as-gorillas-through-facial-recognition-software/#2a5b2a0f713d
  • thegriff
    USAFRet said:
    There have been proven instances of facial recognition screwing up when presented with non-white faces.

    https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html
    https://www.govtech.com/products/Bias-Still-Haunts-Facial-Recognition-Microsoft-Hopes-to-Change-That.html
    https://www.forbes.com/sites/mzhang/2015/07/01/google-photos-tags-two-african-americans-as-gorillas-through-facial-recognition-software/#2a5b2a0f713d
    But I believe, from what I've read, that the issue has more to do with the data (i.e., the photos/pictures) they use to train the algorithms. They use considerably more data from white people; they just need to use more images of women and dark-skinned people. My question would be: is an Italian (a lot are dark-skinned) going to have the same issue?
    Also, most of this is for a first check; eventually you would need to compare the faces manually.
  • USAFRet
    Regardless of why it happens...it apparently does happen.

    In development, you need to construct and test for a LOT of edge cases.
    If your dev team and testing sample is all white guys, it may work perfectly, because that's all you train it and test it for. You might not even think it would have issues with other demographics.

    They're not doing it on purpose. Simply that they don't even think about it. The thought is foreign.

    This happens a lot with monoculture teams.
    A team of all women might build something they think is great. But when presented with a user that is a male from a completely different culture, he might think it is the dumbest application in the world.
    Or whatever.

    Too often, things are not tested in a wide enough range. Be it facial recognition, tiny buttons on a camera, instructions for assembling furniture, whatever.
  • thegriff
    USAFRet said:
    Regardless of why it happens...it apparently does happen.

    In development, you need to construct and test for a LOT of edge cases.
    If your dev team and testing sample is all white guys, it may work perfectly, because that's all you train it and test it for. You might not even think it would have issues with other demographics.

    They're not doing it on purpose. Simply that they don't even think about it. The thought is foreign.

    This happens a lot with monoculture teams.
    A team of all women might build something they think is great. But when presented with a user that is a male from a completely different culture, he might think it is the dumbest application in the world.
    Or whatever.

    Too often, things are not tested in a wide enough range. Be it facial recognition, tiny buttons on a camera, instructions for assembling furniture, whatever.
    Agree, but I believe what these groups were implying is that they were intentionally being racist/sexist, etc. At least I've seen those implications in other articles. That's more the issue I have. Being somewhat sloppy with the overall big picture is an easily solvable issue.
  • AllanGH
    10tacle said:
    Then again I have to remind myself these "civil rights" groups are the first to support and vote for politicians who approve of these ever-increasing 1984 Orwellian technologies.
    You obviously have a political axe to grind. Please grind it on Twitter or Facebook.

    I'm here to get away from that kind of nonsense.
  • USAFRet
    thegriff said:
    Agree, but I believe what these groups were implying is that they were intentionally being racist/sexist, etc. At least I've seen those implications in other articles. That's more the issue I have. Being somewhat sloppy with the overall big picture is an easily solvable issue.
    People say or imply all sorts of things to promote their specific agenda, whether it is real or not.
  • Math Geek
    I laughed when I read a recent article where the author was SHOCKED that all the DMV ID pics are handed over to the feds' facial recognition database.

    I wonder why people think the various DMV agencies changed how a picture could be taken a few years ago: no smiling, no head coverings and all that stuff. It was so the database would have an easier time doing its thing with the pics once they were added.

    If you want to get away from the surveillance, you'll have to move to the remote Congo or Amazon rain forest. Might still be a couple of years before there are even cameras deep in them jungles...