• Campaign launched to prevent facial recognition software vendors and clients from selling or buying technology that can be weaponized for lethal purposes or other abuse.

     

    December 11, 2018

    Boston, MA – English police stopped a black man in London last July after facial recognition software misidentified him. Police demanded the man’s ID, emptied his pockets and searched him in front of a crowd of onlookers. The man hadn’t committed a crime and wasn’t suspected of any.

     

    Get ready for that public humiliation or worse to happen here. Police departments, airports, schools, businesses and other entities across the United States and around the world are using facial analysis software to identify suspects, track movement and activities, take attendance and advertise products.

     

    But Joy Buolamwini has a plan to preclude people from being stopped and frisked or killed because of faulty software. The founder of the Algorithmic Justice League, and an AI researcher at MIT, aims to prevent facial analysis technology from leading to collateral damage.

     

    “Computer vision uses machine-learning techniques to do facial analysis,” says Buolamwini, named to the recent Bloomberg 50 list for her 2018 accomplishments. “You create a training set with examples of faces. However, if the training sets aren’t diverse, any face that deviates too much from the established norm will be harder to detect, identify, or classify for attributes like age. With the errors, biases and lack of oversight, companies should have more accountability.”

     

    Accountability means that artificial intelligence (AI) vendors and clients commit to not allowing the technology to be used for lethal targeting or other abuse, and continually monitor AI for racial, gender, and other harmful bias. In her New York Times op-ed on the dangers of facial analysis technology and during Federal Trade Commission hearings on AI, Buolamwini called for federal regulations.

     

    Now, she is urging public and private organizations including NEC, IBM, Microsoft, Google, Facebook, Amazon, Megvii, and Axon to sign the Safe Face Pledge. Three producers of facial analysis software, Robbie.AI, Yoti and Simprints, have already confirmed that they will sign the Safe Face Pledge. The pledge specifically requires them to:

     

    • Show Value for Human Life, Dignity, and Rights
      • Do not contribute to applications that risk human life
      • Do not facilitate secret and discriminatory government surveillance
      • Mitigate law enforcement abuse
      • Ensure your rules are being followed
    • Address Harmful Bias     
      • Implement internal bias evaluation processes and support independent evaluation
      • Submit models on the market for benchmark evaluation where available
    • Facilitate Transparency
      • Increase public awareness of facial analysis technology use
      • Enable external analysis of facial analysis technology on the market
    • Embed Safe Face Pledge into Business Practices
      • Modify legal documents to reflect value for human life, dignity, and rights
      • Engage with stakeholders
      • Provide details of Safe Face Pledge implementation  

     

    “Audits of facial analysis systems show the technology is better at reading male faces than female faces, and more accurately classifies lighter faces than darker faces,” says Buolamwini. “My research at MIT, which audited IBM, Microsoft, and Megvii, showed error rates as high as 35 percent for classifying dark-skinned women.”

     

    In July, the ACLU tested Amazon’s facial analysis software, Rekognition, using photos of every member of the House and Senate. The software incorrectly matched 28 members of Congress, identifying them as other people who had been arrested for a crime. The false matches included Republicans and Democrats of all ages, but were disproportionately people of color. Beyond members of Congress, Rekognition has even been shown to misclassify Oprah Winfrey.

     

    Amazon is currently selling Rekognition to police departments.

     

    On board with the project is the Center on Privacy & Technology at Georgetown Law, a think tank that researches government use of facial recognition technology and its disparate impact on racial and ethnic minorities. “We study police use of face recognition, and all too often we find that this technology is being used with little or no accountability, oversight, and transparency. In many instances the vendors themselves are the best situated to know who is using automated facial analysis tools, for what purposes—and to anticipate and prevent uses that are harmful or irresponsible,” says Laura Moy, executive director of the Center. “We’re pleased that with this pledge, vendors are publicly recognizing that they have an opportunity—and a responsibility—to do the right thing here.” The Center’s 2016 report, The Perpetual Line-Up, outlines how agencies across the country use the technology and offers policy recommendations, including model state and federal legislation.

     

    Among other civil liberties groups and advocates supporting Buolamwini’s effort are Data4BlackLives, a group of activists, organizers, and mathematicians committed to the mission of using data science to create concrete and measurable change in the lives of Black people; Noel Sharkey, principal spokesperson for the Campaign to Stop Killer Robots, a coalition of NGOs working to ban fully autonomous weapons and thereby retain human control over the use of force; and PolicyLink, a national research and action institute advancing racial and economic equity.

     

    “Research shows facial analysis technology is susceptible to bias and even if accurate can be used in ways that breach civil liberties. Without bans on harmful use cases, regulation, and public oversight, this technology can be readily weaponized, employed in secret government surveillance, and abused in law enforcement,” warns Buolamwini.

     

    Visit www.safefacepledge.org to read more about the project, and www.ajlunited.org to learn more about the Algorithmic Justice League’s research into the social impact of artificial intelligence.

    Effective Date: Dec 11, 2018
    
    www.SafeFacePledge.org is a project of the Algorithmic Justice League (“AJL,” “we,” or “us”). We care about your privacy, and have explained in this privacy policy (the “Policy”) the data we gather from users and how it is used and disclosed on the www.SafeFacePledge.org website and other services offered as part of this platform (the “Services”). Please review this information before you use the Services.
    
     
    Information We Collect
    
    
    The following types of information are collected from users as they use the Services:
     
    
    Voluntarily Disclosed Information. When you use the Services, you will have the opportunity to provide us with some information directly. For example, you have the ability to sign up for the pledge or submit a message through the contact form.
    
    Automatically Collected Information. Whenever you interact with Services, we automatically receive and record information from your browser, which may include your IP address, geolocation data, device identification, the type and configuration of your web browser, the last page you visited before accessing the Services, and the page or feature you requested. You may be able to disable our access to some of this information through your browser settings, but this may prevent you from taking advantage of some of the features of the Services.
    
    The Services may also use “cookies.” “Cookies” are identifiers we transfer to your browser or device that allow us to recognize your browser or device and tell us how and when pages and features in our Services are visited. You may be able to change the preferences on your browser or device to prevent or limit your device’s acceptance of cookies, but this may prevent you from taking advantage of some of the features of the Services.
     
    Use of Information
    
    We may use the information collected about you in order to operate, develop, research, modify, and improve the Services. This may include providing the Services, gathering and publicizing User Content, using any contact information you provide to communicate with you about your use of the Services, and analyzing trends and patterns in our users’ use of the Services.
    
    Disclosure of Information
    
    Some amount of sharing is necessary for AJL to provide these Services, and to comply with the law. The following organizations have access to the information described above:
    
    AJL and Its Contractors. Members of the AJL team, and any independent contractors they retain to work on the Services, may have access to user information as it is reasonably needed for the operation of the Services. 
    
    Third-Party Service Providers. We may employ other companies and individuals to perform tasks on our behalf as part of the operation of these Services. We will share your information with them as is needed for them to provide such assistance. Unless we tell you differently, these third parties do not have any right to use the information we share with them beyond what is necessary to assist us.
     
    Aggregate Information. AJL may share aggregated, non-personally-identifiable data derived from user information for any operational purpose, including for analyzing usage behavior or to help identify new partners, programs, and opportunities.
     
    
    Business Transfers. If AJL is acquired by another entity, goes into dissolution, or otherwise transfers ownership or assets, we may sell or transfer user information as part of that transaction.
     
    Legal Compliance. We may share information if we believe, in our sole discretion, that such disclosure is necessary in order to comply with any court order, subpoena, or other legal process, including an order to respond to any government or regulatory request, or if we believe disclosure is necessary or appropriate to protect the rights, property, or safety of AJL or others.
    
    Changes to this Policy
    
    If we decide to change this Policy, we will post those changes on the www.safefacepledge.org website so users can remain aware of what information we collect, how we use it, and under what circumstances we disclose it. The effective date of the current version will be posted at the top of this Policy. It is your responsibility to check this page periodically for updates.
    
    Contact Us
    
    If you have any questions about this Policy, please contact us at comms@ajlunited.org.