A pair of US Senators from across the aisle on Thursday introduced a bill to limit how facial recognition technology can be used.
Roy Blunt (R-MO) and Brian Schatz (D-HI) have proposed the Commercial Facial Recognition Privacy Act of 2019 as a way to provide people with some measure of privacy protection from face-scanning tech.
"Consumers are increasingly concerned about how their data is being collected and used, including data collected through facial recognition technology," said Senator Blunt in a statement. "That’s why we need guardrails to ensure that, as this technology continues to develop, it is implemented responsibly."
The bill requires disclosure when facial recognition technology is used, along with information about the system's capabilities. It prohibits using the technology for unlawful discrimination, using the data for purposes that haven't been disclosed, and sharing the data without affirmative consent.
The controller of the data, and any organization processing it, must provide human review of the software's findings whenever the system's decisions could result in financial or material harm to the scanned individual, or might be unexpected or offensive to that person.
Facial recognition systems covered by the proposed rules must have an API that allows at least one third party to conduct tests for accuracy and bias. The rules allow exceptions for systems created for journalistic purposes to identify public figures, for spotting unlawfully copied material from theatrically released films, for personal media management applications, and in emergency situations.
These requirements may change if and when the bill progresses through the legislative process.
Creepy or cool?
The bill arrives after years of hand-wringing and debate about facial surveillance, which has become much easier to deploy since Amazon introduced its Rekognition service in November 2016.
Interest in the technology picked up in 2001 following the 9/11 terror attacks. Facebook kicked off the era of facial recognition at scale in 2010 with its Tag Suggestions service, which uses face-based matching to encourage users to tag recognized friends in pictures.
Back in 2012, the FBI co-authored a report that found facial recognition systems were 5 per cent to 10 per cent less accurate on African Americans compared to Caucasians. Since then, advocacy groups have pushed for greater controls, but law enforcement agencies have deployed the technology anyway. In 2016, a Government Accountability Office report found that the FBI had very little data on the accuracy of its facial recognition systems and didn't keep track of error rates.
The availability of facial recognition as a cloud service has made the debate more pressing. Last year, the ACLU challenged Amazon for pitching Rekognition to law enforcement, arguing that automated mass surveillance threatens freedom.
China has deployed the technology widely, and it's hard to find technology providers uninterested in selling such systems.
In December last year, Microsoft, which offers its own Face API through the Azure platform and has weathered criticism about providing AI tech to US Immigration and Customs Enforcement, weighed in with a post calling for laws to regulate the technology. A month later, more than 85 advocacy, religious and rights groups urged Amazon, Google, and Microsoft to stop selling the technology to the government. That didn't happen, at least to the extent desired.
As calls for regulation have become more difficult to ignore, AWS VP of global public policy Michael Punke last month defended the technology, noting "we have not received a single report of misuse by law enforcement." His ostensible goal was to provide guidance for policymakers, to prevent legislation from stifling the nascent facial surveillance business.
Punke said the technology should not be banned and public dialogue about facial recognition should continue.
With the introduction of the new bill, Microsoft at least is ready to move beyond talk. "Facial recognition technology creates many new benefits for society and should continue to be developed," said Brad Smith, Microsoft president and chief legal officer, in a statement.
"Its use, however, needs to be regulated to protect against acts of bias and discrimination, preserve consumer privacy, and uphold our basic democratic freedoms." ®