A face recognition camera is seen at the customs entry at Orlando International Airport Thursday, June 21, 2018, in Orlando, Florida. Credit: John Raoux | AP

When deployed as a tool to unlock your phone, facial recognition may be a convenience. When used by a company to tag you in photos, the technology may raise questions of privacy, consent and data security. But when deployed as a surveillance tool, facial recognition upends some of our most basic assumptions about how the police interact with the public.

“If we move too fast with facial recognition, we may find that people’s fundamental rights are being broken,” Microsoft President Brad Smith wrote in a recent blog post, calling for transparency, regulation and corporate responsibility with this technology.

He might actually be understating the issue.

Imagine attending a public gathering — a political rally, an immigration-policy protest, an anti-abortion march — where police officers walk through the crowd demanding that each attendee show identification. You would be justified both in your outrage at this intrusion and in refusing to comply. In this country, a police officer must suspect you of committing a crime before stopping you on the street and requiring an answer to the question: “Who are you?”

Face-scanning surveillance does away with this protection. The technology enables a world where every man, woman and child passing by a camera is scanned, with no prior suspicion of wrongdoing, and compared against the profiles of criminals and other people wanted by the police. It enables a world where people can be identified and tracked from camera to camera throughout a city, simply because they chose to get a driver’s license.

In China, face-scanning surveillance is deployed by the government to do exactly this. Cameras scan and check the faces of passersby against a national database of names, ages and ethnicities. The system can inform authorities about everywhere you have been over the past few days, and everyone you may have met.

That’s China. But it is not idle speculation to think about what a future with this technology might look like in the United States. Amazon, together with the Orlando (Florida) Police Department, is already piloting a face-scanning surveillance program using live video cameras. (Amazon’s founder and chief executive, Jeffrey P. Bezos, owns The Post.) Axon, formerly known as Taser and the largest current supplier of body cameras to law-enforcement agencies in the country, recently filed a patent to incorporate face-scanning surveillance into its hardware. Most major companies that sell other facial-recognition systems to law enforcement advertise tools for conducting face-scanning surveillance, as well.

And what happens if a system like this gets it wrong? A mistake by a video-based surveillance system may mean an innocent person is followed, investigated, and maybe even arrested and charged for a crime he or she didn’t commit. A mistake by a face-scanning surveillance system on a body camera could be lethal. An officer, alerted to a potential threat to public safety or to himself, must, in an instant, decide whether to draw his weapon. A false alert places an innocent person in those crosshairs.

Facial-recognition technology advances by the day, but problems with accuracy and misidentifications persist, especially when the systems must contend with poor-quality images — such as from surveillance cameras.

South Wales police officials have tested face-scanning surveillance at more than a dozen public events. At most of them, the false “matches” the system flagged (innocent attendees mistaken for persons of interest) far outnumbered the suspects it correctly identified. At one event, the system sent police nearly 2,500 alerts of a possible criminal match, and more than nine in every ten were triggered by an innocent person’s face.

There are circumstances in which face-scanning surveillance may be necessary. Public emergencies unfortunately do occur, during which officers must do what is within their power to find someone posing a threat to others. But this step should only be taken in true emergencies, where the cost of treating every person as a suspect is clearly outweighed by the emergency at hand.

We have the right to an expectation of privacy. We have the right not to be investigated unless we’re suspected of wrongdoing. We should be able to expect that the tools used by law enforcement will not mistakenly identify us as criminal suspects. Face-scanning surveillance risks upending these expectations, so let’s hope legislators are listening to the growing chorus in favor of regulating the technology before it fundamentally changes the role of police in our society.

Clare Garvie, special to The Washington Post, is an associate with Georgetown Law’s Center on Privacy and Technology. Jennifer Rubin is on vacation.
