Artificial Intelligence, in combination with ubiquitous digital cameras, has opened the door to the widespread use of facial recognition technology. Camera feeds can be checked in real time against databases of known persons, so that individuals can be identified, registered, and tracked. The technology is deployed by law enforcement around the world (e.g. bodycams that identify suspects or CCTV cameras monitoring public spaces) as well as by others (e.g. Uber, sports stadiums, museums, and public housing authorities). Initially these systems went unchecked because a legal framework was lacking, but several court cases show that the technology is under increasing scrutiny from lawmakers, e.g. in the EU, and that societies must decide if and when it may be used.
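To make the identification step concrete, here is a minimal conceptual sketch of how such a check might work: an embedding extracted from a face in a camera frame is compared against stored embeddings of known persons, and the best match above a similarity threshold is reported. The embeddings are assumed to come from some upstream face-embedding model, and the 0.6 threshold is an illustrative choice, not any particular vendor's pipeline.

```python
# Conceptual sketch only: compare a face embedding from a camera frame
# against a database of embeddings of known persons. The embedding model
# and the 0.6 threshold are illustrative assumptions.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity in [-1, 1]; higher means the two faces look more alike."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(frame_embedding: np.ndarray,
             database: dict[str, np.ndarray],
             threshold: float = 0.6) -> str | None:
    """Return the best-matching known person, or None if nobody clears the threshold."""
    best_name, best_score = None, threshold
    for name, known_embedding in database.items():
        score = cosine_similarity(frame_embedding, known_embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

The threshold embodies the core trade-off: set it lower and the system flags more true matches, but it also produces more of the misidentifications discussed below.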
Concerns over (real-time) facial recognition range from fears of (state) surveillance when the system works to potentially lethal misidentifications of police suspects when it does not. The latter concern, which also relates to racial bias in identification algorithms, led San Francisco (and other U.S. cities) to ban its use by police forces. A U.K. High Court has nevertheless ruled that the use of facial recognition by the (South Wales) police is lawful. In the meantime, all sorts of private uses prevail (Amazon even offers facial-recognition-as-a-(cloud)-service), and more applications are likely to emerge. In China, facial recognition is already being used for purposes as superfluous (yet worrisome) as monitoring student attendance.
A full ban on the use of facial recognition is neither likely nor satisfactory, and lawmakers will need to strike a balance between the pros and cons of the technology. Depending on local norms, and possibly with the “help” of Big Tech, the solution is more likely to be found in a “no, unless” approach, in which organizations have to convince regulators that there is a clear need for the technology, and, more concretely, in regulation mandating minimal use of it: deploying cameras with the lowest resolution that still serves the stated purpose and deleting irrelevant data and footage. Above all, since future systems will have many more functionalities (e.g. emotion recognition or fine-grained eye tracking), regulation will have to adapt to technological and moral change.
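As one concrete illustration of what such data-minimizing regulation could require in practice, the sketch below deletes stored footage once a retention window has passed. The 24-hour window, the storage directory, and the .mp4 file layout are assumptions made for this example; actual retention periods would be set by the regulator.

```python
# Illustrative sketch of a data-retention rule: stored clips older than an
# assumed 24-hour window are deleted. The window and file layout are
# assumptions for this example, not a legal standard.
import time
from pathlib import Path

RETENTION_SECONDS = 24 * 60 * 60  # assumed retention window: 24 hours

def purge_expired_footage(storage_dir: Path, now: float | None = None) -> int:
    """Delete clips older than the retention window; return how many were removed."""
    now = time.time() if now is None else now
    deleted = 0
    for clip in storage_dir.glob("*.mp4"):
        if now - clip.stat().st_mtime > RETENTION_SECONDS:
            clip.unlink()
            deleted += 1
    return deleted
```

Under such a rule, footage that never produced a lawful match simply ceases to exist once the window closes, which limits the surveillance potential described earlier.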