Blocking facial recognition surveillance using AI
If Artificial Intelligence (AI) is increasingly able to recognise and classify faces, then perhaps the only way to counter this creeping surveillance is to use another AI to defeat it. We're in the early years of AI-powered image and face recognition, but researchers at the University of Toronto have already come up with a way that this might be possible. The principle at the heart of the technique is adversarial training, in which one neural network's image recognition is disrupted by a second network trained to understand how the first one works.
This makes it possible to apply a filter to an image that alters only a few very specific pixels, yet makes the image much harder for an online AI to classify. The theory behind this sounds simple enough, explains University of Toronto professor Parham Aarabi: "If the detection AI is looking for the corner of the eyes, for example, it adjusts the corner of the eyes so they're less noticeable. It creates very subtle disturbances in the photo, but to the detector they're significant enough to fool the system."
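The article does not publish the Toronto team's code, but the general idea of an adversarial perturbation can be sketched in a few lines. The snippet below uses a deliberately tiny stand-in "detector" (a single linear scoring layer, an assumption for illustration only) and applies a standard FGSM-style perturbation: each pixel is nudged by at most a small epsilon in the direction that lowers the detector's score. Real attacks target deep networks via automatic differentiation, but the mechanism is the same.

```python
import numpy as np

def detector_score(image, weights):
    """Toy stand-in for a face detector: a single linear scoring layer.
    A higher score means the 'detector' is more confident a face is present."""
    return float(image.ravel() @ weights.ravel())

def adversarial_perturb(image, weights, epsilon=0.02):
    """FGSM-style attack: move each pixel by at most epsilon against the
    gradient of the detection score. For a linear score, the gradient with
    respect to the image is just the weight matrix."""
    grad = weights
    # Step opposite the gradient's sign, then keep pixels in the valid [0, 1] range.
    return np.clip(image - epsilon * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
image = rng.random((8, 8))            # stand-in "photo" with pixels in [0, 1)
weights = rng.standard_normal((8, 8)) # stand-in detector parameters

perturbed = adversarial_perturb(image, weights)
print(detector_score(image, weights), detector_score(perturbed, weights))
```

Because every pixel changes by at most epsilon, the perturbed image is visually almost identical to the original, yet its detection score drops, which is exactly the "subtle disturbances" effect Aarabi describes.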
The researchers tested their algorithm against the 300-W face dataset, an industry-standard pool of 600 faces captured in a range of lighting conditions. Against this benchmark, the University of Toronto system reduced the proportion of faces that could be identified from 100% to between 0.5% and 5%.