Meta Plans for AI Facial Recognition Smart Glasses: A Threat to the Safety of All Women and Girls
Women and girls could be put at risk of harassment, stalking, and violence by plans to add facial recognition software to smart glasses, charities and safety experts working in domestic abuse have warned. The warning follows reports in The New York Times that Meta, which owns some of the world’s biggest social media platforms, including Facebook, WhatsApp, and Instagram, is developing facial recognition software for its smart glasses. According to sources close to the project, the software is still in development and could be released later this year, although this has not been confirmed.
The technology could be used by abusers to track people in public places without their knowledge or consent, according to the charities.
The software would enable users to recognise a person by looking at them through the smart glasses and matching their face to a public profile or account linked to Meta’s social media platforms. A “name tag” system has been proposed, which would match a face to publicly available information, rather than carrying out unrestricted searches.
However, even with restrictions, the potential dangers are still a concern for charities.
Meta’s “Name Tag” and the Death of Anonymity: A New Era of Tech-Facilitated Abuse
Organisations that work with survivors of domestic violence, such as Refuge and Women’s Aid, warn that instant identification could be exploited by perpetrators. Both say technology already features in many abuse cases, and that wearable devices give abusers new ways to intimidate victims.
Emma Pickering, who leads the tech-enabled abuse and economic empowerment team at Refuge, says stalking remains a common tactic among abusers. She believes instant identification technology could remove the obstacles that currently protect victims who rely on anonymity in public spaces.
“An identification function could put victims at risk of harm by making it easier for abusers to find and track them,” said Emma Pickering. She further believes that the technology could have an impact on women and girls in general by providing strangers with access to their personal details without their consent.
However, charities say the risk extends beyond reported cases of abuse. They argue the technology could fundamentally change how safe women feel in public settings such as buses, shopping centres, or social gatherings, because strangers could be identified at a glance and privacy in public would be eroded.
Rising Concerns Over Smart Glasses and Tech-Facilitated Abuse
There have also been reports of women being recorded without their consent by people wearing smart glasses. Because the devices allow discreet recording, concerns have been raised about harassment, online stalking, and deepfake abuse. Once images are posted online, the subjects have no control over how they are used.
Isabelle Younane, the external affairs director at Women’s Aid, said that technology firms need to make safety a fundamental design principle, not an add-on. She said that developers need to consider how new technologies could be abused before they are released to the public.
“Safety by design must be at the heart of innovation,” said Isabelle Younane. She said that many women already experience technology-facilitated abuse through tracking apps, shared accounts, and smart devices.
Refuge data illustrates the extent of the problem. The organisation saw referrals related to tech-enabled abuse rise by over 60 per cent in 2025 compared with the previous year. Support teams are now handling hundreds of cases in which abusers use technology to track movements, control finances, or maintain contact.
Wearable technology is increasingly being used in these cases. Smartwatches, fitness trackers, and smart rings can expose location data or notifications that abusers exploit. There is concern that smart glasses could give abusers yet another way to identify and locate victims.
Meta Navigates the Balancing Act of Facial Recognition
Meta has confirmed that it has not yet finalised any plans. A spokesperson for the company said, “We continue to consider options and will take a thoughtful approach before shipping new features. Our goal remains to build products that help people connect and communicate.”
Public discourse on facial recognition technology has gained momentum in recent years. Proponents believe it can help users remember their contacts, improve accessibility, or simply smooth social interactions. Opponents argue that facial recognition poses serious privacy risks and can enable tracking when safeguards fail.
The issue also brings up broader questions about consent. In most public settings, people feel that they can be seen but not identified immediately through digital records. However, activists believe that facial recognition technology upsets this dynamic by making public interactions searchable.
Technology observers believe that regulation tends to lag behind innovation. Devices hit the stores before lawmakers and safety organisations understand the social implications of the technology. Activists are now demanding that there be more defined guidelines on biometric data, better opt-in options, and restrictions on how personal data is linked to facial recognition technology.
For survivors of abuse, the issue is personal rather than abstract. Many rebuild their lives through changes in routine, relocation, or anonymity. Technologies that strip away anonymity could undermine those safeguards.
Balancing Progress and Protection in the Age of AI Wearables
Photos of Meta chief executive Mark Zuckerberg wearing the firm’s smart glasses have underlined how quickly wearable tech has moved from experimental novelty to everyday product.
Charities are not calling for innovation to be halted, but for it to be slowed and its risks weighed carefully. They argue that technology shapes social behaviour, and that design can either protect or endanger users.
As firms continue to innovate with AI-enabled devices, the debate about safety, privacy, and accountability is only going to intensify. For many campaigners, the key question remains straightforward: how can new technology increase freedom without putting the most vulnerable at risk?