Meta's Smart Glasses Face Federal Class Action Lawsuit: Users Sue After Privacy Breach, What Do Lawyers Think?

動區BlockTempo

In March 2026, two plaintiffs filed a federal class action lawsuit against Meta and Luxottica, alleging that the AI features of Ray-Ban smart glasses recorded video snippets that were sent to human annotators in Kenya for review, contradicting Meta's own public promise that the glasses are "designed for privacy." One lawyer was blunt: current law is simply not sufficient to address what these companies are doing.
(Backgrounder: Meta smart glasses were exposed for sending users’ private footage—including people showering, sexual activity, credit card numbers… everything—to Kenyan employees to train AI)
(Additional background: A former Meta executive has alleged that Zuckerberg committed crimes while appeasing the Chinese Communist Party, secretly building speech-moderation tools for "Facebook Taiwan and Hong Kong" and betraying Facebook users' privacy)

Table of contents


  • Who sues whom, and what are they suing about?
  • Legal responsibility is murky, but public-relations responsibility is clear
  • The law protects the person wearing the glasses—not the person being recorded
  • After facial recognition is added, the situation escalates to “zero-second identification”

The complaint's central weapon is Meta's own marketing copy: "Designed for privacy, built to be controlled by you." When Meta marketed the Ray-Ban smart glasses in 2023, it printed that line on promotional materials. Three years later, the same line is quoted in a federal class-action complaint.

Who sues whom, and what are they suing about?

Much of this month's reporting traces back to an investigation jointly published by Sweden's Dagens Nyheter and Göteborgs-Posten. It found that Meta's smart glasses are sending users' entire private lives to Kenya: the footage annotators see includes bathroom scenes, clips of sexual activity, and credit card numbers and financial documents.

Soon after, plaintiffs Gina Bartone and Mateo Canu filed a federal class action lawsuit against Meta Platforms and the eyewear manufacturer Luxottica of America. The complaint's core allegation: when users enable the AI features, recorded video snippets are not processed solely by AI models, as the ads claim; instead, they are transmitted for review to human annotators working for Sama, a contractor in Kenya. At no point in the process, the complaint alleges, did Meta clearly inform users of this.

The complaint alleges that the two companies violated federal and multi-state privacy laws.

Legal responsibility is murky, but public-relations responsibility is clear

In an interview with Fortune, privacy and AI lawyer Brian Hall (of the law firm Stubbs Alderton & Markiles) opened with a blunt assessment: "That's horrifying. That's exactly what all of us would imagine would happen."

But he immediately pointed out a practical challenge facing the lawsuit: Meta's terms of service already explicitly state that interaction content may be "reviewed" automatically or manually, including by data annotators. In other words, once users click to agree to the terms, they have, at least in theory, authorized this workflow.

So legal responsibility falls into a gray zone. But Hall doesn’t think that means Meta can walk away: “This is a public-relations issue. This is the most sensitive information and images.” Having the terms buried in the fine print doesn’t equate to informed consent; whether users were adequately informed is the core dispute the lawsuit is targeting.

The law protects the person wearing the glasses—not the person being recorded

The lawsuit also reveals a more fundamental gap in the system. What smart glasses record is often not just the life of the person wearing them: it’s the faces and actions of everyone around them, who are unaware and have not consented.

Hall lays the problem bare: “Tragically, our privacy laws are not designed to protect bystanders. They’re designed to protect the ability of people wearing glasses to manage their own data.”

Under the current legal framework, there are almost no avenues for the person who’s being recorded to seek relief.

After facial recognition is added, the situation escalates to “zero-second identification”

Hall also raised a hypothetical that would multiply the stakes of this debate: if Meta added real-time facial recognition to the Ray-Ban glasses, the existing privacy vulnerabilities would escalate from embarrassing to dangerous.

"Recognizing someone used to mean looking them up on Facebook or Instagram. Now it's zero seconds, automatic, at zero cost," he said. "You could be sitting in a courtroom and identify the witness in real time."

This isn't a sci-fi scenario. Meta's AI assistant can already identify objects and scenes in front of the camera; facial recognition is only a step away. Hall's closing takeaway: current law is simply not sufficient to deal with what Meta and other social-media companies are doing.
