What Stealing Looks Like: Mercor

Training artificial-intelligence models demands massive amounts of fresh data. Mercor, a $10 billion startup whose clients have included OpenAI, Anthropic and Meta, has been hit with at least seven class-action lawsuits following a third-party data breach. The breach allegedly exposed Mercor contractor information ranging from recorded job interviews to facial biometric data and screenshots of workers’ computers. A class-action suit filed on April 21, 2026 in Northern California (Ananthula versus Mercor.io) alleged that Mercor accumulated applicant-vetting data, including background checks, which it shared with partners, in breach of federal regulations.

According to plaintiffs, the company’s practices include monitoring its contractors’ computers and sharing that data with clients, using recorded candidate interviews to train AI models, and training client models on materials potentially owned by other companies…Previously, The Wall Street Journal reported that Mercor sought to buy prior work materials from people on LinkedIn: Those people said they didn’t own the rights to such work. Mercor has been offering to pay $100 each for contractors’ personal-finance documents, such as spreadsheets and PowerPoint presentations, according to postings online. The company has offered $100 for people’s Google Maps histories… As workers’ screenshots are alleged to be included in the breached data, contractors are suing Mercor not only for exposing their own personal information but also the information of their other employers…

Mercor hired 30,000 contractors in 2025. Its competitors include Handshake AI, Micro1 and Surge. Recently, LinkedIn started testing its own AI training marketplace; the testing was earlier reported by Business Insider. Handshake co-founder Garrett Lord recently posted to LinkedIn that his company was looking to purchase codebases, internal databases and more. “We anonymize everything,” he wrote. “The stuff that’s not on the internet is what we need.” …The way [AI companies use contractors] makes responsibility for data provenance more ambiguous…“There’s an incentive right now to figure out the rules and regulations after, and to capture as much of the market in the short term first.”

Thitipun Srinarmwong, a plaintiff in the class-action suit filed on April 21, 2026, alleged that project managers and reviewers at Mercor encouraged workers to use real data from their firms, so long as the source was redacted or slightly changed. When Srinarmwong wrote in a way that protected confidential information, Mercor reviewers criticized the work as too short and vague, the suit said. David Bevvino-Berv, a Mercor contractor who previously worked at Goldman Sachs, alleged in the same suit that he saw financial models and prompts that he suspected came from workers sharing proprietary information from other companies…Bevvino-Berv also alleged that the Insightful software he was required to use as a Mercor contractor captured usage of his bank account, health-insurance portals and around 240 other applications. The suit further alleged that Bevvino-Berv wasn’t “clearly informed” that Insightful would capture anything beyond his Mercor-related work.

Excerpts from Katherine Bindley, Workers Sue $10 Billion AI Startup for Collecting and Exposing Personal Data, WSJ, Apr. 22, 2026

In Your Bedroom and In Your Bathroom: Meta’s Glasses

The Meta glasses—chunky frames embedded with cameras and microphones—are how Zuckerberg imagines AI will be democratized for personal users. Eventually, he wants to offer something akin to god-like superintelligence on demand. The promise of AI is that it will become more and more useful because such devices allow it to see and hear your daily life, gobbling up that information, processing it and using it to inform you about your life. But at what cost to privacy?

In March 2026, Meta was named in a lawsuit that seeks class-action status over concerns that data is being gathered from those glasses in ways that violate users’ privacy. The lawsuit, citing whistleblower complaints, alleges that video captured on Meta’s devices is being routed to contractors in Africa, who manually view and label the data to train Meta’s AI models. Among the videos in question? “People changing clothes, using the bathroom, engaging in sexual activity, handling financial information, and conducting other private activities inside their homes that no reasonable consumer would ever expect a stranger to watch,” the lawsuit said.

Excerpt from Tim Higgins, The Backlash Against AI Devices That Are Always Watching, WSJ, Mar. 14, 2026

AI or Just Bots: The Truth About Artificial Intelligence

Americans are becoming increasingly convinced that artificial intelligence is actually thinking the way humans do…This fuels narratives about a future in which AI takes over the economy, heightening insecurity for all of us while providing cover for companies that might be laying off workers for other reasons. It leads us to accept as true answers that are frequently made up or incorrect, even when we are repeatedly told that chatbots can’t stop delivering this kind of misinformation…Our cognitive biases developed to help us survive in complex social environments…We have evolved to view linguistic fluency as a proxy for intelligence, and engagement and helpfulness as indicators of trustworthiness. Builders of AI tools lean into this deliberately. The humanlike qualities of chatbots are a calculated effort by designers and engineers to make AI more useful, but also more compelling and stickier [i.e., addictive]—just like social media.

Microsoft AI chief Mustafa Suleyman…warned that today’s seemingly conscious AIs [consist of] highly accelerated information processors. “These systems are not waking up,” he wrote. “They are retracing and mirroring the contours of human drama and debate, as documented in their vast training data.” He recommends a solution: “Developers must actively engineer the illusion of consciousness out of the products.”…

Humans have a tendency to anthropomorphize animals and even inanimate objects, says Ayanna Howard, dean of Ohio State University’s College of Engineering and a roboticist…Humans’ trusting nature makes sense for social creatures who must cooperate with members of their own tribe to survive. With AI and robots, however, this same tendency leads us to trust any system that appears to listen, understand and want to help, a phenomenon Howard calls “over-trust.” Today’s AIs are engineered to actively induce us to over-trust them, she adds. They do this by behaving in ways that are friendly and helpful, mimicking us through memory and personalization.

Excerpt from Christopher Mims, Why Even Smart People Believe AI Is Really Thinking, WSJ, Mar. 20, 2026

If You Play with Fire, You’ll Get Burnt: Lessons from Anthropic

Anthropic’s artificial-intelligence tool Claude was used in the U.S. military’s operation to capture former Venezuelan President Nicolás Maduro on January 3, 2026, highlighting how AI models are gaining traction in the Pentagon. The mission to capture Maduro and his wife included bombing several sites in Caracas. Anthropic’s usage guidelines prohibit Claude from being used to facilitate violence, develop weapons or conduct surveillance. The deployment of Claude occurred through Anthropic’s partnership with data company Palantir, whose tools are commonly used by the Defense Department and federal law enforcement.

Excerpt from Pentagon Used Anthropic’s Claude in Maduro Venezuela Raid, WSJ, Feb. 15, 2026

See also Trump Orders Government to Stop Using Anthropic in Battle Over AI Use, WSJ, Feb. 27, 2026