Tag Archives: Anthropic AI

AI or Just Bots: the Truth about Artificial Intelligence

Americans are becoming increasingly convinced that artificial intelligence is actually thinking like humans do…This fuels narratives about a future in which AI takes over the economy, leading to heightened insecurity for all of us while providing cover for companies that might be laying off workers for other reasons. It leads us to accept as true answers that are frequently made up or incorrect, even when we are repeatedly told that chatbots can’t stop delivering this kind of misinformation…Our cognitive biases developed to help us survive in complex social environments… We have evolved to view linguistic fluency as a proxy for intelligence, and engagement and helpfulness as indicators of trustworthiness. Builders of AI tools lean in to this deliberately. The humanlike qualities of chatbots are a calculated effort by designers and engineers to make AI more useful, but also more compelling and stickier [i.e. addictive]—just like social media.

Microsoft AI chief Mustafa Suleyman… warned that today’s seemingly conscious AIs [consist of a bunch of] highly accelerated information processors. “These systems are not waking up,” he wrote. “They are retracing and mirroring the contours of human drama and debate, as documented in their vast training data.” He recommends a solution: “Developers must actively engineer the illusion of consciousness out of the products.”…

Humans have a tendency to anthropomorphize animals and even inanimate objects, says Ayanna Howard, dean of Ohio State University’s College of Engineering and a roboticist….Humans’ trusting nature makes sense for social creatures who must cooperate with members of their own tribe to survive. With AI and robots, however, this same tendency leads us to trust any system that appears to listen, understand and want to help, a phenomenon Howard calls “over-trust.” Today’s AIs are engineered to actively induce us to over-trust them, she adds. They do this by behaving in ways that are friendly and helpful, mimicking us through memory and personalization.

Excerpt from Christopher Mims, Why Even Smart People Believe AI Is Really Thinking, WSJ, Mar. 20, 2026

The Price of Political Obedience: the Yes Men are not Revolting Yet

Co-founder Dario Amodei has made safety and social responsibility central to Anthropic’s approach to AI. Usage restrictions governing its contract with the Pentagon stipulate that its AI cannot be used for domestic mass surveillance or fully autonomous weapons. The Pentagon, which objects to outside limits on what its troops can do, wants unrestricted access for all lawful purposes… If Anthropic were too inflexible, the Pentagon could have simply terminated the contract. But Defense Secretary Pete Hegseth went further, declaring on X that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” Such a declaration, according to commentators, amounted to “corporate murder.” The “message sent to every investor and corporation in America: do business on our terms, or we will end your business.”

Excerpt from Greg Ip, Anthropic’s Pentagon Battle Matters to Every Business, WSJ, Mar. 13, 2026

If You Play with Fire, You’ll Get Burnt: Lessons from Anthropic

Anthropic’s artificial-intelligence tool Claude was used in the U.S. military’s operation to capture former Venezuelan President Nicolás Maduro on January 3, 2026, highlighting how AI models are gaining traction in the Pentagon. The mission to capture Maduro and his wife included bombing several sites in Caracas in January 2026. Anthropic’s usage guidelines prohibit Claude from being used to facilitate violence, develop weapons or conduct surveillance. The deployment of Claude occurred through Anthropic’s partnership with data company Palantir, whose tools are commonly used by the Defense Department and federal law enforcement.

Excerpt from Pentagon Used Anthropic’s Claude in Maduro Venezuela Raid, WSJ, Feb. 15, 2026

See also Trump orders government to stop using Anthropic in battle over AI use, WSJ, Feb. 27, 2026

Can AI Do That? Knowledge Impossible to Copy

Zuckerberg hasn’t had much success in his efforts to hire the field’s biggest stars, including OpenAI’s co-founder Ilya Sutskever and its chief research officer, Mark Chen. Many candidates are happy to take a meeting at Zuckerberg’s homes in Palo Alto and Lake Tahoe. In private, they trade gossip and calculate Meta’s chances of winning the AI race.

The handful of researchers who are smartest about AI have built up what one described as “tribal knowledge” that is almost impossible to replicate. Rival researchers have lived in the same group houses in San Francisco, where they discuss papers that might provide clues for achieving the next great breakthrough. 

Excerpt from Ben Coen et al., It’s Known as ‘The List’—and It’s a Secret File of AI Geniuses, WSJ, June 27, 2025