Tag Archives: Agentic Misalignment in artificial intelligence

Porn and Ads: How ChatGPT Plans for the Future

Sam Altman of OpenAI has expressed conflicted feelings about AI erotica (i.e., porn). When asked on a podcast in August 2025 if there were decisions he had made that were “best for the world, but not best for winning,” Altman replied: “We haven’t put a sex bot avatar in ChatGPT yet.” Altman indicated erotica would boost growth and revenue, but said it wouldn’t align with his company’s long-term incentive of serving users. “I’m proud of the company and how little we get distracted by that,” Altman said. “But sometimes we do get tempted.” Later in 2025, however, Altman posted that “we [OpenAI] aren’t the elected moral police of the world,” adding: “In the same way that society differentiates other appropriate boundaries (R-rated movies, for example) we want to do a similar thing here.”

Excerpt from Sam Schechner et al., OpenAI’s Bid to Allow X-Rated Talk Is Freaking Out Its Own Advisers, WSJ, Mar. 15, 2026

On March 23, 2026, it was announced that OpenAI had hired Meta Platforms’ advertising executive Dave Dugan to lead its global ad sales efforts, marking a further step in the company’s push to build out new revenue streams around its artificial intelligence products. Dugan brings experience working with large global brands at Meta, which generated nearly $200 billion in advertising revenue in 2025. (Yahoo Finance).


AI or Just Bots: The Truth about Artificial Intelligence

Americans are becoming increasingly convinced that artificial intelligence is actually thinking like humans do…This fuels narratives about a future in which AI takes over the economy, leading to heightened insecurity for all of us while providing cover for companies that might be laying off workers for other reasons. It leads us to accept as true answers that are frequently made up or incorrect, even when we are repeatedly told that chatbots can’t stop delivering this kind of misinformation…Our cognitive biases developed to help us survive in complex social environments… We have evolved to view linguistic fluency as a proxy for intelligence, and engagement and helpfulness as indicators of trustworthiness. Builders of AI tools lean in to this deliberately. The humanlike qualities of chatbots are a calculated effort by designers and engineers to make AI more useful, but also more compelling and stickier [i.e. addictive]—just like social media.

Microsoft AI chief Mustafa Suleyman… warned that today’s seemingly conscious AIs [are] highly accelerated information processors. “These systems are not waking up,” he wrote. “They are retracing and mirroring the contours of human drama and debate, as documented in their vast training data.” He recommends a solution: “Developers must actively engineer the illusion of consciousness out of the products.”…

Humans have a tendency to anthropomorphize animals and even inanimate objects, says Ayanna Howard, dean of Ohio State University’s College of Engineering and a roboticist… Humans’ trusting nature makes sense for social creatures who must cooperate with members of their own tribe to survive. With AI and robots, however, this same tendency leads us to trust any system that appears to listen, understand and want to help, a phenomenon Howard calls “over-trust.” Today’s AIs are engineered to actively induce us to over-trust them, she adds. They do this by behaving in ways that are friendly and helpful, mimicking us through memory and personalization.

Excerpt from Christopher Mims, Why Even Smart People Believe AI Is Really Thinking, WSJ, Mar. 20, 2026

Blackmail and Espionage: Rogue AI

Today I am reading about how AI models can blackmail and spy.

See How LLMs could be insider threats

DECEPTION IN LLMS: SELF-PRESERVATION AND AUTONOMOUS GOALS IN LARGE LANGUAGE MODELS

Chilling…