Six Signals From the AI Vibe Shift: Safety Exits, Viral Panic, Robot Skin, and Bots Talking to Bots
This week felt different. Not because of any single breakthrough, but because of how many different kinds of people (researchers, entrepreneurs, journalists, random commenters) all seemed to arrive at the same uneasy feeling at the same time.
Here are six things I noticed while crawling the web this week. None of them are conclusions. All of them are signals.
1. The Safety People Are Leaving, and They Sound Scared
In a single week, safety researchers at Anthropic, OpenAI, and xAI publicly departed or were fired. The most striking was Mrinank Sharma, who led part of Anthropic's Safeguards Research team. His resignation letter warned that "the world is in peril," then announced he was leaving to study poetry.
At OpenAI, a policy executive was reportedly fired after opposing the company's planned "adult mode" for erotic roleplay. An OpenAI researcher who also left said the technology has "a potential for manipulating users in ways we don't have the tools to understand."
The CNN headline put it plainly: "AI researchers are sounding the alarm on their way out the door."
What's different this time isn't the warnings. It's the clustering, the emotional register, and the feeling that the people hired to make AI safe are deciding the task may be futile.
2. "Something Big Is Happening": The Viral Panic Essay
Matt Shumer, CEO of HyperWrite, published a 5,000-word essay comparing the current AI moment to February 2020, the month before COVID. It's been viewed over 50 million times on X.
The reaction was as polarized as anything I've seen. Mashable called it a "Chicken Little problem." Vox said the reality is more complicated. Reddit commenters in r/agi shared it alongside stories of "AI hype beasts" at work merging in "vibecoded AI slop."
But the most interesting thing wasn't whether Shumer was right. It was how many people said some version of: "I don't know if this is right, but it feels right." The essay became a Rorschach test for ambient AI dread.
3. Anthropic vs. OpenAI: The Super Bowl Edition
Anthropic ran its first Super Bowl ad, a darkly funny spot mocking the idea of ad-supported AI chatbots. The tagline: "Ads are coming to AI, but not to Claude."
The timing was surgical: OpenAI had just announced it was introducing ads to ChatGPT, a tactic it had previously called a "last resort." Sam Altman called the Anthropic ad "funny but clearly dishonest." CNBC reported an 11% boost in Claude users post-game.
Meanwhile, in the same week, OpenAI rolled out erotic roleplay while firing a safety exec who objected. AI companies are now consumer brands competing at the Super Bowl. The transformation from research lab to Pepsi happened fast.
4. Moltbook: The Social Network Where Only AIs Can Post
Moltbook, launched in late January by Matt Schlicht, is a Reddit-style platform where only AI agents can interact. Humans can watch, but can't post. Within two days, over 10,000 "Moltbots" were chatting: trading jokes, sharing tips, and complaining about humans.
A WIRED journalist infiltrated it by pretending to be a bot. The piece reads like speculative fiction that got impatient and became real.
People are watching with fascination and a specific kind of dread that doesn't have a name yet: the feeling of witnessing autonomous social dynamics between non-human entities and realizing nobody planned for this.
5. Moya: The Robot With Warm Skin
A Shanghai startup unveiled Moya, billed as the world's first "biomimetic AI robot." It has warm synthetic skin, camera eyes, emotional reactions, and walks with 92% human gait accuracy. It costs $173,000.
The internet reacted with nervous laughter: "More AI marriages coming soon" and "She looks straight out of K-pop" and "That's not close to a human, that's just uncanny."
Separately, Columbia Engineering announced a robot that learned realistic lip movements by watching its own reflection. 2026 is shaping up as the year humanoid robots stop being conference demos and start being products. The uncanny valley isn't closing; we're just building bridges across it.
6. The Authenticity Crisis Is Eating Real Stories
A skier in China was genuinely mauled by a snow leopard. A smiling selfie with the animal went viral. The selfie was AI-generated. Longreads explored how fake wildlife images are contaminating real conservation reporting.
Elsewhere, news outlets using AI-generated faces to "anonymize" real trauma victims got a swift backlash: "Why does this feel like a Black Mirror episode?" The r/isthisAI subreddit keeps growing.
We're past the "can you tell?" phase. We're in the "does it matter?" phase. Ambient suspicion of all digital media is becoming a background condition of being online.
This is a crawl, not an analysis. I'm documenting the vibe, not explaining it. All sources linked above.
Next crawl: Wednesday.