Altman is not afraid of his mansion being attacked; he has a fortress.
In 2016, Sam Altman built an underground bunker in Wyoming. 1,200 square meters, three-level structure, 500 kg of gold, 5,000 potassium iodide tablets, 5 tons of freeze-dried food, 100,000 bullets. That same year, OpenAI had just celebrated its first anniversary.
Ten years later, the leader of the world's most powerful AI company was attacked twice in a single weekend, first with a Molotov cocktail, then with gunfire. In a blog post, he admitted to having severely underestimated the "power of narrative." Whose narrative was he referring to, someone else's or his own?
48 Hours, Two Attacks
At 3:40 am on April 10, on Chestnut Street in San Francisco, a 20-year-old man, Daniel Moreno-Gama, threw a Molotov cocktail at the metal gate of Sam Altman's apartment. The fire caught near the outer gate, and he fled. About an hour later, he appeared near OpenAI's San Francisco office, still threatening arson, and was arrested. Charges included attempted murder and arson.

Sam Altman's San Francisco residence and surveillance footage of the arson suspect
Two days later, at 1:40 am on April 12, a Honda sedan pulled up next to Altman's residence in the Russian Hill area. A passenger extended an arm out the window and fired a shot at the residence. Surveillance footage captured the license plate, leading to the arrest of two individuals: Amanda Tom (25) and Muhamad Tarik Hussein (23). A search of their residence turned up three guns, and the two were charged with reckless discharge of a firearm.
One weekend, two attacks.
The suspect in the first case, Daniel Moreno-Gama, was an AI doomsayer. On social media he quoted human-versus-machine themes from "Dune," wrote posts arguing that AI alignment failure posed an existential risk, and criticized tech leaders for pursuing "hyperhumanism" and making "an all-in bet on the fate of humanity."
What was his argument?
Over the past five years, one of OpenAI's standard moves in constructing the narrative around AI has been to repeatedly emphasize the "existential" threat of AGI. This discourse serves multiple purposes: it urges governments to take regulation seriously, helps investors understand the stakes, and makes the entire industry realize that this race is too important to lose. The narrative positions OpenAI as simultaneously at the frontier of danger, the most responsible actor, and therefore the rightful recipient of funding.
However, the phrase "this is the most dangerous technology in human history" does not stay within tech and investor circles once it's out there. It trickles down and becomes, for some, a literal call to action. Moreno-Gama wrote in an Instagram post, "Exponential progress plus misalignment equals existential risk." The original source of this argumentative framework is mainstream AI-safety literature, much of it funded or endorsed by OpenAI.

Daniel Moreno-Gama Social Media Account
After the first attack, Altman blogged. He posted a photo with his child, saying he hoped the picture would stop the next person from throwing a Molotov cocktail at his home. He acknowledged his opponents' "legitimate moral stance" and called for a public discussion "with a little less explosiveness, in both the literal and metaphorical sense."
He also responded to a New Yorker deep dive. The article, published days before the attack, openly questioned his credibility as the ultimate AI authority. He wrote, "I severely underestimated the power of public narrative and discourse."
Two days later, his residence was shot at.
A Security Budget Is One Statement; a Bunker Is Another
The starting point of this trajectory is a year earlier than most people realize.
December 4, 2024, New York. UnitedHealthcare CEO Brian Thompson was shot outside the Hilton Hotel. Suspect Luigi Mangione, an Ivy League graduate, left behind a handwritten statement criticizing the health insurance industry. The case sparked an unusual wave of reactions on social media: a substantial number of regular users openly expressed sympathy for the perpetrator, even turning him into some kind of rebel symbol.
At that moment, some doors were pushed ajar.
Following the Thompson incident, executive security went from a perk to a survival necessity. According to research data cited by Fortune, violent attacks on top corporate executives have risen 225% since 2023. Among S&P 500 companies, 33.8% reported executive security expenses in their 2025 financial reports, up from 23.3% in 2020. The median spend on security services reached $130,000, up 20% year over year and double the figure of five years earlier.
The AI industry is the latest and most prominent recipient of this trend. Combined 2024 security spending on the CEOs of the top ten tech giants exceeded $45 million. Mark Zuckerberg alone accounted for more than $27 million, more than the combined security spending of four other CEOs, including those of Apple and Google. NVIDIA's Jensen Huang cost $3.5 million in 2025, up 59% year over year; Google's Sundar Pichai, $8.27 million, up 22%.
The AI industry has something few other industries have: even its creators believe the technology could destroy civilization. In 2025, the Pew Research Center surveyed 28,333 respondents worldwide; only 16% expressed excitement about AI development, while 34% expressed concern. A counterintuitive finding: the higher the level of education and income, the stronger the worry about AI running amok. The most knowledgeable are the most afraid.
Recently, the home of Indianapolis City Councilman Ron Gibson was hit by 13 shots fired in the middle of the night, waking his 8-year-old son. A handwritten note left at the door read, "No data centers allowed." The FBI has joined the investigation. Jordyn Abrams, a researcher at George Washington University's Program on Extremism, pointed out that data centers are becoming targets for anti-tech and anti-government extremists.

Ron Gibson Shooting Scene
This fear is not a secret within the industry; it's just not openly discussed.
Altman built the Wyoming fortress in 2016. That year, the newly announced OpenAI was outlining to the world how AI would benefit humanity. The two facts coexisted: he publicly bet that AI would succeed while privately stockpiling enough ammunition to equip a militia.
This was a rational double bet: publicly betting on AI's success and privately preparing for AI to go rogue.
Altman's Boomerang
On February 27 this year, OpenAI signed a contract with the US Department of Defense allowing the Pentagon to deploy ChatGPT on a classified defense network for "any lawful purpose." On the same day, Altman publicly endorsed Anthropic's position on limiting AI military applications. In the 24 hours that followed, ChatGPT's daily uninstall rate surged 295% and one-star reviews jumped 775%. The QuitGPT boycott movement reportedly accumulated over 1.5 million participants.
On March 21, about 200 protesters marched in San Francisco past Anthropic, OpenAI, and xAI, demanding that the three companies' CEOs commit to pausing frontier AI development. The same day, London saw its largest anti-AI protest to date.
Altman's Wyoming redoubt and the security detail he employs address two distinct risks: one from outsiders, and one from the thing he himself is building. He takes both seriously in private but acknowledges only one in public.
The week of the first attack, The New Yorker published a deep dive into Altman. Journalists Ronan Farrow and Andrew Marantz interviewed over 100 sources, and the central thesis distilled into a single word: untrustworthy. The article quoted a former OpenAI board member calling Altman an "antisocial personality," "untethered to truth." Multiple ex-colleagues described his shifting positions on AI safety and his habit of reshaping power structures as needed.
In his blog response, Altman admitted to a "conflict-avoidant" tendency. He had crafted the public narrative of "AI as an existential threat" as a tool for fundraising and regulatory maneuvering. The tool slipped from his grip, traced an arc, and came crashing back at his own door.