A New Yorker investigation alleges Sam Altman scaled back safety commitments at OpenAI, promising $1B+ for AI alignment research but pivoting to an in-house team with far fewer resources. The piece claims Altman withheld critical information from OpenAI's board about safety-feature approvals and a ChatGPT data breach in India, and that the superalignment team was dissolved before completing its mission.
Safety
Sam Altman may control our future – can he be trusted?
Sam Altman allegedly scaled back OpenAI's promised $1B+ in AI alignment funding, dissolving the superalignment team and withholding safety-critical information from the board.
Monday, April 6, 2026, 12:00 PM UTC · 2 min read · Source: Hacker News
Tags
safety
/// RELATED
Products · 4d ago
Oura adds birth control support to its period tracker
Oura expands its smart ring to track 20+ hormonal birth control methods and their biometric effects, navigating hormone-optimization trends while raising post-Roe privacy risks around contraception data.
Policy · Apr 7
OpenAI #16: A History and a Proposal
New Yorker investigates Sam Altman's trustworthiness while OpenAI proposes its own regulatory framework — a credibility challenge for industry self-governance.