
Sam Altman doesn't like the word "no." He doesn't hear it often, and when he does, he tends to treat it less as a boundary than as a starting position. That quality -- call it persistence, call it something else -- has carried the 40-year-old from a modest upbringing in St. Louis to the helm of one of the most valuable private companies on Earth. But as a sweeping new profile in The New Yorker makes clear, the same trait that built OpenAI into a $300 billion juggernaut has left behind a trail of bruised colleagues, strained relationships, and an organization whose internal culture sometimes mirrors the contradictions of the man who leads it.
The profile, written by Tad Friend and published in late April 2025, runs long and reads like a novel -- part character study, part corporate thriller, part warning label. It is the most comprehensive portrait of Altman to date, drawing on interviews with dozens of people in his orbit. And it paints a picture of a leader who is simultaneously brilliant and blind-spotted, generous and transactional, visionary and ruthlessly pragmatic.
Start with the childhood. Altman grew up in a Jewish family in the suburbs of St. Louis. His mother, Connie Gibstine, is a dermatologist. His father, Jerry Altman, was a real-estate broker who died in 2018. According to The New Yorker, Altman's childhood was marked by early precociousness and a sometimes difficult relationship with his father, who struggled with alcoholism. One detail stands out: Altman reportedly came out as gay at age 16, a fact his father initially responded to with discomfort. The two later reconciled, but friends described the elder Altman as emotionally volatile.
The profile sketches a young man who learned early to read rooms, to manage people, to get what he wanted through a combination of intelligence and social dexterity. At Stanford, he dropped out after two years -- echoing a Silicon Valley cliché -- to co-found Loopt, a location-sharing app. The company sold to Green Dot Corporation in 2012 for $43.4 million. Not a blockbuster exit. But it got Altman noticed by Paul Graham, the co-founder of Y Combinator, who eventually tapped him to run the prestigious startup accelerator in 2014.
That appointment raised eyebrows. Altman was 28.
His tenure at YC was polarizing. Some alumni describe him as transformational -- someone who expanded the program's ambition and brought in bigger bets. Others, as The New Yorker documents, found him distracted, self-interested, and increasingly drawn to his own ventures rather than the startups he was supposed to nurture. By the time he left in 2019 to run OpenAI full-time, the organization he'd co-founded with Elon Musk and others in 2015 was still a nonprofit research lab with modest commercial ambitions. That would change fast.
The speed of OpenAI's transformation from research nonprofit to the most talked-about company in technology is itself a kind of miracle -- or, depending on your perspective, a cautionary tale about what happens when a charismatic founder encounters a technology with seemingly unlimited commercial potential. ChatGPT launched in November 2022 and reached 100 million users within two months. By early 2025, OpenAI had raised over $40 billion in funding and was generating annualized revenue north of $5 billion. Microsoft had committed $13 billion. SoftBank led a $40 billion round at a $300 billion valuation.
Those numbers are staggering. They are also, in some ways, the easy part of the story.
The harder part is what happened inside the company along the way. The New Yorker profile offers granular new detail about the now-infamous boardroom coup of November 2023, when OpenAI's board of directors fired Altman without warning. The official explanation at the time was cryptic: Altman had not been "consistently candid" with the board. What followed was five days of corporate chaos -- employees threatening mass resignations, Microsoft offering to absorb the entire staff, and ultimately Altman's triumphant reinstatement with a reconstituted board more aligned with his vision.
But the profile adds texture to what "not consistently candid" actually meant. According to The New Yorker, board members had grown concerned about Altman's pattern of withholding information, sharing different details with different stakeholders, and creating what one former colleague described as an environment where no one ever had the full picture. Helen Toner, a former board member, told the publication that Altman had a habit of telling people different things -- not outright lies, necessarily, but carefully curated truths designed to advance his position. Another former board member, Tasha McCauley, described a pattern of behavior she found deeply troubling.
Altman, for his part, has consistently denied any deception. In The New Yorker piece, he acknowledged making mistakes but framed the board's actions as a misunderstanding rooted in poor communication and structural dysfunction. His allies -- and there are many -- argue that the board was simply out of its depth, a group of academics and policy wonks who didn't understand the pressures of running a company scaling at breakneck speed.
That framing has largely won the day. The new board is stacked with corporate heavyweights. OpenAI is converting from its unusual capped-profit structure to a more conventional for-profit model. And Altman is more powerful than ever.
But the profile raises uncomfortable questions about what that power looks like in practice. Several former employees described a culture of intense personal loyalty to Altman, one where dissent was tolerated in theory but punished in practice. People who pushed back on decisions found themselves marginalized. Those who left were sometimes subject to restrictive non-disparagement agreements -- a practice that drew public criticism after former employees spoke out about it. OpenAI eventually said it would not enforce those clauses, though as Business Insider noted, the damage to the company's reputation among the AI research community was already done.
The departures are worth cataloguing. Ilya Sutskever, the co-founder and chief scientist who helped instigate the board coup, left in May 2024 to start his own AI safety company. Mira Murati, the former CTO who briefly served as interim CEO during the crisis, departed in September 2024. Jan Leike, who co-led OpenAI's superalignment team focused on long-term AI safety, resigned publicly and accused the company of prioritizing "shiny products" over safety research. Greg Brockman, co-founder and longtime president, took an extended sabbatical in 2024 before returning months later. The list goes on.
Each departure had its own context. But collectively, they tell a story of an organization in constant internal tension -- between safety and speed, between research purity and commercial pressure, between Altman's ambitions and everyone else's comfort level with the pace of change.
The New Yorker profile offers a particularly revealing anecdote about Altman's management style. In one scene, he is described conducting a meeting where he simultaneously checks his phone, responds to messages, and carries on a conversation -- all while making a decision that will affect hundreds of employees. The message, whether intentional or not: everything is moving too fast for traditional deliberation. Speed is the value that trumps all others.
This is a feature, not a bug, in Altman's worldview. He has said repeatedly that the development of artificial general intelligence -- AI that can match or exceed human cognitive abilities across all domains -- is the most important project in human history. He believes it could arrive within a few years. And he believes that OpenAI, specifically OpenAI under his leadership, is the organization best positioned to build it safely and distribute its benefits broadly.
That belief is sincere. It is also extraordinarily convenient for someone who stands to benefit enormously from the outcome.
Here's where the financial picture gets complicated. Altman has long claimed he holds no equity in OpenAI, a talking point he has deployed in interviews and congressional testimony to signal his alignment with the nonprofit mission. The New Yorker profile probes this claim more carefully. While technically accurate -- Altman does not hold traditional equity in OpenAI's operating entity -- the profile notes that Altman has extensive personal investments in companies that are closely intertwined with OpenAI's success. He has backed chip startups, energy companies, and AI infrastructure ventures that stand to benefit directly from OpenAI's growth. He also reportedly discussed receiving equity as part of the corporate restructuring, a development that would fundamentally change the optics of his "I own nothing" narrative.
Sarah Friar, OpenAI's CFO who joined from Nextdoor in 2024, is quoted in the profile discussing the company's financial trajectory and restructuring plans. The conversion to a for-profit entity is expected to be completed by the end of 2025, and it will require unwinding the original nonprofit structure in a way that satisfies regulators, investors, and the attorneys general of multiple states. It's a legal and financial puzzle of extraordinary complexity -- and one that will ultimately determine how much wealth accrues to Altman and other insiders.
The political dimensions are equally tangled. Altman has cultivated relationships across the political spectrum with an aggressiveness that would make a K Street lobbyist blush. He has visited the White House multiple times under both the Biden and Trump administrations. He testified before Congress in May 2023 and struck a notably conciliatory tone, calling for regulation of AI in a way that impressed senators from both parties. Behind the scenes, according to The New Yorker, he has been far more aggressive in lobbying against specific regulatory proposals that would constrain OpenAI's operations.
The profile describes Altman's relationship with Washington as fundamentally transactional. He supports regulation in principle -- the kind of regulation that creates barriers to entry for smaller competitors while leaving OpenAI's core business model intact. This is not unique to Altman or OpenAI; it's a playbook that Big Tech has run for years. But the stakes are higher here. The technology in question is not a social media platform or a search engine. It is, if Altman is to be believed, the most powerful tool humanity has ever created.
And that raises the central question the profile circles but never quite answers: Should we trust Sam Altman with this?
The case for trusting him rests on results. OpenAI's products work. GPT-4 and its successors are genuinely useful tools that millions of people rely on daily. The company has published safety research, engaged with policymakers, and -- despite the internal turmoil -- maintained a pace of innovation that competitors have struggled to match. Google, Anthropic, Meta, and a host of Chinese firms are all racing to keep up. Whatever his flaws, Altman has built something real.
The case against trusting him rests on character. The New Yorker profile accumulates small details that, taken together, paint a portrait of someone whose relationship with the truth is instrumental rather than principled. The selective candor with the board. The carefully constructed public image of selflessness that obscures significant financial interests. The pattern of telling different people different things. The way former allies become adversaries and then, in Altman's telling, simply misunderstood the situation.
One passage in the profile is particularly striking. A former colleague describes Altman as someone who "believes his own story so completely that the distinction between narrative and reality becomes irrelevant." It's a quality that makes him extraordinarily effective as a fundraiser, a spokesperson, and a leader in moments of crisis. It is also, the colleague suggested, a quality that makes genuine accountability nearly impossible.
Recent developments have done little to resolve the tension. In April 2025, OpenAI announced a new partnership with the U.S. government to deploy AI tools in national security applications -- a move that alarmed civil liberties groups and delighted defense hawks. The company also faced renewed scrutiny over its data practices stemming from The New York Times' copyright lawsuit, which alleges that OpenAI trained its models on the paper's copyrighted material without permission -- a legal battle that remains unresolved.
Meanwhile, the competitive pressure continues to intensify. Anthropic, founded by former OpenAI researchers Dario and Daniela Amodei, has positioned itself as the "safety-first" alternative and raised billions from Google and other investors. Meta has open-sourced its Llama models, creating a free alternative that threatens OpenAI's subscription business. And Chinese companies like DeepSeek have demonstrated capabilities that suggest the U.S. lead in AI may be narrower than Washington would like to believe.
Altman's response to all of this has been characteristically bold. He has pushed for massive new investments in computing infrastructure, including Stargate, a data-center project announced at up to $500 billion, backed by SoftBank, Oracle, and other investors. He has expanded OpenAI's product line into enterprise software, consumer devices, and AI agents that can perform complex tasks autonomously. And he has continued to make grand pronouncements about the timeline for AGI, predictions that serve the dual purpose of inspiring his employees and terrifying his competitors.
The New Yorker profile captures something essential about this moment in technology history. We are in the middle of what may be the most consequential technological transition since the invention of the internet -- possibly since the invention of electricity. And the person at the center of it is not a dispassionate scientist or a cautious steward. He is a founder-CEO with a founder-CEO's conviction that he alone sees the future clearly, combined with a politician's instinct for managing perception and a dealmaker's comfort with ambiguity.
Sam Altman is not a villain. He is not a hero. He is something more complicated and, in many ways, more interesting: a true believer who is also a master operator, a man who genuinely wants to save the world and also wants to be the one who gets credit for saving it. Whether those two impulses can coexist -- whether the person building the most powerful technology on Earth can be trusted to put humanity's interests above his own -- is not a question The New Yorker can answer.
It's a question the rest of us will have to live with.