Morning folks, and welcome to today's episode called Sam Altman 1985–2023. I’ll open up with an excerpt from a fantastic New Yorker article on Altman:

“Altman has a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart.” And, according to an OpenAI board member quoted in the same piece: “He’s unconstrained by truth. He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.”

So in this episode, I look at Altman’s progression from app developer, to massively successful investor who accumulates a $3.5 billion fortune, to taking over and growing Y Combinator, the world’s most influential startup incubator, and then battling Elon Musk to eventually take charge of OpenAI. By the time we get to the end, at the age of just 38, he’s become one of the most influential and powerful people in business and technology. It’s a cracking story — enjoy.

Sam Altman was born in Chicago in April 1985, the eldest of four children: two brothers and a sister. His mother is a dermatologist, his father a real estate broker. The family moved to Clayton, Missouri when he was four.

Precocious and intelligent, he gets an Apple Mac for his eighth birthday. And like a lot of the tech entrepreneurs I’ve covered, he just immerses himself in it, learning to code. It also becomes his lifeline, because Altman realised from an early age that he was gay, and as he said himself: “Growing up gay in the Midwest in the two-thousands was not the most awesome thing. And finding AOL chat rooms was transformative. Secrets are bad when you’re eleven or twelve.”

He entered an elite private school in 1997. When he was 16, a Christian group at the school boycotted an assembly about sexuality. Altman stood up in front of the entire assembly, announced that he was gay, and challenged his fellow students on whether they wanted the school to be a repressive place or an open one. He got a standing ovation, and here’s a quote from the school counsellor: “What Sam did changed the school. It felt like someone had opened up a great big box full of all kinds of kids and let them out into the world.”

And look, regardless of whether you like Altman or not, doing what he did back in 2002 took guts, and you can see right there that he had this leadership potential.

He goes to Stanford in 2003 to study computer science and spends a lot of his free time playing poker. He credits poker with teaching him pattern recognition, decision-making with incomplete information, and reading people.

But then, in his second year, he came across an essay that changed his life. It’s called “How to Start a Startup,” written by Paul Graham. Now, I’ve already covered Paul Graham in an episode — fascinating character.

In short, Graham was, and to a large extent still is, an extremely well-regarded coder, blogger, and entrepreneur. In the late 90s he sold his internet company, Viaweb, to Yahoo for $49 million, and his experience led him to conclude that VCs weren’t always the best route for startups. VC fund structures often require them to invest large sums, not because startups need that much capital, but because that’s what their business model dictates. Startups can end up with more money than they need, which pushes them towards unsustainable growth strategies, and in return the VCs take a lot of control. In Graham’s view, most early startups don’t need that much money; what they need is guidance. He also felt it was vital for the founders to stay in control.

So in 2005, Graham, with a few other people, started Y Combinator, known as YC, with $200,000. Over the last 20 years, YC has invested in 5,000 startups with a combined valuation exceeding $600 billion. Notably, more than 400 of these companies are valued at over $100 million, and over 100 have achieved valuations surpassing $1 billion: Airbnb, Stripe, Coinbase, DoorDash, Dropbox.

Anyway, for their first program in summer 2005, they announced a “Summer Founders Program,” offering young teams $6,000 per founder, hands-on mentorship over three months, and dinner every Tuesday night in Graham’s kitchen. In return, YC took roughly 6% equity. After dilution from subsequent venture funding, YC’s average ownership of its companies usually ends up at around 3%.
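If you’re wondering how that 6% becomes roughly 3%, the arithmetic is straightforward: each later funding round issues new shares, shrinking every existing holder proportionally. Here’s a minimal sketch; the round sizes are my own illustrative assumptions, not YC’s actual cap tables:

```python
def diluted_stake(initial_stake, new_equity_per_round):
    """Each funding round sells a fraction of NEW equity,
    shrinking all existing holders proportionally."""
    stake = initial_stake
    for new_equity in new_equity_per_round:
        stake *= (1 - new_equity)
    return stake

# Assumed example: three rounds, each issuing 20% new equity.
# A 6% stake ends up at roughly 3%.
print(round(diluted_stake(0.06, [0.20, 0.20, 0.20]), 4))  # 0.0307
```

So it only takes a handful of ordinary venture rounds for YC’s initial position to halve, which matches the averages quoted above.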

Altman and his boyfriend at that time, Nick Sivo, were working on an app called Loopt. It was an early social networking app that allowed users to see their friends' real-time locations on a digital map using basic cell phones. This was two years before the iPhone launched.

Loopt was accepted into the first batch of eight startups. Also in that batch was Reddit. So in June 2005, at just 19, in classic Silicon Valley tradition, Altman drops out of Stanford and moves to Cambridge, Massachusetts.

Altman works so relentlessly that he develops scurvy from eating almost nothing but ramen. Paul Graham wrote that Altman has “the kind of ambition that you don't see often.”

One of the key elements to YC is Demo Day. This is where founders present their companies to an invite-only audience of angel investors and VCs.

For that first-ever Demo Day, there were just 15 angel investors, no VCs, and Loopt was by far the star, generating the most interest. While I couldn’t find out how much angel investment Loopt got that day, the hype around the app led to a $5 million investment from Sequoia a few months later. Over the next few years, Loopt raises a total of just over $30 million. Its success in fundraising is mainly put down to Altman’s persistence, or to quote him: “The way to get things done is to just be really fucking persistent.”

Now, in terms of his leadership style, his staff really liked him, but some mentioned his tendency to exaggerate, or according to one employee: “There’s a blurring between ‘I think I can maybe accomplish this thing’ and ‘I have already accomplished this thing.’”

And this pattern — some would call it exaggerating, others accuse Altman of being duplicitous or outright lying — resulted in the board considering removing Altman as CEO because concerns were raised about how he was reporting user growth numbers. But because Altman also inspired fierce loyalty among employees, he wasn’t pushed out.

But the long and the short of it is that Loopt didn’t really break through. Altman eventually managed to sell it in 2012 for $43 million — not a great ROI for anyone, although Altman does cash out with $5 million, so not bad.

Now crucially, while all of this is going on, he starts angel investing, and he’s pretty damn good at it. For example, he gives a $15,000 cheque to the Collison brothers, the founders of Stripe, before the company is even officially incorporated. Stripe went through YC in 2010, and Altman got a 2% stake. At the time of recording, Stripe is worth $159 billion; his stake will have been diluted over the years, but it’s still an extraordinary return. And he keeps compounding: he puts most of his Loopt money into a venture fund called Hydrazine Capital, raising a total of $21 million, most of it coming from Peter Thiel.

And the strategy is that 75% will be invested into YC companies.

By 2014, he had invested in 40 companies, and five of them increased in value by 100 times or more: the likes of Airbnb, Pinterest, Instacart, Zenefits — we already mentioned Stripe. In total, over the last 15 years, Altman has invested in over 400 companies, and at the time of recording these investments give him a net worth of $3.5 billion.

Now, as a result of all of these early investments, Altman is spending more and more time at YC, and Paul Graham is very impressed with Altman. For example, he had written that the two founders he most often referenced when advising startups were Steve Jobs and Altman: “On questions of design, I ask ‘What would Steve do?’ but on questions of strategy or ambition I ask ‘What would Sam do?’”

And here’s another great and prescient quote from Graham: “Sam is extremely good at becoming powerful. You could parachute him into an island full of cannibals and come back in five years and he’d be the king.”

And so in 2014, when Graham and his wife decided to move to the UK, Altman was named president of YC.

Altman is hugely ambitious for Y Combinator. His aim is to grow it tenfold. He launches a $700 million Continuity Fund, which allows YC to back its graduates all the way to late-stage growth rounds.

And he starts pulling hard tech into the portfolio — nuclear fusion companies, deep science, AI — because he realises that if you want to build trillion-dollar companies, you need to focus on major scientific advances. So he tells his partners the era of frivolous apps is over.

This leads to friction, and some senior people within YC leave.

Now, a defining moment in Altman’s career happens in August 2014 when Elon Musk tweets: “We need to be super careful with AI. Potentially more dangerous than nukes.”

Altman agrees. Both of them are specifically worried about Google’s acquisition of DeepMind. The worry is that a single corporate entity could monopolise AI.

In May 2015, Altman sent a late-night email to Musk proposing that they build a “Manhattan Project” for AI. Musk responds within two hours — the idea is “probably worth a conversation.”

That July, Altman and Musk hosted a dinner at the Rosewood Sand Hill Hotel in Menlo Park.

The attendees include Patrick Collison, the Stripe co-founder; Greg Brockman, who at that time was Stripe’s CTO; Ilya Sutskever, a star researcher at Google Brain; and Dario Amodei, who was also at Google, along with a handful of others. Altman and Musk’s pitch to the researchers: “We will give you the resources of a massive corporation — the computers, the salaries — but the mission of a non-profit. You won't be building products for Google; you'll be building AGI (artificial general intelligence) for everyone.”

The focus of this pitch was on safety, and this is vital. These star researchers were, and are, very concerned about the negative impacts of AGI.

In December 2015, OpenAI was formally announced as a non-profit AI research company with a pledge of $1 billion. But the actual cash that flows in over the following years is around $130 million: $38 million from Musk, $10 million from Altman, and the rest from a group that included Reid Hoffman, Peter Thiel, and many more.

The billion was a commitment, and Musk had promised to fund any shortfall on that commitment.

In terms of structure, Musk and Altman are co-chairs, Brockman leaves Stripe to become CTO, Ilya Sutskever is research director, and Amodei leads AI safety.

At this stage there isn’t a CEO, and this is again a result of their concerns with safety. The founders deliberately wanted to avoid the normal Silicon Valley model of a single all-powerful CEO who is driven by aggressive commercial incentives. Remember, this is a non-profit.

Here’s a quote from Sutskever explaining why a CEO role isn’t suited to take charge of AI: “Any person working to build this civilization-altering technology bears a heavy burden and is taking on unprecedented responsibility. The people who end up in these kinds of positions are often a certain kind of person, someone who is interested in power, a politician, someone who likes it. Someone who just tells people what they want to hear.”

And I understand where he’s coming from here. The personality traits required to be a “great CEO” — ambition, political manoeuvring, ego — were the exact opposite of the traits required to safely manage AGI: humility, caution, transparency.

The first trouble in OpenAI starts in mid-2017 when the researchers build a bot that beats world-class players at Dota 2, a very complex computer game.

But it cost them around $30 million to achieve just this. So, to move forward, they realise they’re going to need a lot of money, and this opens the conversation about becoming a for-profit company. This then turns into a brutal three-way power struggle between Musk, Brockman, and Altman, one that, at the time of recording, is playing out in a courtroom.

Musk's solution back in 2017 is to merge OpenAI into Tesla, and he demands a 60% equity stake and absolute control. His leverage is that he is withholding the remaining $870 million of the $1 billion pledge. But the board doesn't like it, so in February 2018, Musk walks away.

Now, at this stage, Greg Brockman is kind of running OpenAI. He’s described as obsessive, brilliant, routinely working 100-hour weeks. But his leadership style is described as abrasive and narrowly focused. And crucially, at a time when they need to raise money, he’s not seen as someone who can charm investors.

Altman can. He has that experience.

And so in March 2019, Altman becomes CEO, and he’s backed by Brockman and Sutskever because he apparently gives them a private assurance. This is a quote from Brockman: “He unilaterally told us that he’d step down if we ever both asked him to.”

Now, while all of this is going on, we have to remind ourselves that Altman is still head of YC, and there were allegations within YC that he had been “constantly lying” about his time commitment and using the YC brand to prioritise his own investments.

This leads Paul Graham to tell Altman to choose between YC and OpenAI. Publicly, it appears that the parting was amicable. Graham even writes a tweet confirming that he would have liked Altman to stay. However, the detailed New Yorker article has the following: “Graham told Y.C. colleagues that, prior to his removal, ‘Sam had been lying to us all the time.’”

Anyway, Altman leaves and can now devote most of his time and attention to OpenAI, where ideological fault lines are opening. Altman later describes these factions in internal emails as “tribes.” One tribe is focused on safety. The other, with which Altman is now aligned, argues that the only path to safe AI is to keep building, keep deploying, and learn from what happens.

So this is definitely a shift in position for Altman, and it’s a shift away from OpenAI’s initial foundations whereby safety took priority.

In March 2019, Altman announces a radical restructuring: OpenAI creates a for-profit subsidiary controlled by the original non-profit. The structure is called “capped-profit”: investors can receive returns up to 100 times their investment, with anything beyond that flowing back to the non-profit.

The logic is straightforward: you can’t train advanced AI models on charitable donations alone. You need serious capital, and investors will require an ROI.
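To make the cap concrete, here’s a minimal sketch of the mechanics as described above. Only the 100x multiple comes from the announced structure; the dollar amounts are made-up examples:

```python
def split_proceeds(investment, gross_proceeds, cap_multiple=100):
    """Capped-profit rule: the investor keeps returns up to
    cap_multiple x their investment; anything beyond the cap
    flows to the non-profit."""
    cap = investment * cap_multiple
    investor_share = min(gross_proceeds, cap)
    nonprofit_share = max(0.0, gross_proceeds - cap)
    return investor_share, nonprofit_share

# Hypothetical: a $10m investment that eventually returns $1.5bn gross.
# The investor is capped at $1bn (100x); the remaining $0.5bn
# goes to the non-profit.
print(split_proceeds(10e6, 1.5e9))  # (1000000000.0, 500000000.0)
```

Below the cap, of course, the non-profit sees nothing, so in practice the cap only bites in the most extreme success scenarios, which is exactly what made it palatable to investors.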

Dario Amodei was the key person behind the safety tribe, and he had started questioning the founders’ motives more openly. “Everything was a rotating set of schemes to raise money,” he later wrote in his notes. “I felt like what OpenAI needed was a clear statement of what it would do, what it would not do, and how its existence would make the world better.”

Now, he had every right to be suspicious of both Altman’s and Brockman’s motives. Altman’s because he is shifting his position, and as for Brockman’s motives, well here’s what he wrote in his diary in 2017: “So what do I really want? ... Financially what will take me to $1B.”

That’s not a great look for someone who was publicly claiming that his priority was to develop safety-first AI through a non-profit company.

So Amodei decided to write a charter for the company with numerous safety provisions.

Altman assured Amodei that he fully backed the most important provisions within the charter and that they were non-negotiable.

Then in July 2019, Altman announced that Microsoft was investing $1 billion in OpenAI.

Now, the structure of the deal is really worth looking at.

In exchange for that billion, Microsoft gets two things. First, OpenAI commits to Microsoft’s Azure as its exclusive cloud provider, which means almost all of Microsoft's money flows straight back to Microsoft's own servers. It's basically a closed loop.

Second, Microsoft gets exclusive commercial rights to OpenAI's models.

There's one more clause worth knowing about — the AGI clause. Microsoft's commercial rights automatically expire the moment OpenAI's board decides they've achieved artificial general intelligence, a system that can do essentially anything a human mind can do.

It was built in as a safety tripwire: if something that powerful ever gets created, it shouldn't be owned by a corporation.

But here's where it gets messy. As part of the Microsoft deal, a key piece of Amodei's original charter was quietly dropped — the so-called “merge and assist” clause. It had said that if any other company got closer to AGI first, OpenAI would stop competing and help them do it safely. A kind of peace treaty for the AI race. Microsoft now had the power to block any such merger.

For Amodei, merge-and-assist was vital — the thing that prevented AI development from becoming a winner-takes-all race where safety gets sacrificed for speed. And now it was gone.

He later said that 80% of the charter had been betrayed.

Now look, maybe Amodei’s view is idealistic, and maybe he’s being over-cautious, but when I hear the likes of JD Vance say: “The A.I. future is not going to be won by hand-wringing about safety,” I get worried. I’m not an AI expert, most of us aren’t, so we need to listen to the experts.

The “AI Impacts” survey remains the largest survey of AI researchers ever conducted, with 2,778 expert respondents. It found that 34% of researchers believe there is at least a 10% chance that AI could lead to human extinction.

For me, that’s enough to say: let’s be careful here.

Because the point is, if the other 66% are right and there is no existential threat, then fine: being cautious just slows us down a bit. Nobody dies. But if the 34% are right, not being cautious could spell the end of humankind.

Anyway, according to Amodei, when he confronted Altman, Altman lied: he denied that the merge-and-assist clause even existed, despite being shown the actual text of the charter.

So this caused a huge rift and bitter infighting, eventually resulting in Amodei and his sister Daniela, who also held a senior role, leaving OpenAI. They launch Anthropic, the company behind Claude, which I must say I love. Full disclosure: I use ChatGPT, Gemini, and Claude. They all have their uses, but I’ve definitely found over the last three to four months that I’m using Claude more than GPT.

Then in June 2020, GPT-3 arrives. While it doesn’t get a huge amount of mainstream coverage, the tech press were wowed. It was very impressive, mainly because it showed that making models bigger doesn’t just make them incrementally better; scale produces qualitative leaps in capability.

So GPT-3 laid the groundwork for what was coming down the tracks, and I’m pretty sure most of us remember it: November 30, 2022, ChatGPT launches.

Within five days it has a million users. Within two months it has 100 million, making it the fastest-growing consumer application in history.

Paul Krugman, Nobel Laureate and New York Times columnist: “ChatGPT is a big deal. It’s not just a toy. It’s a tool that will change how we think, how we write, and how we teach.”

I remember using it for the first time and just being totally blown away by it. Not just because of what it could do, which was really impressive, but because you knew this was just the start of something really huge.

It’s fair to say that at this moment, November 2022, ChatGPT became one of the most immediately transformative technologies ever released, and Sam Altman overnight became one of the most powerful and influential people in technology and business.

And he leans into it — big time.

He goes on a global tour, meeting heads of state, regulators, and tech ministers around the world. He’s positioning himself as the bridge between the AI industry and world governments, and you’ve got to say, he does a good job.

When he’s before the Senate Judiciary Committee in May 2023, he’s asked if he made “a lot of money.” He replied: “I have no equity in OpenAI ... I’m doing this because I love it.”

And it’s true — he doesn’t have equity in OpenAI. Having said that, he does have equity in companies that will go on to make a lot of money from OpenAI. For example, he is the largest shareholder in Helion Energy, a nuclear fusion startup, and at the time of this recording they are in talks to sign a multi-billion-dollar deal with OpenAI.

Anyway, back to the Judiciary Committee. Altman went on to say: “My worst fears are that we cause significant harm to the world. I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that.”

He proposed a new federal agency to oversee advanced A.I. models.

So this all lands well because you have Altman with apparently no direct financial interest in OpenAI, refusing to sugar-coat the danger. He is not playing the techno-optimist, he’s playing the realist — the responsible adult warning about his own product — and this becomes the initial narrative around him.

Yet behind closed doors within OpenAI, Altman by this stage is now very much on the opposite side of the safety debate.

And here’s an excerpt from an email sent by Jan Leike, one of the world’s leading researchers in AI alignment, who held a very senior position within OpenAI.

He sent this right after the release of GPT-4, just a few months after ChatGPT, in March 2023, so this is before Altman’s appearance at the Senate Judiciary Committee:

“OpenAI has been going off the rails on its mission. We are prioritizing the product and revenue above all else, followed by AI capabilities, research and scaling, with alignment and safety coming third. Other companies like Google are learning that they should deploy faster and ignore safety problems.”

And so I’m reminded of the words of Sutskever when he refers to the kind of person who wants to be the CEO of a company that has such powerful technology: “The people who end up in these kinds of positions are often a certain kind of person, someone who is interested in power, a politician, someone who likes it. Someone who just tells people what they want to hear.”

And look, I understand that Altman is in an extremely tricky position. They started with great intentions of being a non-profit and expected that they could get to where they wanted by staying a non-profit, but reality got in the way.

They needed investment, and investors want an ROI. I get that.

And I understand that Altman has had to balance safety issues with commercial interests. As he said himself: “This was the most fun job in the world until the day we launched ChatGPT.” Since the launch, the decisions have gotten very difficult.

It’s easy to be idealistic when you’re not the one responsible for thousands of employees and answerable to investors, so I understand all of that.

And we’re going to have at least one more episode on Altman, where we’ll get into his dramatic firing and the court case with Elon Musk, and maybe I’ll be able to get a clearer picture of his character.

But based on what we now know of Altman up to 2023, we know that he’s very smart, a very savvy investor, and in the words of Paul Graham, “extremely good at becoming powerful.”

But according to those who have worked closest with him, he’s also duplicitous, untrustworthy, deceitful. I just don’t think I want a guy like that in charge of such a powerful technology.

Anyway, he makes for a fantastic business story.

...

And this brings us to listeners’ emails, and this suggestion actually doesn’t come from an email. It’s a comment from Jeroen — “yer-own” — and he loved the Bill Ackman episode and would love to hear more like them, especially on the Tiger Cubs. These were the young recruits who started their careers working in Julian Robertson's Tiger Fund.

It’s a great suggestion, Jeroen, and I have them, as well as Robertson, on my list.

And remember, if you have any comments, any corrections, or any story you'd like me to cover, email me at: info@gbspod.com

All the best, folks.