How ChatGPT Kicked Off an A.I. Arms Race
One day in mid-November, workers at OpenAI got an unexpected assignment: Release a chatbot, fast.
The chatbot, an executive announced, would be known as “Chat with GPT-3.5,” and it would be made available free to the public. In two weeks.
The announcement confused some OpenAI employees. All year, the San Francisco artificial intelligence company had been working toward the release of GPT-4, a new A.I. model that was stunningly good at writing essays, solving complex coding problems and more. After months of testing and fine-tuning, GPT-4 was nearly ready. The plan was to release the model in early 2023, along with a few chatbots that would allow users to try it for themselves, according to three people with knowledge of the inner workings of OpenAI.
But OpenAI’s top executives had changed their minds. Some were worried that rival companies might upstage them by releasing their own A.I. chatbots before GPT-4, according to the people with knowledge of OpenAI. And putting something out quickly using an old model, they reasoned, could help them collect feedback to improve the new one.
So they decided to dust off and update an unreleased chatbot that used a souped-up version of GPT-3, the company’s previous language model, which came out in 2020.
Thirteen days later, ChatGPT was born.
In the months since its debut, ChatGPT (the name was, mercifully, shortened) has become a global phenomenon. Millions of people have used it to write poetry, build apps and conduct makeshift therapy sessions. It has been embraced (with mixed results) by news publishers, marketing firms and business leaders. And it has set off a feeding frenzy of investors trying to get in on the next wave of the A.I. boom.
It has also caused controversy. Users have complained that ChatGPT is prone to giving biased or incorrect answers. Some A.I. researchers have accused OpenAI of recklessness. And school districts around the country, including New York City’s, have banned ChatGPT to try to prevent a flood of A.I.-generated homework.
Yet little has been said about ChatGPT’s origins, or the strategy behind it. Inside the company, ChatGPT has been an earthshaking surprise — an overnight sensation whose success has created both opportunities and headaches, according to several current and former OpenAI employees, who requested anonymity because they were not authorized to speak publicly.
An OpenAI spokesman, Niko Felix, declined to comment for this column.
Before ChatGPT’s launch, some OpenAI employees were skeptical that the project would succeed. An A.I. chatbot that Meta had released months earlier, BlenderBot, had flopped, and another Meta A.I. project, Galactica, was pulled down after just three days. Some employees, desensitized by daily exposure to state-of-the-art A.I. systems, thought that a chatbot built on a two-year-old A.I. model might seem boring.
But two months after its debut, ChatGPT has more than 30 million users and gets roughly five million visits a day, two people with knowledge of the figures said. That makes it one of the fastest-growing software products in memory. (Instagram, by contrast, took nearly a year to get its first 10 million users.)
The growth has brought challenges. ChatGPT has had frequent outages as it runs out of processing power, and users have found ways around some of the bot’s safety features. The hype surrounding ChatGPT has also annoyed some rivals at bigger tech firms, who have pointed out that its underlying technology isn’t, strictly speaking, all that new.
ChatGPT is also, for now, a money pit. There are no ads, and the average conversation costs the company “single-digit cents” in processing power, according to a post on Twitter by Sam Altman, OpenAI’s chief executive. At roughly five million visits a day, that likely adds up to millions of dollars a week. To offset the costs, the company announced this week that it would begin selling a $20 monthly subscription, known as ChatGPT Plus.
Despite its limitations, ChatGPT’s success has vaulted OpenAI into the ranks of Silicon Valley power players. The company recently reached a $10 billion deal with Microsoft, which plans to incorporate the start-up’s technology into its Bing search engine and other products. Google declared a “code red” in response to ChatGPT, fast-tracking many of its own A.I. products in an attempt to catch up.
Mr. Altman has said his goal at OpenAI is to create what is known as “artificial general intelligence,” or A.G.I., an artificial intelligence that matches human intellect. He has been an outspoken champion of A.I., saying in a recent interview that its benefits for humankind could be “so unbelievably good that it’s hard for me to even imagine.” (He has also said that in a worst-case scenario, A.I. could kill us all.)
As ChatGPT has captured the world’s imagination, Mr. Altman has been put in the rare position of trying to downplay a hit product. He is worried that too much hype for ChatGPT could provoke a regulatory backlash or create inflated expectations for future releases, two people familiar with his views said. On Twitter, he has tried to tamp down excitement, calling ChatGPT “incredibly limited” and warning users that “it’s a mistake to be relying on it for anything important right now.”
He has also discouraged employees from boasting about ChatGPT’s success. In December, days after the company announced that more than a million people had signed up for the service, Greg Brockman, OpenAI’s president, tweeted that it had reached two million users. Mr. Altman asked him to delete the tweet, telling him that advertising such rapid growth was unwise, two people who saw the exchange said.
OpenAI is an unusual company, by Silicon Valley standards. Started in 2015 as a nonprofit research lab by a group of tech leaders including Mr. Altman, Peter Thiel, Reid Hoffman and Elon Musk, it created a for-profit subsidiary in 2019 and struck a $1 billion deal with Microsoft. It has since grown to around 375 employees, according to Mr. Altman — not counting the contractors it pays to train and test its A.I. models in regions like Eastern Europe and Latin America.
From the start, OpenAI has billed itself as a mission-driven organization that wants to ensure that advanced A.I. will be safe and aligned with human values. But in recent years, the company has embraced a more competitive spirit — one that some critics say has come at the expense of its original aims.
Those concerns grew last summer when OpenAI released its DALL-E 2 image-generating software, which turns text prompts into works of digital art. The app was a hit with consumers, but it raised thorny questions about how such powerful tools could be used to cause harm. If creating hyper-realistic images was as simple as typing in a few words, critics asked, wouldn’t pornographers and propagandists have a field day with the technology?
To allay these fears, OpenAI outfitted DALL-E 2 with numerous safeguards and blocked certain words and phrases, such as those related to graphic violence or nudity. It also taught the bot to neutralize certain biases in its training data — such as making sure that when a user asked for a photo of a C.E.O., the results included images of women.
These interventions prevented trouble, but they struck some OpenAI executives as heavy-handed and paternalistic, according to three people with knowledge of their positions. One of them was Mr. Altman, who has said he believes that A.I. chatbots should be personalized to the tastes of the people using them — one user could opt for a stricter, more family-friendly model, while another could choose a looser, edgier version.
OpenAI has taken a less restrictive approach with ChatGPT, giving the bot more license to weigh in on sensitive subjects like politics, sex and religion. Even so, some conservatives have accused the company of overstepping. “ChatGPT Goes Woke,” read the headline of a National Review article last month, which argued that ChatGPT gave left-wing responses to questions about topics such as drag queens and the 2020 election. (Democrats have also complained about ChatGPT — mainly because they think A.I. should be regulated more heavily.)
With regulators circling, Mr. Altman is trying to keep ChatGPT above the fray. He flew to Washington last week to meet with lawmakers, explaining the tool’s strengths and weaknesses and clearing up misconceptions about how it works.
Back in Silicon Valley, he is navigating a frenzy of new attention. In addition to the $10 billion Microsoft deal, Mr. Altman has met with top executives at Apple and Google in recent weeks, two people with knowledge of the meetings said. OpenAI also inked a deal with BuzzFeed to use its technology to create A.I.-generated lists and quizzes. (The announcement more than doubled BuzzFeed’s stock price.)
The race is heating up. Baidu, the Chinese tech giant, is preparing to introduce a chatbot similar to ChatGPT in March, according to Reuters. Anthropic, an A.I. company started by former OpenAI employees, is reportedly in talks to raise $300 million in new funding. And Google is racing ahead with more than a dozen A.I. tools.
Then there’s GPT-4, which is still scheduled to come out this year. When it does, its abilities may make ChatGPT look quaint. Or maybe, now that we’re adjusting to a powerful new A.I. tool in our midst, the next one won’t seem so shocking.