What OpenAI CEO Sam Altman Really Expects In AI’s Future

Last week OpenAI launched GPT-4o, the latest voice-interactive version of the powerful chatbot. But just days before its launch, OpenAI CEO Sam Altman had shared some surprisingly candid thoughts in an hour-long interview on the “All-In” podcast.

Hosted by a panel of high-level venture capitalists — Chamath Palihapitiya, Jason Calacanis, David O. Sacks and David Friedberg — the podcast gave Altman a chance to open up in a more conversational setting.

In the wide-ranging discussion, Altman provided clues about his long-term vision for OpenAI — and even the likely trajectory for the world’s development of artificial general intelligence. As the head of the world’s top AI company, Altman answered questions about safety, affordability, economic models, and the next great thing.

And somewhere along the way, maybe he also gave a glimpse of the shape of things to come.

Viscerally Different

When announcing GPT-4o, Altman identified its place in the evolution of OpenAI’s products. “The original ChatGPT showed a hint of what was possible with language interfaces; this new thing feels viscerally different. It is fast, smart, fun, natural, and helpful. Talking to a computer has never felt really natural for me; now it does.”

But it’s on the podcast that you get a sense of how fundamental that is to his long-term vision. After a discussion about what will be humanity’s next great piece of technology, Altman added this revealing aside. “I think voice is a hint to whatever the next thing is.

“Like, if you can get voice interaction to be really good, it feels — I think that feels like a different way to use a computer.”

What About Safety?

Last week OpenAI also made headlines by disbanding the team they’d created 10 months earlier to explore solutions “to steer and control AI systems much smarter than us,” as the two leaders of the team both resigned. (CNBC notes that Ilya Sutskever was one of the board members who’d voted to oust Altman in November, while Jan Leike complained on Twitter Friday that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”)

So it’s interesting to hear Altman’s comments about November’s ouster, just days before those resignations were announced. “Look, obviously not all of those board members are my favorite people in the world. But I have serious respect for the gravity with which they treat AGI and the importance of getting AI safety right. And even if I stringently disagree with their decision-making and actions, which I do, I have never once doubted their integrity or commitment to the sort of shared mission of safe and beneficial AGI.”

And Altman also directly addressed the issue of risk. “I think there will come a time in the not-super-distant future, where the frontier AI systems are capable of causing significant global harm.” What he hopes for is an international agency ensuring “reasonable safety testing” of the most powerful systems so “you know, these things are not going to escape and recursively self-improve or whatever…” He added later that “Maybe nothing will happen. But I think it is part of our duty, and our mission, to talk about what we believe is likely to happen, and what it takes to get that right…”

Altman likened it to already-existing oversight for nuclear weapons or biotechnology “or things that can really have a very negative impact, way beyond the realm of one country.”

He had specific suggestions for a methodology (like safety-testing the output rather than having government regulators trying to review all of a company’s internal codebases and assess which weights are being used in machine learning models). But Altman also has another concern. “I’d be super nervous about regulatory overreach here. I think we can get this wrong by doing way too much, or even a little too much.”

He immediately concedes we can also “get this wrong by doing not enough.” But Altman spent five years as the president of startup accelerator Y Combinator and also recognizes the entrepreneurial perspective. “We have seen regulatory overstepping or capture just get super-bad in other areas.”

So he’d like to see regulatory agencies focusing on, for example, only projects costing above $10 billion. “I don’t think it puts any regulatory burden on startups.”

‘To Go Invent the Future’

Early in the conversation, Altman reiterated that making advanced technology available free was “a super-important part of our mission.” He wants not just to build AI tools, but to “make them super-widely available — free or, you know, not-that-expensive, whatever that is — so that people can use them to go kind of invent the future, rather than the magic AGI in the sky inventing the future, and showering it down upon us.

“That seems like a much better path. It seems like a more inspiring path. I also think it’s where things are actually heading. So it makes me sad that we have not figured out how to make GPT-4-level technology available to free users. It’s something we really want to do…”

Q: It’s just very expensive, I take it?

Sam Altman: It’s very expensive.

But Altman still predicts we’ll see a world with cheap artificial intelligence. “It’s important to us, it’s clearly important to users, and it’ll unlock a lot of stuff.”

It’s a theme Altman reiterated last week when announcing GPT-4o. “I am very proud that we’ve made the best model in the world available for free in ChatGPT, without ads or anything like that.

“Our initial conception when we started OpenAI was that we’d create AI and use it to create all sorts of benefits for the world. Instead, it now looks like we’ll create AI and then other people will use it to create all sorts of amazing things that we all benefit from.

“We are a business and will find plenty of things to charge for, and that will help us provide free, outstanding AI service to (hopefully) billions of people.”

Beyond ‘Universal Basic Income’

Altman has clearly been thinking about the future — about disruption, economic models, and what the world will look like. On the podcast, Altman said he was “super-excited” about the possibility of an AI tutor, and about “doing faster and better scientific discovery… that will be a triumph.”

But Altman also famously launched a 3,000-person trial of Universal Basic Income. So it was interesting to hear his latest candid thoughts on a new variation that incorporates artificial intelligence.

First, he addressed the conventional idea of a Universal Basic Income. “Giving people money is not going to go solve all the problems,” Altman said. “It is certainly not going to make people happy. But it might solve some problems, and it might give people a better horizon with which to help themselves.”

But Altman has another idea. “Now that we see some of the ways that AI is developing… I wonder if the future looks something more like Universal Basic Compute than Universal Basic Income… Everybody gets like a slice of GPT-7’s compute, and they can use it, they can re-sell it, they can donate it to somebody to use for cancer research. But what you get is not dollars but this slice.

“You own part of the productivity!”

Preserving What We Care About

There were occasional glimpses of Altman’s personal feelings — like when he addressed an Apple ad in which art materials are crushed in a hydraulic press (to be replaced by an Apple tablet).

“I’m obviously hugely positive on AI — but there is something that I think is beautiful about human creativity and human artistic expression,” Altman said. “And you know, for an AI that just does better science? Great, bring that on. But an AI that is going to do this deeply beautiful human creative expression? I think we should figure out…”

Altman paused in mid-sentence — and then said “It’s going to happen. It’s going to be a tool that will lead us to greater creative heights.

“But I think we should figure out how to do it in a way that preserves the spirit of what we all care about here.”

There was also some discussion about copyrights and training data, with Altman revealing that OpenAI decided “not to do music… partly because of exactly these questions of where you draw the line.”

‘This Is Going to Happen’

So what happens when we finally attain that holy grail of AGI — a machine with capabilities matching or surpassing human intelligence? “I think a lot of the world is, understandably, very afraid of AGI,” Altman acknowledged. “Or very afraid of even current AI, and very excited about it — and even more afraid, and even more excited about where it’s going.”

“And we wrestle with that, but I think it is unavoidable that this is going to happen.”

He seemed aware of the potential for a very real disruption, agreeing that “a lot of stuff is going to change. And change is pretty uncomfortable for people. So there’s a lot of pieces that we’ve got to get right.”

Still, in the end, Altman also believes it will all prove “tremendously beneficial. But we do have to navigate how to get there in a reasonable way.” At one point Altman even stressed that OpenAI’s mission “is to build toward AGI — and to figure out how to broadly distribute its benefits.”

And before the interview ended, Altman offered a sincere statement of his enthusiasm for the long road that lies ahead.

“I really care about AGI — and think this is the most interesting work in the world.”
