Sam Altman, co-founder and CEO of OpenAI, is doing a European tour at the moment, and I was one of the attendees of his London talk at the Londoner, a swanky new London hotel, organised by the Oxford Guild Business Society. The format was a Q&A moderated by Abbas Kazmi, chairman of the Guild, followed by questions from the audience.
I thought it would be interesting to share a few nuggets with readers of VC Cafe, given the huge impact OpenAI is having on the world, and as part of the general debate about AI regulation, AGI, etc.
- Sam Altman got involved with AI because it was “the obvious next thing” that was going to impact society. He only moved to a full-time role at OpenAI in 2019.
- Back in 2018, they weren’t thinking about LLMs. He’s now convinced that LLMs are part of the solution for AGI, and sees it as a benefit that we can interact with AI using natural language, both because it’s more usable that way and because it makes it easier to understand how the machine reasons.
- He sees AI as a tool for reducing bias over time. Humans are hugely biased and the current tools also have bias because they were trained on data created by humans. But in the future he sees AI models becoming less biased than people.
- Sam believes that when users ask a model for pictures inspired by a certain artist, the output should be considered derivative work and the artist should be paid something. He’s not sure how much or how it would work, but he sees it as a question of 1) law and 2) morals.
- The best thing students can do to adjust to an AI powered workforce is to ‘lean in’ on the technology. Get informed and become proficient in using the tools.
- OpenAI is working on multi-modal input, i.e. you’ll be able to speak to ChatGPT, and through a partnership with BeMyEyes, for example, results can be read out loud for visually impaired users. He sees it as a step forward for the tool’s accessibility and inclusiveness.
- He believes the arc of technology bends toward a higher standard of living, and that AI tools bring ‘intelligence’ to the masses, which will help reduce inequality.
- On regulation, Sam believes there should be an international body that regulates AI. One specific scenario that should worry us as a society is AGI arriving faster than we expect, with society unprepared. He said the EU AI Act in its current form may stifle innovation, and that in the case of AGI we need to be careful that bad actors don’t gain the power to create new pathogens that could kill humanity. He added that any company the size of OpenAI should be able to deal with regulatory requirements.
- The impact on jobs: today AI is good at task automation but not at replacing full jobs. He believes the expectations of workers in a job will rise as we become more productive, and that this may have positive consequences, e.g. doctors being able to spend more time with patients as parts of their jobs get automated. In the long term, many current jobs will go away. That has been the case throughout history as new technology was introduced; the difference is that now it’s happening very fast.
- The most impactful use of OpenAI’s technology today, in Sam’s view, is the automation of coding; he thinks coding automation has had the biggest impact on productivity overall. He was also inspired by people who engage with ChatGPT to get a second opinion on medical advice, and pointed out that some people run their entire business on GPT-4, from copywriting to ad creation and more.
- He’s most excited about AGI and its potential to make novel scientific contributions. He believes that super intelligence will happen, and sooner than people think. He doesn’t think it will be one giant model that controls everything. Just as society is made up of a few billion people, each with some level of intelligence, who together form its scaffolding, AGI will be billions of agents collaborating. There will be better or different approaches than LLMs, but the road to AGI passes through LLMs.
- He believes we are already seeing people develop feelings or attachment to AI as it can alleviate loneliness and be a great companion (basically the movie “HER”), but we should be very careful.
- He was asked why OpenAI is not open anymore. He said they will continue to share information and try to balance their new structure with developing the whole space. He also said they will release some open-source models and that he’s excited about the progress of the open-source community on LLMs (though he hopes to keep an edge).
- On OpenAI’s structure, he said he wants a small number of offices with a high density of people in them. No plans for a Berlin office anytime soon.
- Finally, he said that OpenAI employs a number of philosophers, who think deeply about the impact of technology on society as a whole.
Thanks for reading. I recommend a few of my earlier pieces on generative AI:
- Generative AI: incumbents vs. upstarts
- Benchmarking LLMs performance
- AutoGPT is generative AI goalseeking
- The ever growing Israeli generative AI landscape
- Use cases for generative AI in gaming
Excited to continue investing in early-stage founders building the future of generative AI with Remagine Ventures.