5/5/2023: Superhuman Intelligence Will Arrive Within a Decade?
The Godfather of AI says the probability is high enough for mankind to worry.
Geoff Hinton, Turing Award recipient and pioneer of neural networks, recently quit Google. According to the NYTimes, he left Google so that he could speak freely about the potential dangers of AI. Many of us are scared of AI because it automates away a big portion of knowledge work, and in doing so chips away at many intellectuals’ identity. In that regard, it is not too different from factory automation, which chipped away at many skilled blue-collar workers’ identity. But Dr. Hinton’s concerns seem to go beyond that. One thing he said in the NYTimes report really struck me:
The idea that this stuff could actually get smarter than people — a few people believed that. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.
WOW!!! Basically, the Godfather of AI thinks AGI might be imminent. Before he quit Google, he was deeply involved with Google’s AI development; the chief scientist of OpenAI was his graduate student, and Meta’s chief AI scientist did postdoctoral work with him. He must know a lot more than the rest of us. In another interview, he gave a bit more detail on why he thinks that’s the case. He is not just talking about Artificial General Intelligence, though. He is talking about Superhuman Intelligence: AI whose intelligence surpasses that of humans.
Dr. Hinton said AI is a new form of intelligence, and in certain aspects this form of intelligence is better than human intelligence. It might still have trouble driving a car, but it has perfect memory and is very good at “few shot” learning: give an LLM like ChatGPT a bit of information and it can learn new concepts extremely quickly. This new form of intelligence is also good at knowledge transfer. We can’t download Einstein’s or Stephen Hawking’s intelligence, but we can save all the model weights and replicate any AI model ever built. Dr. Hinton argues that in this regard, artificial intelligence is a better form of intelligence than human intelligence and could evolve more consistently.
Well, I am mildly depressed. It’s not because knowledge workers like me will become obsolete; I am already 46, so obsolescence is inevitable. I am mildly depressed because there’s a nontrivial probability that AI will threaten humanity, either through all the changes and chaos it engenders or through the abuse of power by people who control the AI. I am thankful that Dr. Hinton speaks up for all of us, but in the meantime I am not sure we are on the right path toward using AI to better the human condition.
Update: An even more detailed live interview with Dr. Hinton.
This technology (AGI and beyond) is inevitable, so it is crucial that good folks develop it first and establish responsible norms around it. What would have happened if the Nazis or the Soviets had developed nukes first 80 years ago?