A couple of weeks ago, the Future of Life Institute published an open letter calling for AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." My first reaction was that the AI ship has sailed — no one can stop it. The letter also came across as authoritarian and patronizing: who gets to decide which organizations may develop LLMs and which may not? It was also unclear what could be done during the six-month pause to make AI development safe, and what the criteria would be for resuming AI development afterward.
AI is a very powerful technology. But humanity has had transformative inventions before — the electrical grid, cars, the internet. We didn't stop developing cars because people can die in car accidents or because horse carriages would be made obsolete. In the same vein, while we recognize there are real risks, we cannot deny the huge benefits of AI. Already, AI can significantly boost white-collar workers' productivity, make education more accessible, and help improve health care outcomes. With continued R&D, it could become even more powerful and beneficial to humanity. We should find a path to harness the power of AI instead of being afraid of it.
AI heavyweights Andrew Ng and Yann LeCun discussed the AI pause last Friday, as shown in the YouTube video above. Like me, they think the pause is a bad idea. Both believe AI R&D should continue because the benefits will be huge, but that we should work on regulating AI products to minimize the chance of misuse and damage. They also noted that while GPT-4 is impressive, it is still far from AGI; Yann doesn't think autoregressive LLMs can get us to AGI, so realistically AGI is still 30-50 years away. They consider the pause unimplementable, and if governments pass laws to stop AI development, the consequences would be devastating, given how much AI could advance humanity. They also expect a democratization of large LLMs: many companies and organizations beyond OpenAI will make large LLMs accessible, and more competition will help advance the field. The whole conversation was very informative — please watch the YouTube video above if you are concerned about AI being too powerful. Personally, I am not too worried about it (yet). People don't realize how much of human intelligence is not represented in language or images, and AI still needs to figure out how to pick that up.