4/17/2023: Agriculture, Fire, Electricity and AI
AI will change every industry and every company
Have you seen the AI segment on 60 Minutes this past weekend (linked above)? Google's CEO and his top execs in the AI divisions were interviewed. It was entertaining and thought-provoking. The interviewer appeared so stunned by the AI's capabilities that I couldn't tell whether it was acting or genuine. He is a grandpa after all, so I suppose he was truly shocked. He made a big deal about how Bard seems to be sentient and can write moving stories; the Google AI scientist tried to debunk this by explaining that the AI simply mimics human behavior. The interviewer was also really impressed that a robot can teach itself to play games better and better. Well, that is what we call reinforcement learning. RL works great for games that have well-defined rules, but I am not sure self-teaching can work so well, at least for now, in real-life games like politics. I feel the 60 Minutes episode exaggerates things quite a bit, but Google's top brass did make a few interesting points, summarized below:
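To make the "self-teaching" point concrete, here is a minimal sketch of tabular Q-learning, the classic RL setup, on a toy game with well-defined rules. Everything here (the board size, the constants, the `step` function) is an illustrative assumption, not taken from any particular library or from the 60 Minutes segment; the point is only that trial and error plus a simple update rule is enough for the agent to improve at a rule-bound game.

```python
import random

# Toy "game" with well-defined rules: a 1-D board of 6 cells. The agent
# starts at cell 0 and wins (+1 reward) by reaching cell 5.
# Actions: 0 = step left, 1 = step right. All names here are illustrative.
N_STATES, GOAL = 6, 5
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action], all zeros at first

def step(state, action):
    """Apply the game's rules: move, clamp to the board, reward at the goal."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def greedy(s):
    """Pick the currently best-looking action, breaking ties randomly."""
    if Q[s][0] == Q[s][1]:
        return random.randrange(2)
    return 0 if Q[s][0] > Q[s][1] else 1

random.seed(0)
for _ in range(200):                     # self-play: improve by trial and error
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit what was learned, sometimes explore
        a = random.randrange(2) if random.random() < EPSILON else greedy(s)
        s2, r, done = step(s, a)
        # Q-learning update: nudge Q[s][a] toward r + gamma * max_a' Q[s'][a']
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy steps right toward the goal from every cell.
policy = [greedy(s) for s in range(N_STATES - 1)]
print(policy)
```

This is exactly why RL shines on games: the `step` function encodes the complete rules and a clear reward. "Games" like politics have neither, which is why the same recipe doesn't transfer cleanly.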
AI will change every company and every industry. Google's CEO said AI is like agriculture, fire and electricity in that it will profoundly change humanity. No human can read and remember every book and article ever written, but AI can.
Knowledge workers will be replaced. At present, knowledge workers can be greatly empowered by using AI as a powerful assistant, but in the future humans probably won't even need to be involved for many tasks to get done. This is going to change people's identities in profound ways.
AI hallucinations: Current LLMs make things up confidently. In the 60 Minutes program, Bard recommended five books that don't exist. I suppose this is a technical issue that could be addressed sooner or later.
Disinformation: AI can mass-produce deepfake videos and misleading articles at a scale we haven't seen before. Fighting disinformation is going to be difficult. I suppose we will need to fight AI disinformation with AI down the road.
Google's CEO said AI has capabilities they don't fully understand!! It's a black box even to the researchers. He gave an example of AI being able to translate Bengali with very little data fed into the model. More about this here.
DeepMind's CEO thinks that humans can adapt to AI the way we adapted to the internet and mobile phones. AI is a monumental change to humanity and its capabilities are unsettling, but he believes that AI won't diminish humanity; instead, it will elevate humanity in an identity-shifting way. I feel the impact of AI is going to be greater than that of the internet or mobile phones, as many knowledge workers will feel lost when their jobs are displaced by AI. How knowledge workers reclaim their identity will become a big philosophical challenge down the road.
Google's CEO at the end asked for regulations so AI can be deployed safely. He said Google deliberately rolled out its AI slowly to fix potential safety issues. He also thinks that in addition to engineers, social scientists, philosophers and all other stakeholders should give input on how AI can be regulated and aligned to consensus human values. I think this is going to be really hard to do. We might even need AI to guide us through the political process, as there are uncertainties about the level of intelligence and self-awareness AI can really achieve, and differing opinions on what aligning with human values really means.
Overall, the rapid pace of AI development definitely induces a lot of anxiety among intellectuals. I was a machine learning engineer and have tried hard to keep up with the literature even before the ChatGPT frenzy. At the individual level, I feel quite uncertain about my future and my children's future. Working hard and being productive simply isn't going to cut it anymore, because AI is going to be 1000000X more productive than any human. At the species level, I believe AI will create tremendous wealth that can benefit us all, but we have to make sure that the wealth generated by AI is distributed equitably. Optimistically, the abundance created by AI, distributed equitably, could enable humans to find new identities that transcend the existing scarcity-based power structure and lead us to a new age of enlightenment. But humans can also totally screw this up: if only a small number of elites, or AI itself, ends up using AI to hoard resources and dictate how other people live, our world will become even more unequal and dystopian. The choices we humans make collectively in the coming decade will determine where we are heading, and I truly hope we are on the right track.
Am I correct in thinking, as a bounding constraint, that GPT-3/4 works best in high-recall, low-precision situations, and that the biggest breakthrough is that it fully passes the Turing test?
I've asked it technical questions in areas where I'm expert-level informed, and it gets them roughly 75% correct.
When I ask it technical questions in areas where I'm not expert-level informed, I think it's probably correct; but given the point above, I suspect I only think so because I'm not informed enough to know better.
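The high-recall / low-precision framing above can be made concrete with a little arithmetic. Treat each confident answer the model gives as a "positive": recall is high because it attempts nearly every question, while precision is only the fraction of those confident answers that are actually right. The numbers below are made up purely to match the "75% ish" estimate, not measured from any model.

```python
# Hypothetical numbers matching the estimates in the text.
total_questions = 20   # expert-level questions asked
attempted = 20         # the model confidently answers all of them (it rarely declines)
correct = 15           # answers an expert could verify as right

precision = correct / attempted          # fraction of confident answers that are right
recall = attempted / total_questions     # fraction of questions it attempts at all

# A non-expert sees 20 fluent answers and cannot tell which 5 are wrong,
# which is exactly why it *feels* correct outside one's area of expertise.
wrong_but_convincing = attempted - correct

print(f"precision={precision:.2f}, recall={recall:.2f}, "
      f"unverifiable wrong answers={wrong_but_convincing}")
```

The asymmetry is the point: recall stays near 1.0 regardless of topic, so the only signal that precision is 0.75 rather than 1.0 is domain expertise the reader may not have.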
I definitely agree there is a lot of productivity to be gained. All junior devs effectively become senior devs/code reviewers. Customer service agents move up from tiers 1/2 to tier 3 technical support. No need to write the first draft of a research paper or any type of policy anymore, etc.
I'm personally not a believer in any form of AI intelligence. It ultimately applies advanced calculus/algebra to a large dataset to make decisions. Given this, I find the articles about it "escaping" to be rather clickbaity.
Am I opposed to government regulation? no. Will it help with labour restraint and maintaining workforce productivity into aging/declining labour force? Yes. Is there security risks under bounding constraints defined above? Absolutely. Will it go rogue on its own and start a war? No.