2 Comments

Am I correct in thinking, as a bounding constraint, that GPT-3/4 works best in low-precision, high-recall situations, and that the biggest breakthrough is that it fully passes the Turing test?

I've asked it technical questions in areas where I have expert-level knowledge, and it gets them roughly 75% correct.

When I ask it technical questions in areas where I'm not expert-level informed, I think it's probably correct, but given the point above I suspect I only think so because I'm not informed enough to know better.

I definitely agree there is a lot of productivity to be gained. Junior devs effectively become senior devs/code reviewers, customer service agents move up from tier 1/2 to tier 3 technical support, there's no need to write the first draft of research papers or any type of policy anymore, etc.

I'm personally not a believer in any form of AI intelligence. It ultimately applies advanced calculus/algebra to a large dataset to make decisions. Given this, I find the articles about it "escaping" to be rather click-baity.

Am I opposed to government regulation? No. Will it help with labour constraints and maintaining workforce productivity amid an aging/declining labour force? Yes. Are there security risks under the bounding constraints defined above? Absolutely. Will it go rogue on its own and start a war? No.

author

It's not just GPT-3/4. If you look at the difference between now and 5 years ago, AI's precision has greatly improved across the board, from text to image, audio, and video/3D. 5 years ago chatbots were still fairly bad, but nowadays you sometimes can't tell whether you're talking to a human or a bot. If this pace continues, who knows what will happen in 5 years? Innovations in modeling, along with more powerful compute infrastructure, could take us into territories we don't yet know exist. I'm not saying that AI dominance will certainly happen, but it would be foolish for humans to disregard the possibility.
