Google finally announced its ChatGPT rival Bard, powered by its large language model LaMDA. But unfortunately, Bard got the facts wrong. When asked “What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?”, one of its answers claimed “JWST took the very first picture of a planet outside of our own solar system,” which is incorrect: per NASA, the first image of an exoplanet was taken in 2004.
This is a pretty common problem for ChatGPT too. ChatGPT answers my questions with so much confidence that I feel inclined to trust it. The problem is that ChatGPT is wrong a lot. I suppose, like humans, these chatbots are pretty susceptible to misinformation.

I have been using ChatGPT for a lot of technical work, and very often its answers are factually incorrect. For example, I asked ChatGPT a Redshift question as shown below. The “DESCRIBE my_table” answer is incorrect. My general conclusion is that ChatGPT is great at bulls**ting, and a lot of the time people don’t care whether what it says is factually correct. They only care that ChatGPT sounds eloquent and knowledgeable. But if I need a factually correct answer, I have to separately come up with ways to verify its claims. It still makes me a lot more productive as a programmer, but I think I can keep my job for a little while. I do hope these chatbots can do better so I can fully retire from being a programmer.
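For context on why the suggested answer fails: `DESCRIBE` is MySQL syntax, and Amazon Redshift (which is PostgreSQL-based) does not support it. A sketch of how one would actually inspect a table's schema in Redshift is below; `my_schema` and `my_table` are placeholder names, not identifiers from the original question.

```sql
-- DESCRIBE my_table;  -- MySQL syntax; Redshift does not support DESCRIBE.

-- Option 1: the PG_TABLE_DEF system view. Note it only lists tables
-- whose schema is on the current search_path.
SET search_path TO my_schema;  -- placeholder schema name
SELECT "column", type, encoding, distkey, sortkey
FROM pg_table_def
WHERE tablename = 'my_table';

-- Option 2: the SVV_COLUMNS system view, which works across schemas.
SELECT column_name, data_type, is_nullable
FROM svv_columns
WHERE table_schema = 'my_schema'
  AND table_name = 'my_table';
```

Either query returns the column names and types that `DESCRIBE` would have shown on MySQL, which is an easy way to verify this particular ChatGPT answer.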