Author: Thom Holwerda
Bing AI did a great job of creating media hype, but its product is no better than Google's Bard — at least as far as we can tell from the limited information we have about both. I am shocked that the Bing team created this pre-recorded demo filled with inaccurate information, and confidently presented it to the world as if it were good. I am even more shocked that this trick worked, and everyone jumped on the Bing AI hype train without doing an ounce of due diligence. Bing AI is incapable of extracting accurate numbers from a document, and confidently makes up information even when it claims to have sources. It is definitely not ready for launch, and should not be used by anyone who wants an accurate model of reality.

Tools like ChatGPT are fun novelties, and there's definitely interesting technology underpinning them, but they are so clearly not very good at what they're supposed to be good at. It is entirely irresponsible of Microsoft, OpenAI, and Google to throw these alpha versions out there where the Facebook boomers can find them.

Have they learned nothing from social media and its deeply corrupting influence on the general population's ability to separate truth from fiction? And now we have "artificial intelligences" telling these very same gullible people flat-out lies as truth, presented in a way that gives these lies even more of a veneer of reliability and trustworthiness than a tweet or Facebook post ever did?

These tools are going to lead to a brand new wave of misinformation and lies, and society is going to pay the price. Again.