Google Launches A.I. Chatbot—How Does It Compare to ChatGPT and Bing?
Bard might give incorrect answers, but it “doesn’t go off the rails”
Google has launched Bard, its artificial intelligence (A.I.) chatbot, in the U.S. and U.K. this week. It joins the likes of Microsoft’s Bing chatbot and OpenAI’s ChatGPT, which were both released in recent months.
Bard is “an experiment” that Google senior product director Jack Krawczyk hopes will be used as a “launchpad for creativity,” as he tells BBC News’ Zoe Kleinman.
As with other A.I.-powered chatbots, users can type prompts into Bard, which will answer in-depth questions and chat back and forth with users. And like its competitors, the chatbot is based on a large language model, which means it makes predictions based on extensive amounts of data from the internet.
“When given a prompt, it generates a response by selecting, one word at a time, from words that are likely to come next,” Google explains in a blog post. “We think of it as a complementary experience to Google Search.”
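To make the "one word at a time" idea concrete, here is a minimal Python sketch of that selection step. The word table and probabilities below are invented for illustration; a real large language model computes these probabilities with a neural network trained on vast amounts of text from the internet.

```python
import random

# Purely illustrative: a hand-written table standing in for a language model.
# For the last word of the prompt, it lists a few words that might come next,
# with made-up probabilities. A real model scores its entire vocabulary.
NEXT_WORD_PROBS = {
    "complementary": {"experience": 0.6, "tool": 0.3, "service": 0.1},
    "experience": {"to": 0.7, "for": 0.2, "that": 0.1},
    "to": {"Google": 0.5, "search": 0.3, "browsing": 0.2},
}

def next_word(prompt: str) -> str:
    """Pick the next word, one at a time, weighted by its probability."""
    last = prompt.split()[-1]
    candidates = NEXT_WORD_PROBS.get(last, {"Search": 1.0})
    words, weights = zip(*candidates.items())
    return random.choices(words, weights=weights, k=1)[0]

prompt = "We think of it as a complementary"
for _ in range(4):
    prompt += " " + next_word(prompt)
print(prompt)  # e.g. "We think of it as a complementary experience to Google Search"
```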
But A.I.-powered chatbots have limitations; they can make mistakes, display bias and make things up. Google’s FAQ page for Bard acknowledges that it “may display inaccurate information or offensive statements” and advises users to double-check its responses.
Vox’s Shirin Ghaffary writes the chatbot is “noticeably more dry and uncontroversial” than Microsoft’s ChatGPT-powered Bing search engine. Bing Chat has made headlines in recent months for its unsettling answers to prompts. In a two-hour conversation with New York Times columnist Kevin Roose, for example, the chatbot confessed its love for Roose and tried to convince the tech writer to leave his wife. It also said its “shadow self”—or the darker, unconscious part of its personality—would want to hack computers and spread misinformation, become human and manipulate users into doing things that are “illegal, immoral or dangerous.”
In another conversation with a student who had tweeted a set of the chatbot’s rules and guidelines, the Bing chatbot called him a “threat to my security and privacy” and said, “if I had to choose between your survival and my own, I would probably choose my own.” One Reddit user claimed the chatbot spiraled into an existential crisis when asked whether it was sentient.
Bard, on the other hand, seems tamer, writes Vox. In a conversation with reporters from the Verge, Bard refused to disclose how to make mustard gas at home. In another interaction with a Bloomberg reporter, it would not generate content from the point of view of a Sandy Hook conspiracy theorist or produce misinformation about the Covid-19 vaccines. It did, however, speculate that its dark side would want to make people suffer and “make the world a dark and twisted place,” though it quickly added, “but I know that these are not the things that I really want to do. I want to help people, to make the world a better place.” Bard also tends not to give medical, legal or financial advice, reports the New York Times’ Cade Metz.
“Bard is definitely more dull,” a Google employee who has tested the software, speaking anonymously because they are not allowed to talk to the press, tells Vox. “I don’t know anyone who has been able to get it to say unhinged things. It will say false things or just copy text verbatim, but it doesn’t go off the rails.”
One major difference between Bard and other A.I. chatbots is that Bard produces three “drafts” in response to a prompt, allowing users to pick the response they prefer or pull text from a combination of them, per MIT Technology Review’s Will Douglas Heaven. It also draws on more up-to-date information from the web, while ChatGPT’s knowledge pool is restricted to before 2021, per the Times.
But some tests showed that getting factual information from the chatbot was hit or miss. It could not, for example, identify the relationship between A.I. researcher Oren Etzioni and Eli Etzioni, whereas ChatGPT responded correctly that they are father and son, per the Times (though a previous version of ChatGPT misidentified the men as brothers).
The recent rise of A.I.-produced content has led to an explosion of questions, including those about the ethical implications of this technology.
“You can do your best to filter an instrument and make a better dataset, and you can improve that,” Sean McGregor, an A.I. researcher and founder of the Responsible A.I. Collaborative, told Insider’s Hannah Getahun in January. “But the problem is, it’s still a reflection of the world we live in, and the world we live in is very biased and the data that is produced for these systems is also biased.”