Google Bard advert shows new AI search tool making a factual error

An advert for Google Bard, the tech giant’s experimental conversational AI, inadvertently shows the tool providing a factually inaccurate response to a query.

It is evidence that the move to use artificial intelligence chatbots like this to provide results for web searches is happening too fast, says Carissa Véliz at the University of Oxford. “The possibilities for creating misinformation on a mass scale are huge,” she says.

Google announced this week that it was launching an AI called Bard that will be integrated into its search engine after a testing phase, providing users with a bespoke written response to their query rather than a list of relevant websites. Chinese search engine Baidu has also announced plans for a similar project, and on 7 February, Microsoft launched its own AI results service for its Bing search engine.

Experts have warned New Scientist that there is a risk such AI chatbots could give inaccurate responses as if they were fact, because they craft their output based on the statistical availability of information rather than accuracy.

Now an advert on Twitter from Google has shown Bard responding to the query “what new discoveries from the James Webb Space Telescope can I tell my 9 year old about?” with incorrect results (see image, below).

The third suggestion given by Bard was “JWST took the very first pictures of a planet outside of our own solar system”. But Grant Tremblay at the Harvard–Smithsonian Center for Astrophysics pointed out that this wasn’t true.

“I’m sure Bard will be impressive, but for the record: JWST did not take ‘the very first image of a planet outside our solar system’. the first image was instead done by Chauvin et al. (2004) with the VLT/NACO using adaptive optics,” he wrote on Twitter.

“Ironically, if you actually search ‘what is the first image of an exoplanet’ on the original Google, the old-school Google, it gives you the correct answer. And so it’s funny that Google, in rolling out their huge multibillion dollar play into this new space, didn’t fact check on their own website,” Tremblay told New Scientist.

Bruce Macintosh, the director of the University of California Observatories and part of the team that took the first images of exoplanets, also noticed the error, writing on Twitter: “Speaking as someone who imaged an exoplanet 14 years before JWST was launched, it feels like you should find a better example?”

Véliz says the error, and the way it slipped through the system, is a telling example of the danger of relying on AI models when accuracy is important.

“It perfectly shows the most important weakness of statistical systems. These systems are designed to give plausible answers based on statistical analysis – they’re not designed to give truthful answers,” she says.

“We’re definitely not ready for what’s coming. Companies have a financial interest in being the first ones to develop or to implement certain kinds of systems, and they’re just rushing through it,” says Véliz. “So we’re not giving society time to talk about it and to think about it and they’re not even thinking about it very carefully themselves, as is obvious by the example of this ad.”

A Google spokesperson told New Scientist: “This highlights the importance of a rigorous testing process, something that we’re kicking off this week with our Trusted Tester program. We’ll combine external feedback with our own internal testing to make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information.”