Building Successful AI Search Tools Requires Establishing Trust: How It Remains Elusive

Jeffrey

Key Takeaways

  • Generative AI search engines can offer helpful answers, but their unpredictability makes trust a significant issue.
  • Human oversight of search results remains crucial to ensuring trust and accuracy.
  • Even with advanced AI, trust in the technology matters more than performance numbers.

Generative AI, at least at the cutting edge, is effective at finding, digesting, and presenting information in a useful way for those looking for answers. However, can we really trust this technology as it stands? Could we, in principle, ever trust it?

Generative AI Search Engines Are Giving Worrying Results

Everyone seems to be adding AI to their search products. Some, such as Perplexity AI, have built their entire system around generative AI and Large Language Models (LLMs). Established search outfits, notably Bing and Google, have added these technologies to their existing search engines.

For example, Google now offers AI summaries, where an LLM reads web pages so you don’t have to and gives you a direct answer to your question. Setting aside how this cuts the websites that actually produce the content out of the loop, when it works it’s amazingly useful. However, just as AI chatbots are prone to “hallucinations” and logic and reasoning errors, AI search can fall prey to the same issues.

This has led to the Google search AI suggesting things like adding glue to pizza and eating rocks, and claiming that there are cats on the moon. Google, for its part, is rapidly moving to prune these errors, but that doesn’t address the fundamental problem: trust.

Someone Has to Be Responsible for Information in Search Results

Believe it or not, I am a real human being sitting at this keyboard, typing the words you’re about to read. That means if you read something here that doesn’t make sense, is false, or could cause harm, there is someone who can take responsibility for it. Likewise, if I use information from, say, the Associated Press, and that material contains serious factual errors, I have recourse, because somewhere down the line a human being made a mistake or a process failed.

We know how the information was produced, and we know who was responsible. There are consequences for negligence, and there are systems in place (such as human editors and editorial policies) meant to catch problems before they’re published. Of course, none of these systems are foolproof, but someone is always responsible.

So far, generative AI systems are prone to behaving in ways that even their creators can’t predict. These billion- or even trillion-parameter models are black boxes; no one knows precisely what’s going on inside. That opacity is part of why they hold so much promise and power, but it also makes building a safe and trustworthy product hard.

Even if something like Google’s Gemini were 99% accurate and trustworthy, we would still have a problem. Of course, we humans are rarely that reliable ourselves, but we also rarely operate at the speed and scale needed for our errors to cause massive damage. Consider that Google handles around 8.5 billion searches per day. If only 1% of those searches returned potentially harmful information or advice, that would still leave 85 million people being fed something problematic. Every day!
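
To make the scale concrete, here’s a quick back-of-the-envelope sketch in Python. Both inputs are assumptions: the search volume is the rough public estimate cited above, and the 1% error rate is purely illustrative.

```python
# Back-of-the-envelope: daily exposure to problematic AI answers.
# Both figures are assumptions, not measurements.
DAILY_SEARCHES = 8_500_000_000  # ~8.5 billion Google searches/day (estimate cited above)
ERROR_RATE = 0.01               # hypothetical: 1% of answers are harmful or misleading

problematic_per_day = DAILY_SEARCHES * ERROR_RATE
print(f"{problematic_per_day:,.0f} problematic results per day")
# -> 85,000,000 problematic results per day
```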

In other words, the standard required for systems like these to offer advice and information without any human in the loop is so high that meeting it may, in fact, be impossible.

If People Don’t Trust AI Search Results, Tech Doesn’t Matter

We can talk about the performance numbers of these AI systems all day, but it won’t make any difference if the systems aren’t perceived as trustworthy. Many promising technologies have been mothballed thanks to a lack of public trust. Nuclear accidents have made people wary of nuclear energy. The Hindenburg tragedy put the last nail in the coffin for airships, even though switching from hydrogen to helium would have made a repeat all but impossible. Once your technology is perceived as untrustworthy, it can be nearly impossible to correct that perception, even if you actually fix the underlying issues.

Keeping Humans in the Loop Is Inevitable No Matter How Smart Machines Get

No matter how good AI technologies get, they can never be good enough to justify removing human oversight completely. Not because humans will necessarily do a better job, but because I believe humans will never completely trust them. We tend to focus on the exceptions rather than the rule, which is why people play the lottery!

Ultimately, the creators of systems like these can’t wash their hands of responsibility when those systems cause harm. The buck has to stop somewhere, and humans will have to be responsible in some way. Individual websites can be held accountable for their content, and until recently, search engines did a good job of putting the most trustworthy sites at the top of the list. It’s a virtuous cycle: we try to provide the highest-quality, most trustworthy content we can, search engines reward us with traffic, that traffic funds more content, and so on.

Now, AI summaries of that content risk breaking the cycle that produces quality, trustworthy content, while also potentially introducing or repeating dangerous or misleading information. Once finding information on the web effectively becomes a game of Russian roulette, don’t expect people to knock down your door to be willing players.
