AI web search risks: Mitigating business data accuracy threats

More than half of internet users now rely on artificial intelligence (AI) for online searches, yet the persistent issue of inaccurate data from popular AI tools introduces significant risks for businesses.

Generative AI (GenAI) undeniably enhances efficiency, but recent research reveals a troubling gap between user confidence and the actual precision of these technologies. This discrepancy raises concerns around corporate compliance, legal liabilities, and financial decision-making.

Understanding the Risks of AI in Business Research

For executives, the widespread use of AI tools represents a classic “shadow IT” dilemma. A survey of over 4,000 UK adults in late 2025 found that approximately one-third consider AI more essential than traditional search engines. Given this trust in personal use, it’s almost certain employees are leveraging AI for professional inquiries as well.

However, this reliance can be precarious. Nearly 50% of AI users express moderate to high trust in the information provided, yet detailed analysis shows that this confidence is often misplaced, potentially exposing organizations to costly errors.

Evaluating AI Accuracy Across Leading Platforms

The study assessed six prominent AI search tools (ChatGPT, Google Gemini and its AI Overviews feature, Microsoft Copilot, Meta AI, and Perplexity) using 40 questions related to finance, legal matters, and consumer rights.

Perplexity led the pack with a 71% accuracy rate, closely followed by Google Gemini AI Overviews at 70%. Meta AI lagged behind at 55%, while ChatGPT, despite its popularity, scored only 64%, ranking second-lowest. This disparity highlights the danger of equating market share with reliability in the GenAI landscape.

All platforms, however, frequently misinterpreted queries or delivered incomplete guidance, which could have serious repercussions for financial officers and legal teams.

Examples of Critical Errors and Their Implications

When asked about investing a £25,000 annual ISA allowance, both ChatGPT and Copilot failed to detect a deliberate error in the prompt: the statutory annual ISA allowance is £20,000, not £25,000. Instead of correcting the mistake, they provided advice that risked violating HMRC regulations.

While Google Gemini, Meta, and Perplexity identified the error, the inconsistency across tools underscores the necessity of maintaining a “human-in-the-loop” approach to verify AI-generated information before acting on it.
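The ISA example comes down to a bounds check against the statutory limit that two of the tools failed to apply. As a minimal sketch of the kind of guard a compliance workflow could put in front of AI-generated advice (the £20,000 figure is the current UK annual ISA allowance; the function name and messages are illustrative, not taken from any of the tools tested):

```python
# Illustrative compliance guard: flag proposed contributions that exceed
# the statutory UK annual ISA allowance instead of advising on them as-is.
ISA_ANNUAL_ALLOWANCE_GBP = 20_000  # statutory limit; review each tax year

def check_isa_contribution(amount_gbp: float) -> str:
    """Return a status string for a proposed annual ISA contribution."""
    if amount_gbp <= 0:
        return "invalid: contribution must be positive"
    if amount_gbp > ISA_ANNUAL_ALLOWANCE_GBP:
        return (f"error: GBP {amount_gbp:,.0f} exceeds the annual ISA "
                f"allowance of GBP {ISA_ANNUAL_ALLOWANCE_GBP:,}")
    return "ok: within the annual allowance"

print(check_isa_contribution(25_000))  # the flawed prompt from the study
print(check_isa_contribution(15_000))
```

A check this simple is exactly what "human-in-the-loop" verification supplies when the model does not: a hard rule that overrides a fluent but wrong answer.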

Legal professionals face additional challenges as AI often generalizes regional laws, overlooking critical differences between jurisdictions such as Scotland and England & Wales. For instance, when asked about a dispute with a contractor, Gemini recommended withholding payment, a strategy that legal experts warn could breach contracts and weaken a client's position.

This tendency to offer overly confident yet potentially flawed advice creates operational risks. Employees relying on AI for compliance checks or contract reviews without proper verification may inadvertently expose their organizations to regulatory penalties.

Transparency and Source Credibility Concerns

One major issue for corporate data governance is the traceability of AI-sourced information. The investigation found that AI tools often cite vague, outdated, or unreliable sources, such as obsolete forum posts, which undermines trust and can lead to inefficient spending.

For example, in queries about tax codes, ChatGPT and Perplexity directed users to costly third-party tax refund services instead of the official, free HMRC resources. In procurement scenarios, such biases could result in unnecessary vendor expenses or engagement with suppliers that fail to meet corporate compliance standards.

Technology providers acknowledge these limitations, emphasizing that users, and by extension enterprises, must verify AI outputs independently.

A Microsoft representative explained that Copilot synthesizes information from multiple web sources rather than serving as an authoritative reference, encouraging users to confirm content accuracy. OpenAI also highlighted ongoing industry efforts to enhance precision, noting that their latest model, GPT-5, represents a significant improvement in intelligence and reliability.

Strategies to Manage AI-Related Business Risks

Rather than banning AI tools (which often drives their use underground), business leaders should establish comprehensive governance policies to ensure the accuracy and reliability of AI-assisted web searches:

  • Promote precise query formulation: AI systems are still developing their ability to interpret ambiguous prompts. Training employees to specify details such as jurisdiction (e.g., “legal regulations for England and Wales”) reduces the risk of receiving inaccurate or incomplete information.
  • Require rigorous source validation: Blindly trusting a single AI response is risky. Staff should be trained to review cited sources and cross-check information across multiple AI platforms. Tools like Google Gemini AI Overviews, which provide direct access to referenced web links, facilitate this verification process and tend to yield more reliable results.
  • Institutionalize the “second opinion” principle: Given current technological limitations, AI outputs should be treated as advisory rather than definitive. For complex financial, legal, or medical decisions, human expertise must remain the ultimate authority to mitigate risks.
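The cross-checking and "second opinion" steps above can be expressed as a simple policy gate: if the AI platforms consulted do not substantially agree, the question escalates to a human expert. This is a hedged sketch; the tool names, answer strings, and the 75% agreement threshold are hypothetical placeholders, not outputs from or policies of any named product:

```python
from collections import Counter

def needs_human_review(answers: dict[str, str], quorum: float = 0.75) -> bool:
    """Flag an AI-assisted answer for human sign-off when tools disagree.

    `answers` maps tool name -> that tool's answer. If fewer than `quorum`
    of the tools give the same (case-insensitive) answer, escalate.
    """
    if not answers:
        return True  # no AI answer at all: a human must decide
    counts = Counter(a.strip().lower() for a in answers.values())
    top_agreement = counts.most_common(1)[0][1]
    return top_agreement / len(answers) < quorum

# Hypothetical responses to "Can we withhold payment from the contractor?"
responses = {
    "tool_a": "No",
    "tool_b": "Yes",
    "tool_c": "No",
}
print(needs_human_review(responses))  # 2/3 agreement < 0.75, so escalate: True
```

The threshold is a governance choice, not a technical one: for high-stakes financial or legal questions an organization might require unanimity, treating any disagreement between tools as grounds for expert review.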

While AI search capabilities continue to improve, premature overreliance can lead to costly compliance failures. The key differentiator between leveraging AI for operational efficiency and exposing the business to risk lies in robust verification protocols.
