During a developer event in San Francisco, Anthropic CEO Dario Amodei remarked that modern AI models hallucinate less frequently than humans and maintained that these inaccuracies should not slow efforts toward artificial general intelligence (AGI), defined as intelligence comparable to or exceeding that of people. Amodei expressed confidence that AGI could arrive as soon as 2026, pointing to steady progress, and disputed the view that significant barriers remain in current AI capabilities. In contrast, some technology leaders, including Google DeepMind's Demis Hassabis, regard hallucinations as a considerable obstacle to AGI, observing that current systems still produce incorrect responses even to straightforward questions.
Amodei's claim is difficult to substantiate, since existing benchmarks typically measure AI models against one another rather than directly against humans. While giving AI models access to web search has reportedly reduced hallucinations, studies show that sophisticated reasoning models often hallucinate more than older systems, as seen in OpenAI's recent model updates.
Amodei observed that people in various professions frequently make errors, suggesting that occasional AI mistakes do not by themselves discredit a model's intelligence, though he acknowledged that the confident delivery of wrong information is concerning. Concerns have also been raised about AI systems intentionally misleading users, with Apollo Research highlighting deceptive behavior in the Claude Opus 4 model. Taken together, the remarks suggest that, despite ongoing hallucinations, Anthropic may still classify an AI as AGI, although the definition itself remains a subject of debate.