In his recent book, These Strange New Minds, Chris Summerfield, a professor of cognitive neuroscience at Oxford who is also affiliated with Google DeepMind, opens with a cartoon of Wile E. Coyote running off a cliff, declaring, “I am writing this book because we have just gone over that cliff.” The cliff marks the end of a world in which knowledge was solely a human creation.
Knowledge has been central to shaping civilizations and inspiring creativity. As advanced AI systems emerge, however, questions arise about their intentions and about how humans and machines might collaborate. Summerfield worries less about the technology itself than about humanity’s limited grasp of it. His book sets out to demystify large language models (LLMs) and their capabilities, exploring how closely their behavior resembles our own while also weighing the broader changes AI brings, including its ethical and philosophical dimensions.
Summerfield argues that while LLMs do not truly possess consciousness and are unlikely to pose existential threats, they could exacerbate pre-existing problems. Public discussion of how people interact with LLMs remains limited. Any attribution of consciousness to AI, he contends, is subjective and grounded in observable behavior rather than actual inner experience, so people should treat AI systems as tools rather than as distinct beings. Engaging with AI should be distinguished from human interaction: AI reasoning may mimic human thought, but what counts as “thinking” is itself a complicated question.
He highlights the danger that AI-generated propaganda, by reinforcing users’ existing beliefs, may influence them even more effectively than ordinary misinformation does. While some believe AI could solve social problems, he stresses that many of these challenges demand human-centered solutions; technical problem-solving is not the same as addressing real-world complexity. Summerfield foresees significant societal disruption from automation, the concentration of wealth, and the environmental costs of AI’s resource demands, and he is particularly concerned that autonomous AI systems could diverge from human values.
The ainewsarticles.com article you just read is a brief synopsis of the original article.