This week, the FBI revealed that two suspects in the bombing of a California fertility clinic allegedly used artificial intelligence to find bomb-making instructions, though the report did not identify the specific AI technology involved.
The incident underscores a pressing need to improve AI safety. The current landscape has been likened to the “wild west,” with companies racing aggressively to ship cutting-edge AI systems, and that intense competition often leads to lapses in safety measures, whether by accident or by design.
Shortly after the FBI’s report, Canadian computer scientist and AI pioneer Yoshua Bengio launched a nonprofit aimed at developing a new kind of AI that prioritizes safety and reduces social risks. His organization, LawZero, is focused on “Scientist AI,” a model intended to be transparent and built around safety-by-design principles. It is meant to assess and articulate its confidence in its responses and to explain its reasoning, addressing a common shortfall of today’s AI systems, which often prioritize speed over clarity.
Bengio hopes that “Scientist AI” can act as a safeguard against dangerous AI, potentially serving as a check on unreliable systems. Human oversight alone is difficult at scale: current AI systems such as ChatGPT handle over a billion requests daily. Importantly, the model would incorporate a “world model” to improve understanding and explainability, echoing human cognition. Despite hurdles such as limited funding and reliance on data from large technology corporations, there is hope that this initiative could spark a broader movement toward safer AI.
The ainewsarticles.com article you just read is a brief synopsis; the original article can be found here: Read the Full Article…