The AI Safety Index graded six leading AI companies on their risk assessment and safety protocols, with Anthropic earning the highest grade, a C. The other companies, Google DeepMind, Meta, OpenAI, xAI, and Zhipu AI, received lower grades, with Meta failing outright. The index is intended to push companies to improve by providing incentives similar to the U.S. News & World Report rankings for universities, and also to support researchers on the companies’ safety teams. The Future of Life Institute, which released the report, has concentrated on AI in recent years and is dedicated to helping humanity avoid bad outcomes from powerful technologies. Although the companies did not slow down after the institute’s 2023 “pause letter,” this new report provides further insight into their safety practices.
The AI Safety Index evaluated the companies across six categories, drawing on publicly available information, with grades assigned by seven independent reviewers who have a notable interest in AI safety. Overall, the reviewers were not impressed, noting that the current approach to AI safety is not yet effective and could even be a dead end. Anthropic scored best overall and received the only B-, awarded for its work on current harms. The report highlighted that while all of the companies have declared their intention to build artificial general intelligence (AGI), none has put forward an adequate strategy to ensure alignment with human values. The report concludes that regulatory oversight of AI products is needed before they reach the market, because the current competitive environment discourages companies from slowing down for safety testing.
The ainewsarticles.com article you just read is a brief synopsis; the original article can be found here: Read the Full Article…