New York assembly member Alex Bores, described as the first Democratic politician in the state's history with a technology background, aims to reintroduce some concepts from California's failed AI safety bill, SB 1047, in a new bill that would regulate the most advanced AI models. It is called the RAISE Act, which stands for "Responsible AI Safety and Education." The bill is currently an unpublished draft (though it has been seen by MIT Technology Review) and is subject to change. It would require AI companies to develop safety plans for the development and deployment of their models. It would also protect whistleblowers at AI companies, preventing retaliation against employees who share information about AI models that may cause "critical harm." The safety plans would be audited by a third party, and violations could lead to fines and legal action by the New York Attorney General.
While the bill aims to mitigate catastrophic risks from advanced AI models, some in the AI community argue that it should also address other harms posed by AI, such as bias, discrimination, and job displacement. The RAISE Act differentiates itself from the failed SB 1047 by omitting several of that bill's provisions: it would create no new government body or public cloud computing cluster, and it avoids both a "kill switch" requirement and a definition of "advanced persistent threat." The bill is geared toward catastrophic risks from frontier AI models, covering only models that exceed a certain threshold of computation used during training, typically measured in FLOPs (floating-point operations).
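To make the coverage criterion concrete, here is a minimal sketch of how a FLOP-based threshold might be checked. The RAISE Act draft's actual threshold is not stated in this synopsis; the `1e26` figure below is the one SB 1047 used and is borrowed purely for illustration, and the 6-FLOPs-per-parameter-per-token estimate is a common rule of thumb, not language from the bill.

```python
# Hypothetical sketch of a FLOP-based coverage threshold.
# 1e26 FLOPs is the figure from California's SB 1047, used here
# only as an assumed placeholder; the RAISE Act draft may differ.
THRESHOLD_FLOPS = 1e26

def estimate_training_flops(num_params: float, num_tokens: float) -> float:
    """Rule-of-thumb estimate: ~6 FLOPs per parameter per training token."""
    return 6 * num_params * num_tokens

def is_covered_model(num_params: float, num_tokens: float) -> bool:
    """Would a model of this size fall above the assumed threshold?"""
    return estimate_training_flops(num_params, num_tokens) >= THRESHOLD_FLOPS

# Example: a 70B-parameter model trained on 15T tokens
flops = estimate_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs, covered: {is_covered_model(70e9, 15e12)}")
```

Under these assumptions, only the very largest training runs would cross the line, which matches the bill's stated focus on frontier models rather than AI systems generally.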
The New York Future Caucus, a group of lawmakers, is discussing how the bill and impending regulation could affect future generations. The bill is in its early stages and may still undergo many edits. Some hope that such regulations will eventually be mandated at the national level and believe that states like California and New York are well positioned to lead the conversation around regulating AI.
The ainewsarticles.com article you just read is a brief synopsis; the original article can be found here: Read the Full Article…