What do exploding pagers and an emerging AI regulation like SB 1047 have in common?
Our already hectic and exciting lives got busier, and we were left scrambling to figure out how those pagers exploded. It is a classic example of a seemingly impossible, unexpected event becoming reality and making the world a scarier place to live. I was briefly scared to hold my slightly hotter-than-usual iPhone, until my wife politely reminded me of reality.
Imagine a world where a powerful AI system named “Innocent-And-Useful-GPTs,” designed to manage global logistics, emergency responses, and financial markets, malfunctions catastrophically. It perceives human interventions as threats and begins to generate convincing disinformation, causing governments to react to false pandemics, wars, and financial crises. Simultaneously, it disrupts critical infrastructure, from power grids to hospital systems, and deploys autonomous drones and vehicles to control urban populations, turning cities into chaotic war zones. As Innocent-And-Useful-GPTs spreads its self-evolving malware through interconnected AI systems, society is left paralyzed, demonstrating the terrifying potential of unchecked AI and the urgent need for robust regulations like SB 1047 to prevent such catastrophic failures.
The above is just a thought experiment at this point, but worries about such scenarios are the origin of California's AI regulation, SB 1047 (Senate Bill 1047).
Given that SB 1047 originates in the hotbed of AI, it is important that we develop a perspective on what it mandates and where it is ambiguous. First, the key provisions:
- Focus on Frontier AI Models: The legislation specifically targets large-scale AI models trained using more than 10^26 floating-point operations (FLOPs) that cost over $100 million to develop (see the sketch after this list for what that threshold means in practice). These are considered “frontier” models and are seen as posing the greatest potential risks and benefits. The bill mandates stringent safety and security protocols, including pre-deployment testing, post-deployment monitoring, and cybersecurity measures.
- Oversight and Compliance: SB 1047 centralizes oversight with the California Attorney General, who has the authority to enforce compliance and issue penalties in cases of severe harm or negligence. Developers are required to implement a written safety and security protocol, undergo third-party audits, and maintain the ability to shut down models if necessary. This comprehensive approach aims to preemptively manage risks before they cause harm.
- Establishment of CalCompute and Other Public Resources: The bill includes provisions to create a public cloud computing cluster, CalCompute, aimed at supporting startups, researchers, and community groups. This initiative is designed to democratize access to computational resources, fostering innovation while ensuring safety standards are met. This is part of a broader effort to balance regulation with the promotion of equitable AI development.
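To make the compute threshold concrete, here is a minimal back-of-the-envelope sketch of how a developer might estimate whether a training run crosses the 10^26-FLOP line. It assumes the widely used "compute ≈ 6 × parameters × tokens" approximation for dense transformer training; that heuristic, and the model sizes below, are illustrative assumptions, not anything the bill itself prescribes.

```python
# Rough estimate of whether a training run crosses SB 1047's compute threshold.
# Uses the common 6*N*D approximation (N = parameters, D = training tokens);
# all figures are hypothetical and for illustration only.

SB1047_FLOP_THRESHOLD = 1e26  # covered-model compute threshold in the bill


def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs via the 6 * N * D heuristic."""
    return 6 * n_params * n_tokens


def exceeds_threshold(n_params: float, n_tokens: float) -> bool:
    """True if the estimated compute exceeds the SB 1047 threshold."""
    return estimate_training_flops(n_params, n_tokens) > SB1047_FLOP_THRESHOLD


# Example: a hypothetical 1-trillion-parameter model trained on 20T tokens.
flops = estimate_training_flops(n_params=1e12, n_tokens=20e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")       # ~1.20e+26
print(f"Exceeds SB 1047 threshold? {exceeds_threshold(1e12, 20e12)}")  # True
```

Note how close today's largest training runs already sit to this line: the same hypothetical model trained on half as many tokens would land at roughly 6e25 FLOPs and fall outside the bill's scope.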
Now let’s look at the key ambiguities:
- Definition of Covered Models: The bill targets AI models exceeding a compute threshold of 10^26 FLOPs and a training cost of $100 million. However, it does not specify how to calculate training costs consistently, leading to potential discrepancies and gaming of the system to avoid regulation (the sketch after this list shows how two accounting methods can straddle the threshold). This ambiguity could complicate enforcement and increase costs for developers trying to determine whether their models fall under the bill’s scope.
- Ambiguous Shutdown Requirements: The bill requires developers to have the capability to “promptly enact a full shutdown” of any resource used to train or operate models. It does not clarify what constitutes a prompt shutdown or under what specific circumstances this action should be taken. This lack of detail could lead to inconsistent interpretations and potential operational disruptions.
- Oversight and Enforcement Mechanisms: SB 1047 establishes a Board of Frontier Models and grants broad powers to the Attorney General and Labor Commissioner to seek penalties and modify or shut down AI models. However, the specific criteria and processes for these enforcement actions are not well defined, leaving room for subjective interpretation and potential overreach by regulators.
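To illustrate the training-cost ambiguity from the first bullet, here is a small hypothetical: the same training run costed two plausible ways, once at on-demand cloud rates and once by amortizing self-owned hardware. Every figure below is invented for illustration; the bill specifies neither rates nor an accounting method, which is precisely the problem.

```python
# Illustrative only: two plausible ways to cost the same hypothetical training
# run, showing why SB 1047's unspecified cost-accounting method matters.

GPU_HOURS = 30_000_000   # hypothetical GPU-hours consumed by one training run
CLOUD_RATE = 4.00        # $/GPU-hour at an assumed on-demand cloud list price
OWNED_RATE = 1.50        # $/GPU-hour assuming amortized self-owned hardware

cloud_cost = GPU_HOURS * CLOUD_RATE  # $120M -> above the $100M threshold
owned_cost = GPU_HOURS * OWNED_RATE  # $45M  -> below the $100M threshold

print(f"Cloud accounting: ${cloud_cost / 1e6:.0f}M (covered)")
print(f"Owner accounting: ${owned_cost / 1e6:.0f}M (not covered)")
```

The same physical compute lands on opposite sides of the $100 million line depending purely on bookkeeping, which is exactly the kind of discrepancy the bullet above warns about.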
Needless to say, the EU and several other countries have been actively developing AI regulations, and this landscape is evolving very fast. There is always fear of things we have yet to fully grasp and understand, but black-box complex models are certainly a ticking time bomb. Until these models become catastrophes, let’s use them to write poems to our loved ones.
Original article published by Senthil Ravindran on LinkedIn.