California SB 1047: Limitations of regulation for AI safety, governance, and alignment

  • David Stephen 
If a state wants to ensure very clean air most of the time, regulation is one approach to achieving that outcome. Litigation is also on the table for violations that regulation may not cover. Together, these would deliver some measure of success, much as pharmaceutical safety relies on regulation through clinical trials and then on litigation for certain adverse effects.

However, regulation and litigation alone may still not achieve the objective, just as pharmaceutical safety is not made absolute by regulation and litigation. Air diffuses easily and does not respect domain boundaries. So, while the power of the state can be applied, clean air can still be degraded from within, surreptitiously, and from without, intentionally or inadvertently.

There have been numerous regulations and lawsuits around the internet in the last three decades, and they have helped in several regards. Yet piracy is abundant, misuse is rife, losses of different forms are common, and internal and external exposures continue to produce vulnerabilities across networks.

Digital, like air, is not, say, a seatbelt or a red light, objects with physical, open, solid forms. Digital and air are physical, but in a different way from such objects. Digital is even more physically remote than air, which limits how much the law, and society, can do against its harms.

Aside from the uniqueness of digital, the limitations of regulation and litigation are sometimes about science or advancement. Seatbelts are vital and mitigate losses, but they have not stopped car crashes, even though that was never their purpose. Had there been advances that stretched the seatbelt solution further, perhaps against certain speeds, driving conditions, and so forth, they might have helped reduce car crashes at the source. Seatbelts work and are necessary to save lives; against the many causative factors of crashes, however, they have constraints.

Simply put, there is also an advancement problem that can limit the impact of regulation and litigation. Pharmaceutical safety, for example, is also better because there are medications that can reverse the effects of others. So regulation is not only legal and external; it is also internal, by biology and chemistry.

This would have been an approach for AI safety and alignment, rather than a regulation that repeats conspicuous flaws. Digital has already shown that evasive possibilities abound for activities against the law, within and outside a jurisdiction. Problems with digital may not even be obvious at first. For digital, regulation is easy and enforcement is somewhat easy, but efficiency is hard.

Any efficient AI regulation would probably run technical research in parallel, working on possible approaches against current misuses and potential ones. That technical research would then explore why those misuses are currently possible, what their sources are, and what their likely sources may be in the near or far future.

Simply put, technical regulation is research into technical means for safety and alignment, engineering into the modes of misuse and into emerging future threats. The State of California could have set up its own AI safety institute, working on novel approaches for AI safety while exploring regulatory angles that would cover the misuses most likely to be covert, such as misinformation and multimodal deepfakes across audio, video, and images.

The US and the UK have AI safety institutes, and the EU has an AI Office. It would be too simple to expect that going after major platforms would do the trick, especially as they would be prepared to comply in the ways expected of them, knowing how exposed they might be to litigation and brand damage. While they can be regulated like biotech, achieving AI safety and alignment would also mean working out new ways to counter risks and threats at their provenance.

Technical research in different directions, including the human mind, as well as penalties for large language models, limits on web crawling and scraping, seeking out non-concept features, and more, could be added as ways to pursue broad safety coverage within any jurisdiction. Regulation and litigation would try, but would leave too many gaps they cannot touch, protecting little.

A recent article on TechCrunch, OpenAI’s opposition to California’s AI bill ‘makes no sense,’ says state senator, states: “The AI revolution is only just beginning, and California’s unique status as the global leader in AI is fueling the state’s economic dynamism. SB 1047 would threaten that growth, slow the pace of innovation, and lead California’s world-class engineers and entrepreneurs to leave the state in search of greater opportunities elsewhere. Given those risks, we must protect America’s AI edge with a set of federal policies — rather than state ones — that can provide clarity and certainty for AI labs and developers while also preserving public safety.”

A recent article in The Atlantic, Silicon Valley Is Coming Out in Force Against an AI-Safety Bill, states: “S.B. 1047 subjects any AI model that costs more than $100 million to train to a number of safety regulations. Under the proposed law, the companies that make such models would have to submit a plan describing their protocols for managing the risk and agree to annual third-party audits, and they would have to be able to turn the technology off at any time—essentially instituting a kill switch. AI companies could face fines if their technology causes “critical harm.” The bill, which is set to be voted on in the coming days, has encountered intense resistance.”