The pharmaceutical industry is among the most heavily regulated industries in the United States, with oversight aimed at the safety and efficacy of therapies. Yet regulatory effort is not the only safeguard; there are also approaches that work at the source.
Narcan (naloxone) is a medication that reverses opioid overdose, an example of using a drug to fight the effects of a drug. There are several approved medications whose purpose is to counter the side effects of other approved medications.
Simply put, the biological targets that make medications therapeutically useful can also make them act in unwanted ways. This made it necessary to have not just external efforts for compliance, but also a kind of internal, mechanized regulation, medication acting on medication, for the health and safety of individuals.
AI safety, alignment, and regulation may have to follow a similar architecture against risks: AIs monitoring other AIs and their outputs within jurisdictions, toward achieving approximate safety.
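As a rough illustration of this monitor-on-monitor architecture, here is a minimal Python sketch. Everything in it is a hypothetical stand-in: `generate` represents any generative model under observation, and `monitor_score` represents a second, independent model scoring the first model's outputs for risk; a real monitor would be a trained classifier, not a keyword list.

```python
# Hypothetical sketch: one AI's outputs pass through a second,
# independent monitor before they are released.

RISK_TERMS = {"impersonation", "deepfake", "cloned voice"}

def generate(prompt: str) -> str:
    """Stand-in for a generative model under observation."""
    return f"Response to: {prompt}"

def monitor_score(text: str) -> float:
    """Stand-in for a monitor model; returns a risk score in [0, 1].

    A toy keyword tally substitutes for a trained classifier here.
    """
    hits = sum(term in text.lower() for term in RISK_TERMS)
    return min(1.0, hits / len(RISK_TERMS))

def guarded_generate(prompt: str, threshold: float = 0.3) -> str:
    """Release the output only if the monitor's risk score is low."""
    output = generate(prompt)
    if monitor_score(output) >= threshold:
        return "[withheld: flagged by monitor for human review]"
    return output

if __name__ == "__main__":
    print(guarded_generate("Write a short product description."))
```

The design point is the separation: the monitor is a distinct system with its own criteria, so a failure or manipulation of the generator does not automatically disable the check.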
There are several levels of AI threats and risks. However, the ones already here (deepfake videos and images, voice cloning and impersonation, misinformation and disinformation) may require technical countermeasures, beyond laws alone, to be addressed effectively.
Some have dismissed large language models as inconsequential, yet the harm they cause to victims and their loved ones shows otherwise. Regulation that is not grounded in technical generality, across models and their outputs, may be limited in effect.
A recent report in The New York Times, “States Take Up A.I. Regulation Amid Federal Standstill,” stated: “State lawmakers across the country have proposed nearly 400 new laws on A.I. in recent months, according to the lobbying group TechNet. California leads the states with a total of 50 bills proposed, although that number has narrowed as the legislative session proceeds. Colorado recently enacted a comprehensive consumer protection law that requires A.I. companies use “reasonable care” while developing the technology to avoid discrimination, among other issues. In March, the Tennessee legislature passed the ELVIS Act (Ensuring Likeness Voice and Image Security Act), which protects musicians from having their voice and likenesses used in A.I.-generated content without their explicit consent.”
Computer viruses, bots, and bugs were never curbed by regulation alone; they had to be fought at the source. AI is not the energy industry, nor is it biotechnology or aviation. Those industries operate in the physical world, while AI operates in the digital world, which is far more malleable. Doing harm with AI does not require moving physical objects or mounting sophisticated evasion, as it does in other industries, and AI may leave no trail. People can avoid many industries and their products for long stretches of time, but no industry is as pervasive as the internet, and by extension, the digital world.
This dominance, across social and productivity applications, leads the human mind to process digital outputs much as it does the physical world. One technical approach to AI safety and alignment could be web crawling to index AI tools, then web scraping their outputs for identifying tokens, to track some of the outputs that are used for harm; a rough sketch of that pipeline follows.
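Below is a minimal sketch of that crawl-and-scrape step in Python, using only the standard library. The seed URL and the idea of a directory page listing AI tools are hypothetical placeholders, and `fingerprint` is a toy stand-in for real output fingerprinting.

```python
# Hypothetical sketch: crawl a directory page for links to AI tools,
# then fingerprint scraped text so outputs can be matched later.

from html.parser import HTMLParser
from urllib.error import URLError
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkCollector(HTMLParser):
    """Collects href targets from anchor tags on one page."""

    def __init__(self) -> None:
        super().__init__()
        self.links: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_url: str) -> list[str]:
    """Fetch one page and return the absolute URLs it links to."""
    try:
        with urlopen(seed_url, timeout=10) as response:
            html = response.read().decode("utf-8", errors="replace")
    except URLError:
        return []
    parser = LinkCollector()
    parser.feed(html)
    return [urljoin(seed_url, link) for link in parser.links]


def fingerprint(text: str) -> set[str]:
    """Toy stand-in for fingerprinting: a bag of lowercased tokens."""
    return set(text.lower().split())


if __name__ == "__main__":
    # "https://example.com/ai-tools" is a placeholder directory page.
    for url in crawl("https://example.com/ai-tools"):
        print(url)
```

In practice, matching harmful content back to a tool would require far more than token bags, but the shape of the pipeline (index the tools, sample their outputs, keep searchable fingerprints) is the point.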
The US AI Safety Institute could serve as a point of contact for states on technical options, especially on how to field jurisdictional AIs against other AIs, as well as on how to build safe intent into systems as a guard against existential risks.