
AI safety summit: Affect alignment and labor economics workshops

  • David Stephen 

There are two pillars of human society: affect and intelligence. Only one, however, is principal to being part of any human group. Though both are required for [social and occupational] functioning, intelligence [because it is sophisticated at most grades] may vary, but affect has to be at least average for any individual to remain in a group.

Intelligence and affect are subdivisions of the human mind. Affect is categorized under feelings and emotions, overseeing the capacity for pain, hurt, and shame, for understanding the consequences of actions, for respect, and so forth. Affect may be used against an opposing group, but within an individual's own group, affect must be the guide for acceptance and continuance.

Intelligence is a subdivision of memory, but it is set apart as a pillar of human society because it is outstanding. Human intelligence is formidable enough to advance the world through the actions of the mind.

Human intelligence can be assumed to be balanced by human affect: intelligence is not always used for good, and whatever the results of its negative applications, affect makes it possible to understand what others experiencing those results might feel.

Human affect also ensures that when a group applies penalties for wrong actions, the penalty becomes a bad, unwanted experience, so that the individual avoids the action the next time, or others see the example and stay cautious.

If there were no affect, laws would be useless, intelligence would run riot, and society would not survive. This is one angle from which to explore AI safety, where a non-organism, without affect, already possesses vast intelligence. AI safety can also be centered on how to penalize AI models, in ways that they can know they are being penalized when they output something harmful, or so that they refuse misuse because they know a penalty awaits. This would mirror how human society works, aligning AI to human values.

The human mind is theorized to have two basic elements: electrical and chemical signals. This means that affect and intelligence are mechanized by similar patterns of interaction between both elements, with similar features.

Affect for AI can be implied from theoretical neuroscience to formulate [math] parameters for algorithms, ensuring that as AI advances there is the possibility of a penalty, within the model or within general sections, when the outputs of the model are used in common areas of the internet: on social media, on the app or play store, on websites, [as results] on search engines, and so forth.
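As a rough illustration only, and not a method proposed in this piece, one way such a penalty parameter could enter a model's training objective is as an added term on top of the ordinary task loss. Every name below (penalized_loss, harm_score, penalty_weight) is hypothetical and stands in for whatever a real implementation would use:

```python
# A minimal sketch, assuming a hypothetical classifier that scores an output's
# harmfulness. This is not an existing API; it only shows how a "penalty"
# parameter could be combined with a model's usual training loss.

def penalized_loss(task_loss: float, harm_score: float, penalty_weight: float = 10.0) -> float:
    """Combine the model's ordinary task loss with a penalty term.

    task_loss      -- the usual training loss for an output (e.g., cross-entropy)
    harm_score     -- a score in [0, 1] from a hypothetical harm classifier
                      (0 = benign, 1 = clearly harmful or misused)
    penalty_weight -- how strongly the penalty outweighs ordinary task performance
    """
    return task_loss + penalty_weight * harm_score


# Usage: a flagged output raises the loss sharply, so optimization steers away
# from it, loosely paralleling how an unwanted experience discourages repeat
# behavior in a human group.
print(penalized_loss(task_loss=0.8, harm_score=0.0))  # 0.8  (benign output)
print(penalized_loss(task_loss=0.8, harm_score=0.9))  # 9.8  (flagged output)
```

The design choice in this sketch is simply that the penalty dominates the task loss when harm is detected; where such a signal would come from, and whether a model can "know" it is being penalized in the sense the article intends, are exactly the questions a workshop of this kind would have to address.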

A key workshop that any AI summit has to explore is theoretical neuroscience: how the human mind balances intelligence with affect for society, and then paths toward something similar for AI, such that regardless of how AI advances in capability, there is a ring of safety around it, not to set back progress, but to parallel how society is balanced, preventing AI from becoming a huge vulnerability, since it processes human intelligence on a global digital channel for social life and productivity.

Another workshop should be on labor economics, not for universal basic income [UBI] as an option if AI takes jobs, but for the economic models that could emerge, with earnings, meaning, or purpose for people, as well as what can be afforded. UBI-lite is already possible in some aspects of public sector work, where little happens. But expanding it, when AI threatens even that, would require new explorations in economics.

AI safety summits would have to be ambitious gatherings that steer research direction toward what already works for human society, predicated on the human mind. Several safety options can no longer be described as safety, since they have not stopped recent misuses of AI or some of its other risks. Directing summits toward theoretical neuroscience and labor economics would mean more value from those gatherings, toward safety for society.

There is a recent article in Reuters, US to convene global AI safety summit in November, stating that "The Biden administration plans to convene a global safety summit on artificial intelligence, it said on Wednesday, as Congress continues to struggle with regulating the technology. Commerce Secretary Gina Raimondo and Secretary of State Antony Blinken will host on Nov. 20-21 the first meeting of the International Network of AI Safety Institutes in San Francisco to 'advance global cooperation toward the safe, secure, and trustworthy development of artificial intelligence.'"

There is an announcement from the Elysée Palace, Artificial Intelligence Action Summit, stating that "On 10 and 11 February 2025, France will host the Artificial Intelligence (AI) Action Summit, gathering Heads of State and Government, leaders of international organizations, CEOs of small and large companies, representatives of academia, non-governmental organizations, artists and members of civil society."
