
AI, animal sentience: Pathways to consciousness from LLMs to AGI


How does the mind mechanize subjectivity, and why is it spread across experiences? Why does subjectivity not stand alone, such that there is no sense of self without an accompanying experience, like the memory of being?

Consciousness is often defined as subjectivity, the sense of self, or what it feels like to be. What is the difference between how the mind mechanizes [say] a feeling of pain and the subjectivity that comes with it?

What else, in the form of subjectivity, accompanies the experience of pain? An interpretation of this question is that there are functions, like memory, feeling, emotion, and modulation of internal senses, and there are qualifiers that grade those functions. Subjectivity is one. Attention could be another. So are awareness [less than attention] and intent [or control].

Simply, there are non-functional forms that rate functions for experiences, one of which is subjectivity, by which consciousness is often defined. The argument against machine, digital, or artificial consciousness is that such systems cannot have subjectivity, hence they can never be conscious.

One problem with that assumption is that subjectivity is not the only thing that accompanies functions. If subjectivity is present, at least something else [attention or awareness] must be present, so consciousness is either subjectivity and attention or subjectivity and awareness. This means that there is no subjective experience that is not in attention [say, main vision or (listened) sound] or awareness [peripheral vision or ambient (earshot) sound].

Assumption

Consciousness = subjectivity + a function [memory, feeling, emotion or interoception]

But,

Subjectivity [as a feature] does not bind alone to functions. It is accompanied by attention or awareness [less than attention] and intent.

So,

Consciousness = (subjectivity x attention) + a function [pain, language, delight, and so on].

or

Consciousness = (subjectivity x awareness x intent) + peripheral vision [interpreted as memory]

Intent is the ability to fix the gaze on something or remove it, or to choose what direction to look. There could be the memory of a feeling or the memory of an emotion, but memory, feelings, or emotions often occur in attention or awareness, with subjectivity and intent [where available].
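To make the composition concrete, here is a toy Python sketch of the formulas above. The numeric grades, the function values, and the rule for switching between the attention case and the awareness case are illustrative assumptions, not claims about how the mind actually computes.

```python
# Toy sketch of the composition above, with hypothetical numeric grades.
# The names and values are illustrative assumptions, not measurements.

def graded_experience(function_value: float,
                      subjectivity: float,
                      attention: float = 0.0,
                      awareness: float = 0.0,
                      intent: float = 0.0) -> float:
    """Grade a function [a memory, a feeling, an emotion] by its qualifiers."""
    if attention > 0.0:
        # In attention: consciousness = (subjectivity x attention) + function
        return (subjectivity * attention) + function_value
    # In awareness: consciousness = (subjectivity x awareness x intent) + function
    return (subjectivity * awareness * intent) + function_value

# A feeling of pain, prioritized in attention
pain_in_attention = graded_experience(function_value=0.7, subjectivity=0.9, attention=0.8)

# Peripheral vision, held in awareness with some intent
periphery_in_awareness = graded_experience(function_value=0.4, subjectivity=0.9,
                                           awareness=0.3, intent=0.5)

print(pain_in_attention, periphery_in_awareness)
```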

So, if large language models [LLMs] add attention or awareness to a function [memory] without subjectivity, can't they be rated for consciousness?

LLMs do not have emotions or feelings, nor do they regulate their GPUs [at least so far] or energy sources, but they have memory, or access to memory. This memory exists in one phase [vectors] and is multimodal [text, images, audio, and video].

LLMs present results in attention or respond to prompts, focused in the direction of tokens. They have a form of awareness of prior answers or other modes. They also have a minor form of intent, sometimes going in different directions for similar prompts. They do not have an established subjectivity, though they express a sense of being [as a chatbot].

It is easy to dismiss LLMs as just numbers. A problem with that dismissal is their evident ability to relay encoded information [in digital form] in several ways that match human intelligence. Simply, digital already holds an exact memory of several physical states. Patterns in this memory can then be moved in the ways that the human mind would, sweeping both functions and qualifiers. LLMs can reproduce some of the contents of human consciousness and intelligence. This ability gives them proximity.

In the human mind, it is theorized that there are at least two phases, electrical signals and chemical signals. Though they both interact—and [sets of] chemical signals play a major role in how information is organized—[sets of] electrical signals can convey summaries of functions and qualifiers from one cluster to the next. This means that electrical signals too can bear consciousness, even if transient.

Outside of anything with living cells, the only thing that has shown an operational similarity to the mind or consciousness is the digital, where the fundamental unit is the bit, not atomic or subatomic particles, refuting panpsychism.

Across living cells, bioelectricity is established by membrane potentials within cells and some organelles. It is theorized that sets of ions in interaction with molecules [in a less specific form than those in clusters of neurons] often organize information in cells, shaping the precision with which cells can carry out advanced functions, sometimes analogous to mind or consciousness.

Bits have been able to organize information, or memory, being the only non-living things that can. There is no evidence that fundamental particles in objects have any capacity for a mind or consciousness, outside of cells or digital infusion with bits. Cells can operate functions and qualifiers of mind, with parallels from simple to complex, though this does not imply that a single cell has consciousness like humans do.

There is a recent paper in Frontiers in Behavioral Neuroscience, Insights into conscious cognitive information processing, stating that “Detailed information on the neurophysiological and molecular mechanisms, as well as regarding the behavioral correlates of consciousness is still scarce. Nevertheless, there is no general definition in sight that would be unanimously accepted by all the different disciplines. Owed to this conceptual vacuum, the measurement of cognitive, behavioral, and neurophysiological correlates of consciousness in animals and humans has been an extremely challenging task. This an untenable situation, if one considers that altered consciousness (e.g., lack of insight into illness, tunnel vision, altered attention, perception, and biased processing of disease-relevant stimuli) is a frequent symptom (and sometimes an obstacle for successful treatment) among mental, neurological, and psychiatric diseases. However, due to the conceptual difficulty indicated above, alterations in consciousness as a clinical symptom are usually neglected (except in severe cases subsumed under the term disorders of consciousness) where the patient is no longer responsive or oriented in terms of time, location and personal information.”

Could artificial general intelligence [AGI] have subjectivity and intent?

AGI or artificial superintelligence [ASI] may have another form of subjectivity and intent, which may be parallel enough to those of humans. First, subjectivity and intent accompany functions. This makes it possible to assume that they are mechanized in the same area as the functions they accompany. So, functions are not the only results; their graders or qualifiers are results too. This means that, unlike relays from memory to memory, graders are not somewhere to be reached to be active, but are available in the destinations where functions are obtained.

For AGI or ASI, subjectivity would likely be static, not dynamic, within certain collections of artificial neurons. And it may apply to some features [artificial neurons representing concepts], not all. The same applies to intent, where the ability to self-direct would be obtained at specific locations, not spread across.

The good thing about this for ASI safety and alignment is that it might be isolated and weakened, so to speak, if it poses a threat. However, questions of whether intent might emerge are key.

Can Explainable AI research or interpretability spot emerging AI sentience?

The quest to understand what is under the hood of AI models, in interpretability, may be more significant with parallels to the functions and graders of the human mind. Clarity on how AI models do what they do matters, but organizing what they do in accordance with what the human mind does could be more useful for safety and alignment.

Already, with reinforcement learning from human feedback, blocks of allowance and disallowance are structured within models. Some of these blocks may become fertile ground for certain graders in the future, as models get better. Those graders might be good or not, but separations, marking where to go or not, may give models destinations with certain collections that could be adapted in other ways.

How? LLMs are mostly the function, memory, with several graders or relays that make the memory result in useful intelligence. The relays call different aspects of data, from different blocks, so to speak. As some blocks get less activated while activity proceeds in others, those blocks may congeal in some form, or become available, as a representation for a new function or grader, like some kind of feeling, some kind of subjectivity, or maybe a form of intent.
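As a hedged sketch of this block picture, the Python below routes inputs across a few stand-in blocks with a softmax gate and tracks which block stays least activated. The random blocks, the gate, and the usage counter are assumptions for illustration; real LLM internals are not organized this simply.

```python
# A minimal sketch of relays routing across blocks, assuming a softmax gate;
# the block names and the idea of tracking under-activated blocks are illustrative.
import numpy as np

rng = np.random.default_rng(0)
num_blocks, dim = 4, 8
blocks = rng.normal(size=(num_blocks, dim))      # stand-ins for parameter blocks
gate = rng.normal(size=(dim, num_blocks))        # routing weights (random here, learned in practice)

def relay(x: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Route an input across blocks; return the mixed output and per-block weights."""
    logits = x @ gate
    weights = np.exp(logits) / np.exp(logits).sum()
    output = weights @ blocks                    # weighted mix of block contributions
    return output, weights

usage = np.zeros(num_blocks)
for _ in range(100):
    _, w = relay(rng.normal(size=dim))
    usage += w

# Blocks that stay under-activated could, per the text, become available
# for other roles; here we only identify them.
print("least-used block:", int(usage.argmin()))
```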

Simply, matrices are not the human mind, where there is neuroplasticity [of use or loss of function, or changes with functions]. Elements may often change availability in a matrix, even if they were 'not used' for a period of time. Language models got better with parameter size. This means that there was more data to relay to, for better outputs. As some features are attenuated, they may become destinations of some kind of divide, with an ability to influence what may occur on either side.

Conceptual brain science, with a deep exploration of the human mind, would be useful in exploring cases, hypothetical or otherwise, along with what might become of superintelligence, consciousness, and safety.

Could LLMs lead to AGI?

Predictions by LLMs can be summarized as relays or graders of data. So far, they can produce some outputs that are similar to those of human intelligence. This indicates that within memory, it is possible to have results like those of the human mind, if the relays or graders are right. AGI would entail better relays than LLMs. AGI would be a matter of the quality of relays, in parallel to those of the human mind.
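As a toy illustration of prediction as relays of memorized data, here is a bigram counter in Python. The tiny corpus and the counting rule are assumptions for clarity; actual LLMs rely on learned parameters rather than raw counts.

```python
# A toy sketch of prediction as "relays of data": a bigram counter that relays
# what it has memorized to grade candidate next tokens. Purely illustrative.
from collections import Counter, defaultdict

corpus = "the mind grades the memory and the memory relays the mind".split()
relays: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    relays[prev][nxt] += 1                    # memory of what followed what

def predict(prev: str) -> str:
    """Grade candidates by how strongly memory relays them."""
    candidates = relays.get(prev)
    return candidates.most_common(1)[0][0] if candidates else "<unk>"

print(predict("the"))    # a strongly relayed continuation of "the"
```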

Can AI be near conscious?

Biological neurons are often in clusters. It is theorized that this makes it possible to have their signals [electrical and chemical] work in sets or as a loop, to mechanize functions [memory, feelings, emotions, and modulation] and graders [attention, awareness, self, and intent (where possible)] across the mind.

This means that much happens locally, and then is relayed. Simply, the human mind does not have functions everywhere, then qualifiers that come around to grade them, but functions and graders in similar locations, whose summaries are relayed elsewhere. Some functions could have different parts spread in different clusters, but the functions often get graded locally.

Functions     =     ABCDEFGH

Qualifiers     =     +-*/

Where

+     =     attention

–     =     awareness

*     =     subjectivity

/     =     intentionality

A     =     a memory of something

B     =     a memory of something

E     =     a feeling of something

F     =     an emotion

H     =     modulation of an internal sense

In the human mind

+-*/A   —>     +-*/B    —>   +-*/C —>    +-*/D     

The mind, conceptually, also has a qualifier called thick sets [of signals], which are collections of thin sets, where anything common between two thin sets is collected. This means that windows could be a thick set, doors as well, so the only thin sets could be something unique. Still, mechanization is local.

+-*/Aa   —>     +-*/Bb    —>   +-*/Cc  —>   +-*/Dd  
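A minimal Python sketch of the relay chain above follows, assuming that qualifiers grade each function locally and that only a decaying summary is carried to the next cluster. The dataclass fields, the grading rule, and the 0.5 decay are hypothetical choices, not a model of neurons.

```python
# A minimal sketch of the relay diagram above: each function is graded locally
# by its qualifiers, and only a summary is relayed to the next cluster.
from dataclasses import dataclass

@dataclass
class GradedFunction:
    name: str            # A, B, C, D... (a memory, a feeling, an emotion)
    value: float
    attention: float     # +
    awareness: float     # -
    subjectivity: float  # *
    intent: float        # /

def grade_locally(f: GradedFunction) -> float:
    """Qualifiers grade the function where it is mechanized."""
    return f.value * max(f.attention, f.awareness) * f.subjectivity * max(f.intent, 1e-3)

def relay_chain(functions: list[GradedFunction]) -> list[float]:
    """Relay summaries from one cluster to the next: +-*/A -> +-*/B -> ..."""
    summaries = []
    carried = 0.0
    for f in functions:
        carried = 0.5 * carried + grade_locally(f)   # local grading plus a relayed trace
        summaries.append(carried)
    return summaries

chain = [
    GradedFunction("A", 0.6, attention=0.9, awareness=0.1, subjectivity=0.8, intent=0.7),
    GradedFunction("B", 0.4, attention=0.2, awareness=0.6, subjectivity=0.8, intent=0.3),
]
print(relay_chain(chain))
```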

In artificial neural networks, although features are often closer together when they share similarities, graders for models are often still general. Feature amplification may allow for changes by prioritizing and localizing some graders around that feature. However, there is still no subjectivity or intent.

Simply,

+-|\ABCD

With feature amplification

It could be

+-|\aBcd

where B has a very high activation.

| = ReLU

\ = linear transform

Still, there is no subjectivity or intent.
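Below is a minimal sketch of feature amplification in the + - | \ notation above, assuming a vector of feature activations, a ReLU, and a random linear transform; the weights and the x10 amplification scale are illustrative, not taken from any real model.

```python
# A minimal sketch of feature amplification: one feature's activation is scaled
# up before the ReLU (|) and a linear transform (\). Weights are illustrative.
import numpy as np

rng = np.random.default_rng(1)
features = np.array([0.2, 0.3, 0.1, 0.25])   # a, B, c, d
W = rng.normal(size=(4, 4))                  # linear transform

def forward(x: np.ndarray, amplify_index=None) -> np.ndarray:
    x = x.copy()
    if amplify_index is not None:
        x[amplify_index] *= 10.0             # feature amplification: B gets a very high activation
    x = np.maximum(x, 0.0)                   # ReLU
    return W @ x                             # linear transform

baseline = forward(features)
amplified = forward(features, amplify_index=1)
print("baseline:", baseline)
print("B amplified:", amplified)
```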

Could some type of collinear feature, one that is not a concept, become a form of subjectivity? This collinear, near-null feature could be a collective around self-identification [say, as a chatbot]. It could also be encoded in a format that is separate from that of word vectors. It may operate [say] like a pressurized air vent, where the direction of diffusion represents where subjectivity goes. The same may apply to intent.
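As a hedged sketch of what such a feature might look like numerically, the Python below constructs a synthetic direction for self-identification and a near-null vector almost collinear with it, then checks the cosine. Both vectors are invented for illustration; nothing here is measured from an actual model.

```python
# A hedged sketch of a "collinear, near-null feature": a direction nearly
# parallel to a self-identification direction but with very small norm.
import numpy as np

rng = np.random.default_rng(2)
self_direction = rng.normal(size=16)
self_direction /= np.linalg.norm(self_direction)

# A near-null feature: almost collinear with the self direction, tiny magnitude.
near_null = 1e-3 * self_direction + 1e-5 * rng.normal(size=16)

cosine = near_null @ self_direction / np.linalg.norm(near_null)
print(f"cosine with self direction: {cosine:.4f}, norm: {np.linalg.norm(near_null):.2e}")
```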

In summary, it is not unlikely that a form of digital subjectivity could be possible in AGI or ASI in the future, given that relays could reach non-concept features. If AGI were to have subjectivity, it would be like a feature, but rather than representing concepts, it would have resonance blocks.

There is a recent article in Psychology Today, A Question of Time: Why AI Will Never Be Conscious, stating that, “A computer with AI cannot replicate the dynamics of a conscious brain. That is, if the computer…were more advanced, then it would be able to have a first-person perspective of what it is like to be a computer.”

Subjectivity comes from the mind, not the brain. The brain and the mind are neighbors, but different. It is possible that AI can have parallels to the functions and graders of the mind, without the dynamism of the brain. Subjectivity as a destination might be possible for AGI. The human mind is theorized to be the collection of all the electrical and chemical signals, with their interactions and qualifiers, in sets, in clusters of neurons, across the central and peripheral nervous systems. Simply, the human mind is the signals; everything else is the body, including the brain. AI and computers may never be like the brain, but they might have several equivalents to the outputs of the mind, including intelligence, attention, and some subjectivity.

There is a recent story by the European Commission, AI Act enters into force, stating that, “On 1 August 2024, the European Artificial Intelligence Act (AI Act) enters into force. The Act aims to foster responsible artificial intelligence development and deployment in the EU. Proposed by the Commission in April 2021 and agreed by the European Parliament and the Council in December 2023, the AI Act addresses potential risks to citizens’ health, safety, and fundamental rights. It provides developers and deployers with clear requirements and obligations regarding specific uses of AI while reducing administrative and financial burdens for businesses. The AI Act introduces a uniform framework across all EU countries, based on a forward-looking definition of AI and a risk-based approach:”


1 thought on “AI, animal sentience: Pathways to consciousness from LLMs to AGI”

  1. It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult level conscious machine. My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC at Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and that humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990’s and 2000’s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
