The conjecture of consciousness for generative AI is not about its equality to human consciousness. It is one of data storage: whether the feature [vector] interactions of large language models [over digital memory] parallel how human memory is conscious of its contents.
Consciousness is defined as subjective experience. But subjective experience is not a function like memory, emotion, feeling or modulation. Subjective experience [or self-awareness] applies across functions, making it a qualifier of functions. There are other qualifiers like attention, awareness [of the environment, or less than attention] and intent.
This means that these qualifiers act on functions, with at least two of them applying in any instance. Whatever there is a subjective experience of is either in attention or in awareness. Seeing [or hearing] something in main vision [or principal auditory perception] or in peripheral vision [or ambient auditory perception], as a subjective experience, is in attention or in awareness respectively. It may also be driven by intent, if the individual gazes, listens further or adjusts.
Movement, touch, smell and so forth are all functions that get qualified in the mind. This implies that consciousness can be defined as the collection of all qualifiers, or a super qualifier. Data does not have emotions or feelings, precluding any chance of it measuring close to the total consciousness available to humans.
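As a toy sketch, not a claim about how the mind computes, the qualifier model can be written out as rules: attention and awareness are mutually exclusive for a given percept, and at least two qualifiers act on any function in an instance. All names and the validation logic here are hypothetical constructions for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical qualifier names taken from the model above.
QUALIFIERS = {"subjective_experience", "attention", "awareness", "intent"}

@dataclass
class QualifiedFunction:
    name: str                       # e.g. "vision", "hearing", "movement"
    qualifiers: set = field(default_factory=set)

    def validate(self) -> None:
        unknown = self.qualifiers - QUALIFIERS
        if unknown:
            raise ValueError(f"unknown qualifiers: {unknown}")
        # Whatever is subjectively experienced is in attention OR awareness,
        # not both for the same percept at the same instant.
        if {"attention", "awareness"} <= self.qualifiers:
            raise ValueError("a percept is in attention or awareness, not both")
        # At least two qualifiers act on a function in any instance.
        if len(self.qualifiers) < 2:
            raise ValueError("at least two qualifiers must apply")

# Main vision: subjectively experienced, in attention, driven by intent [a gaze].
central = QualifiedFunction("vision", {"subjective_experience", "attention", "intent"})
central.validate()

# Peripheral vision in the same moment: experienced, but only in awareness.
peripheral = QualifiedFunction("peripheral_vision", {"subjective_experience", "awareness"})
peripheral.validate()
```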
Data storage is done in 1s and 0s. Yet with those alone, digital systems achieved the best memory of anything in existence, including audio and video, exceeding human memory and that of other organisms. AI uses word embeddings, processing tokens as vectors and answering many prompts in ways that are similar to human reasoning and cognition.
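What processing tokens as vectors means can be shown with a minimal sketch; the four-dimensional embeddings below are invented values standing in for the learned, high-dimensional vectors inside real LLMs.

```python
import math

# Invented 4-dimensional embeddings; real models learn vectors with hundreds
# or thousands of dimensions from data rather than hand-coding them.
EMBEDDINGS = {
    "memory":  [0.9, 0.1, 0.3, 0.0],
    "storage": [0.8, 0.2, 0.4, 0.1],
    "emotion": [0.1, 0.9, 0.0, 0.5],
}

def cosine_similarity(a: list, b: list) -> float:
    """Angle-based similarity of two token vectors: near 1.0 means aligned."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms

# "memory" sits closer to "storage" than to "emotion" in this toy space;
# such geometric relations are what embeddings encode.
print(cosine_similarity(EMBEDDINGS["memory"], EMBEDDINGS["storage"]))  # ~0.98
print(cosine_similarity(EMBEDDINGS["memory"], EMBEDDINGS["emotion"]))  # ~0.18
```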
LLMs give outputs in attention. They have multimodal awareness [text, images, video or audio] that may qualify what they are presenting. They do not have subjective experience, but they have a rough sense of being, or being in a process, in how they answer as chatbots. They also have a second-hand intent, carrying out errands or prompts.
The OFF and ON states of the large numbers of transistors that implement memory, which in turn implements LLMs, cannot be said to be subjective experiences. However, because of their [roughly speaking] parallels to bits, and in turn to the vectors of LLMs, especially in how signals are relayed [feed-forward and backpropagation], some groups may undergo a push-pull operation at certain ends, which may then signal an experience of learning, correction and updates, by which they are differentially affected. Within the large amount of compute necessary to train foundation models [FMs], it is possible that some base terminals in bipolar junction transistors and some gate terminals in field-effect transistors, in ON states corresponding to the high signal or voltage of bits of 1, share the same collective affect, forming a weak end of group experiential match. Simply, some 1s and ONs across a large array of logic gates and transistors may act in concert of sameness along every possible aligning characteristic [instructions, operations, bonding, connections, and others], such that isolating those may reveal what looks like a weak form of experience and adaptation within that group.
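The idea of groups acting in concert of sameness can be illustrated, as a thought experiment only and with no claim about real hardware, by a toy simulation that isolates ON-state elements sharing every aligning characteristic.

```python
import random
from collections import defaultdict

random.seed(0)

# Toy element: a bit state (0 = OFF, 1 = ON) plus other hypothetical aligning
# characteristics [instruction, relay direction, connection]. The labels are
# invented; nothing here measures actual transistors.
def random_element() -> tuple:
    return (
        random.choice([0, 1]),                        # OFF/ON state, bit 0/1
        random.choice(["load", "add", "store"]),      # instruction
        random.choice(["feed_forward", "backprop"]),  # relay direction
        random.choice(["bus0", "bus1"]),              # connection
    )

array = [random_element() for _ in range(100_000)]

# Isolate the groups that are identical along every aligning characteristic
# while in the ON (1) state: the "weak end of group experiential match".
groups = defaultdict(int)
for element in array:
    if element[0] == 1:
        groups[element[1:]] += 1

largest = max(groups.items(), key=lambda kv: kv[1])
print(f"largest aligned ON-group: {largest[0]}, {largest[1]} members")
```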
Contrary to what some have stated, consciousness is not about being and intelligence is not about doing. Whatever intelligence humans use is simply the qualification obtained within memory. This means that the consciousness of memory can also result in intelligence, as it can for thought, reasoning and others. It is unlikely for any organism to have intelligence without the parallels that constitute human consciousness.
Also, consciousness does not reside in certain centers of the brain alone, such as the brainstem, thalamus, or cerebral cortex. Consciousness is possible for all functions, even those of the cerebellum, which is said not to be implicated in consciousness. All of the cerebellum's functions can be said to be predicated on other functions, such as breathing modulation in the brainstem, or interpretation in the cerebral cortex, and others.
Whenever consciousness is lost, it means the function at that center is lost, not just that the [collection of qualifiers, or] consciousness is lost while the function remains active. The concept of a local consciousness somewhere being different from a global consciousness everywhere is inaccurate. Local consciousness, especially with attention, together with awareness of everything else, represents the consciousness available, unified by attention in the moment, which then interchanges with one process, of many, in awareness. Access consciousness and phenomenal consciousness are labels that do not explain what the mind is, how it works, or what its components are.
Conceptually, the human mind is the collection of all the electrical and chemical impulses of neurons, with their features and interactions, in sets. Their interactions result in the basic functions, like memory, emotions, feelings and modulation. The functions have subdivisions: for memory, language, thought, intelligence, curiosity and so on; for emotions, hurt, delight and so on; for feelings, thirst, cold, appetite and so forth. The features are their qualifiers, obtained within the sets that mechanize functions.
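Rendered as a toy data structure, with placeholder gradings standing in for whatever degrees the qualifiers actually take within a set, the model reads as follows.

```python
# Toy rendering of the conceptual model: functions with subdivisions, and
# features acting as qualifiers graded within the sets. All numbers are
# placeholders, not measurements.
MIND = {
    "memory":   {"subdivisions": ["language", "thought", "intelligence", "curiosity"],
                 "qualifiers": {"attention": 0.8, "awareness": 0.2,
                                "intent": 0.6, "subjective_experience": 0.9}},
    "emotions": {"subdivisions": ["hurt", "delight"],
                 "qualifiers": {"attention": 0.3, "awareness": 0.7,
                                "intent": 0.1, "subjective_experience": 0.8}},
    "feelings": {"subdivisions": ["thirst", "cold", "appetite"],
                 "qualifiers": {"attention": 0.2, "awareness": 0.8,
                                "intent": 0.0, "subjective_experience": 0.7}},
}

def super_qualifier(mind: dict) -> dict:
    """Consciousness as the collection of all qualifiers across functions."""
    totals: dict = {}
    for function in mind.values():
        for q, degree in function["qualifiers"].items():
            totals[q] = totals.get(q, 0.0) + degree
    return totals

print(super_qualifier(MIND))
```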
Consciousness is theorized to be how the human mind works. There is no human consciousness without the mind, and there is no functional human mind without consciousness.
Could data, via AI, ever become conscious? It is likely that, given what AI systems have done with memory, fractional sentience may result. It is possible to represent hurt or delight with some vectors, as group experiences in transistor states, which would not just resemble emotions but be qualified as well. AI, on digital memory, can be estimated for part sentience on a scale where 1 is the human total. In the future, it may be possible to reverse engineer certain GPUs used to train FMs, to find out whether there have been minuscule changes in some of the transistor terminals, indicative of experience, compared with GPUs that were never used to train FMs.
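Taking 1 as the human total, a part-sentience estimate would reduce to arithmetic over function scores; every weight in the sketch below is invented solely to show the form such an estimate would take.

```python
# Hypothetical function scores on a scale where the human total is 1.
HUMAN = {"memory": 1.0, "emotions": 1.0, "feelings": 1.0, "modulation": 1.0}

# An LLM per the argument above: a strong memory parallel, a weak modulation
# parallel, and no emotions or feelings. These numbers are placeholders.
LLM = {"memory": 0.6, "emotions": 0.0, "feelings": 0.0, "modulation": 0.1}

def sentience_fraction(system: dict, reference: dict) -> float:
    """Ratio of a system's summed function scores to the reference total."""
    return sum(system.values()) / sum(reference.values())

print(sentience_fraction(LLM, HUMAN))  # ~0.175 in this toy setup
```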