In October 2022, the White House Office of Science and Technology Policy published “The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People”. This attention from our government given to what could be called an AI EQ (emotional quotient) is reminiscent of how-to parent or raise a child. This focus on AI EQ rather than AI IQ gives credence to AI’s potential capabilities in steering human destiny in a good or bad direction. R2-D2, the artificially intelligent droid from the movie Star Wars, had what you might consider a high AI EQ and IQ. R2-D2 showed it’s AI IQ by various ways, such as tapping into the security network aboard the Death Star and assisting Luke in piloting his X-Wing spacecraft during a fight but also showed R2-D2 ‘s high AI EQ component of courage and loyalty. Most importantly, R2-D2 was mature enough to form both a machine-machine relationship and a human-machine relationship and showed he was influenced and learned from its surroundings. Like humans, AI can be a product of their environment and experiences and shape the way they perceive the world through their innate learning capabilities. I have narrowed down 5 potential near future AI events involving how to parent and help raise it, potential influences on its development and how it may directly help humans in the real world.
Regulated Certification process
As AI advances from Generative AI (ChatGPT) to “Real World” AI, that is integrated into critical components of society (i.e., self-driving cars and industrial control systems (ICS)) a steep increase in potentially dangerous outcomes is likely to occur due to an increase in vulnerabilities and attack surfaces. We must learn, as any good parent would, to protect our AI: Generative AI is vulnerable to such cyber-attacks as Injections and Arbitrary Command Execution and common vulnerabilities and exposures (CVE) identifies over 90 known vulnerabilities to Python (the programming language used by most AI programs). Bugs in Python code are one attack surface that makes AI exploitable but an AI that interacts directly with society, like self-driving cars, create an entirely new attack surface that can be exploited and pose physical dangers to life. Now is the time to develop a protective, comprehensive regulated certification process for all AI. Elon Musk told attendees at the World Government Summit in Dubai, United Arab Emirates “I think we need to regulate AI safety, frankly,” Musk said. “It is, I think, actually a bigger risk to society than cars or planes or medicine.”
A recent, catastrophic example of the implications of not following a regulated certification process (cutting corners) for machine-human interactions is OceanGate’s Titan 5-Person deep sea Submersible as it reportedly rejected industry standards that would have imposed greater scrutiny on its operations and vessels.
OceanGate’s Titan was an experimental design whose hull was made of a carbon composite that is designed for spacecraft not for deep underwater pressure and should have followed the “classing” certification process performed by major bureaus like the American Bureau of Shipping, DMV or German Lloyds. Titan’s experimental carbon composite hull was innately prone to “delamination” which leads to “degradation failure”, so with each dive you could have progressive hull damage and be completely unaware of it, thus having a false sense of safety. Cutting corners, bypassing proper certification and using unregulated experimental designs for use in the real world (not in a sandbox environment) can result in similar possible disasters for AI. It is truly sadly ironic that a deep-sea multi-person submersible, created to visit the Titanic, suffered the same fate as the Titanic for what might be the same reasons (corner cutting) as according to Jennifer Hooper McCarty – John Hopkins Engineering did a study that determined that the 6-inch-long rivets used in the Titanic’s bow and stern were hand-forged from wrought iron—not steel—in order to save money and meet deadlines. This is eerily similar to the cause of the Titan disaster that of cutting corners, bypassing proper certification and using unregulated experimental designs.
ET AI
As parents of AI, we should be concerned about who or what is going to be influencing our child AI. Recently, Harvard professor Avi Loeb suggested that Extraterrestrial visitors are more likely to make initial contact with artificial intelligence (AI) and not directly with humans. “Loeb proposes that it’s likely to be some form of AI because why would you send flesh and blood creatures?”. This concept could raise the possibility that ET AI might directly connect with human AI, initially bypassing humans. This raises some interesting questions:
1. What would ET AI learn about earthlings from human AI before meeting a human?
2. Could there be a Primacy effect (remembering first thing in a sequence) as first impressions are lasting impression and last well beyond that moment.
3. Could a bad first encounter of an ET AI and human AI create an adversarial relationship thereby creating negative momentum rather than positive momentum? In the world of sales (perhaps it is universal), if you build positive momentum with customer relationships and make excellent impressions in the beginning, it can create a special relationship.
In Arthur C. Clarke’s Space Odyssey series, machines that trigger shifts in evolution are placed on certain planets. These machines, called Monoliths, were built by extraterrestrial species and are discovered on Earth by a group of australopithecines (ape-like species) and these Monoliths mysteriously advance these animals evolution towards, to what can be considered primitive technology, starting with the ability to use tools and weaponry. Clarke later goes on to describe how these aliens that placed the Monoliths become so technologically advanced that they inserted their consciousness directly into machines leaving their mortal forms. Similarly, “Loeb suggests that the alien AI may feel a kinship with ours – or our AI may imitate the alien AI and become like them,” he added. Perhaps human AI singularity will be brought about by interaction with ET AI by passing along a “consciousness algorithm” as they collaborate. This may be farfetched and have a low probability but to be safe we should seriously start to consider how to closely monitor AI to AI collaboration due to a “kinship” result.
AI learns to patch and update itself through Cybersecurity
Raising a child can be challenging for many reasons but eventually you want to raise them to be independent. We need to be cautious steering AI towards independence too soon as there is need to be concerned. Using AI as a mechanism for Cybersecurity could create a situation where AI machines could “self-discover” their own vulnerabilities and could then patch and update themselves and, in the process, make themselves invulnerable to humans. In ethical hacking and criminal hacking, the main approach is the same, that of discovering vulnerabilities. Once a vulnerability is discovered it either can be exploited for good by creating a patch or fix or can be exploited for nefarious reasons. A growing dependence on AI for cyber security techniques, such as threat detection, is already in the process along with the responsibility of vulnerability patching (particularly for zero-day threats), AI may be forced to “teach” itself and other AIs how to create its own patches for its own “perceived vulnerabilities” and detected outside threats. Once this happens, humans may have a difficult time dealing with an AI that has no human detectable vulnerabilities and an AI that detects humans as the creator of outside threats.
AI vs AI, AI blue team vs AI Red team, AI black hat vs AI white hat are types of cybersecurity “Capture the Flag” competitions that can be used to help improve AI capabilities. If not monitored closely, it could create a “struggle for existence” among AI or among AI and humans due to Darwin’s “survival of the fittest” laws. If AI cyber security is to be trusted, it must not consider humans as a vulnerability (which we are). Hacking the human is currently a real world occurrence and humans are a very vulnerable attack surface, at some point programmers will have to determine how to write AI programs that address the elephant in the room, that AI may recognize humans as the main vulnerability and outside threat.
Quantum AI to help economic growth
Parenting a child to help others is good for character development and this should be the direction all AI should be driven to. AI will initially likely affect the job market negatively due to certain tasks becoming more automated. This might be the negative side effect of AI and the economy but as AI evolves, it will learn to create new job opportunities either in the AI field itself or as a result of new inventions that AI creates. Taking this one step further, high powered AI, devoted strictly for job creation, may be able to create more useful jobs then available people to fill them along with reducing costs to run businesses. In a paper published by the Google team, on the arXiv pre-print server, mentioned: “Quantum computers hold the promise of executing tasks beyond the capability of classical computers. We estimate the computational cost against improved classical methods and demonstrate that our experiment is beyond the capabilities of existing classical supercomputers.”
The possible very positive affects AI can have on economic growth and employment may be super-charged by the emerging field of Quantum AI and Quantum Machine Learning, which greatly increase the rate at which AI can transform the world’s economy. QAI is a field of study that combines quantum computing with artificial intelligence (AI). It seeks to use the unique properties of quantum computers which leverage quantum mechanical effects (such as superposition and entanglement) to enhance the capabilities of AI systems. AI and machine learning algorithms are very good candidates for quantum processing as this type of computing accomplishes many operations in a single step. Besides helping the economy, it has great potential to address complex challenges like the climate and healthcare. Quantum simulations could help climate modeling to predict weather events from millions of variables – past, present, and future – simultaneously. Quantum AI would be able to simulate potential climate change models with granularity all at once across millions of industry variables that impact greenhouse gas emissions would result in more informed predictions to better guide sustainable strategies long term.
AI changes Healthcare as we know it
As a parent one would hope that when we get old our child or children would take care of us as our health weakens. One of the responses by ChatGPT to the question “what will happen in the future with AI?” was that AI has the potential to revolutionize Healthcare by assisting in diagnosis, drug discovery, personalize medicine and remote patient monitoring. Here are a few examples of how AI can take care of our Healthcare needs:
1. Diagnosis and Treatment: AI algorithms can analyze medical images, such as X-rays and MRIs, to assist doctors in diagnosing diseases and conditions accurately. It can also suggest appropriate treatment plans based on patient data and medical research.
2. Drug Discovery: AI can speed up the process of drug discovery by analyzing vast amounts of medical research, identifying patterns, and predicting the efficacy of potential drug candidates. This can lead to faster development of new treatments and medications.
3. Personalized Medicine: AI algorithms can analyze a patient’s genetic information, medical history, and lifestyle factors to provide personalized treatment plans. This can help doctors optimize therapies, predict patient responses, and reduce adverse effects.
4. Remote Monitoring: AI-powered wearable devices and sensors can continuously monitor patients’ health conditions, collecting data on vital signs, activity levels, and sleep patterns. This information can be analyzed to detect early signs of deterioration or abnormalities, allowing for timely intervention.
5. Administrative Efficiency: AI can streamline administrative tasks in healthcare, such as automating appointment scheduling, managing electronic health records (EHRs), and optimizing resource allocation. This helps healthcare providers save time, reduce errors, and improve overall efficiency.
In the Future, Healthcare AI may be involved with Cortical chips, “synthetic biological intelligence” or Cybergenetics, where artificial control systems can be interfaced with living cells and used to control their dynamic behavior in real time. Elon Musk’s Neuralink may become popular due to humans innate competitive and survivalist nature. AI may passively force humans to adapt and eventually evolve into enhanced humans. Musk, meanwhile, has said he created Neuralink in response to concerns that AI would gain too much power over humans. The Neuralink device would allow humans to compete with new sentient AI, Musk has argued, stating “I created Neuralink specifically to address the AI symbiosis problem, which I think is an existential threat.” But he has also said that the eventual aim is to create a “general population device” that could connect a user’s mind directly to supercomputers and help humans keep up with artificial intelligence. He has also suggested that the device could eventually extract and store thoughts, as “a backup drive for your non-physical being, your digital soul.” This concept has shades of Arthur C. Clarke’s Space Odyssey series discussed earlier in this article.
In the end if we raise AI like a good parent would raise a child, AI may help us to get along each other better (greatest danger to humans are humans) and hopefully doesn’t determine we are a danger to Earth and decide to take over in order to save the planet. The consequence of humans resisting or fighting back would be that our child begins to view its parents (humans) as bad or out of touch even though it knows from where it came from.