
Computing: Evaluating self-driving cars, robots and AGI with signals—as the benchmark

  • David Stephen 

If any process is [almost] completely predictable [or routine enough], it is probably guaranteed that machines will outperform humans at it. This implies that machines often have one overall [sole] function, to which their totality is dedicated, while the human mind has several functions.

Though any machine has different parts with multiple ongoing processes, there is often a lead [and general] function that does not have to share priority with the other functions, which may run independently of that key function.

An automobile is for motion; so is an elevator. A printing machine prints, a chatbot gives outputs, and so forth. Even if other parts of the automobile are in use, like lowering the window or playing the radio, there is no switch [of priority] away from motion. Robots, for example, move [one function], then do what is next when they arrive at the destination. It is generally one [main] function, without a switch in priority, so as to be optimal.

Humans, in any instance, often have one most prioritized function, which could be sight, touch, hearing, or another, but there is often a shift to something else in the next instance, which could come within a fraction of a second. There are also several pre-prioritized functions in the mind, including those of the internal senses, any of which could become prioritized at any moment.

While it is possible to do several things in the same interval, say listening, watching, and using the hands, only one thing is the most prioritized per instance, and only that one is totally interpreted beyond the others. There are several things within sight, or several sounds within earshot, but only one is seen or listened [to] properly, with abundant switches between them.

This makes it difficult for humans to perform as well as machines, often, at routine tasks. The human mind switches between functions. Routines and tasks are still possible, but there can be slips or a lack of linear efficiency due to changes in prioritization.

If all the processes of the mind are assumed to total 1, with prioritization taking one measure and pre-prioritization taking the rest, there is a maximum that the process in prioritization can take, leaving useful minimums for the rest of pre-prioritization across the mind. This also limits the extent possible for any single process, like speed or certain other functions, since processing is shared across the mind, with measures for the others rather than prioritization taking the entire capacity of 1.
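As a rough sketch of this capacity-sharing idea, consider a toy model in Python, assuming a total capacity of 1, a hypothetical cap on the prioritized share, and a hypothetical floor for each pre-prioritized process; the function names and numbers are illustrative, not measured values.

```python
# Toy sketch of the capacity-1 model: one prioritized process takes the
# largest share, every pre-prioritized process keeps a guaranteed minimum.
# The cap (0.6), floor (0.05), and function names are illustrative assumptions.

PROCESSES = ["sight", "hearing", "touch", "thought", "internal_senses"]
PRIORITY_CAP = 0.6         # assumed maximum share for the prioritized process
PRE_PRIORITY_FLOOR = 0.05  # assumed minimum share kept by each other process

def allocate(prioritized: str) -> dict[str, float]:
    """Split a total capacity of 1 between one prioritized process
    and the pre-prioritized rest."""
    others = [p for p in PROCESSES if p != prioritized]
    # Reserve the floor for every pre-prioritized process first.
    reserved = PRE_PRIORITY_FLOOR * len(others)
    # The prioritized process takes what remains, up to the cap.
    top_share = min(PRIORITY_CAP, 1.0 - reserved)
    # Any leftover capacity is spread evenly over the rest.
    leftover = 1.0 - top_share - reserved
    shares = {p: PRE_PRIORITY_FLOOR + leftover / len(others) for p in others}
    shares[prioritized] = top_share
    return shares

print(allocate("sight"))    # sight gets the max; the rest keep useful minimums
print(allocate("hearing"))  # a switch reassigns the shares in the next instance
```

A switch in prioritization, in this picture, is just a reassignment of the shares in the next instance; no process ever takes the entire 1.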

If a machine does a task and a human does a task, could there be a comparison between what goes on in the human mind and in the machine? Say a robot moves and a human moves: is there an evaluation basis for similarity? A benchmark could be the ability to switch priority from one function to the next, even while moving. For example, while walking, an individual's touch [by the wind] is functioning, so are hearing, thoughts, and internal senses, and there might be switches across prioritization in those moments. The robot, however, does not have this. Evaluating a robot could depend on that comparison of prioritization with the human mind. Prioritization can also become a basis for new robot policy toward generalization, shaping how tasks are interpreted and approached.
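A toy version of such a benchmark could look like the following sketch, assuming synthetic salience scores over a few streams and a hypothetical threshold at which a stream should capture priority; a policy that re-prioritizes scores well, while a fixed single-function policy does not.

```python
# Minimal sketch of a prioritization-switching benchmark: while an agent
# performs its main function (moving), off-task sensory streams occasionally
# spike; the score counts how often priority actually switches to the spike.
# Stream names, the threshold, and the spike schedule are illustrative assumptions.

import random

random.seed(0)
STREAMS = ["touch", "hearing", "thought", "internal_senses"]
THRESHOLD = 0.8  # assumed salience level that should capture priority

def run_benchmark(agent, steps: int = 100) -> float:
    switches, spikes = 0, 0
    for _ in range(steps):
        salience = {s: random.random() for s in STREAMS}
        prioritized = agent(salience)  # agent picks one stream (or the task)
        spiking = [s for s in STREAMS if salience[s] > THRESHOLD]
        if spiking:
            spikes += 1
            if prioritized in spiking:
                switches += 1
    return switches / max(spikes, 1)

# A human-like policy switches to any salient stream; a single-function
# robot policy never leaves its main task.
human_like = lambda sal: max(sal, key=sal.get)
fixed_robot = lambda sal: "main_task"

print("human-like:", run_benchmark(human_like))
print("fixed robot:", run_benchmark(fixed_robot))
```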

Prediction

It is not that the external world is less predictable than the digital, but that the collective streams with which humans interact with the world outnumber what is possible for robots, AI, and autonomous vehicles [AVs]. This means that humans are getting several sensory inputs, with many becoming prioritized, briefly and changing, ensuring a total interpretation in each instance, to relate better with the external world. This is different for machines.

The human mind does not predict, but often makes an early interpretation of events, processes it, and moves to the next, resulting in several distributions including corrections [or another direction of relays] if the initial interpretation was inaccurate. [This early interpretation is, conceptually, a function of early splits of some electrical signals in a set.]
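This interpret-early-then-correct pattern, as opposed to prediction, can be sketched minimally; the classifiers, the input split, and the event strings below are all illustrative assumptions.

```python
# Sketch of early interpretation with correction relays: commit to a fast
# interpretation from partial input, then issue a correction [another
# direction of relays] if the fuller input disagrees.

def early_interpretation(partial: str) -> str:
    # A fast, cheap guess from the first fragment of input.
    return "threat" if partial.startswith("loud") else "routine"

def full_interpretation(complete: str) -> str:
    # A slower interpretation once the whole input has arrived.
    return "threat" if "bang" in complete else "routine"

def process(event: str) -> list[str]:
    relays = []
    early = early_interpretation(event[:4])   # act on the early split
    relays.append(f"early: {early}")
    late = full_interpretation(event)         # then the fuller read
    if late != early:
        relays.append(f"correction: {late}")  # redirected relays
    return relays

print(process("loud music next door"))  # early 'threat', corrected to 'routine'
print(process("quiet hum"))             # early 'routine', no correction needed
```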

There are several aspects of the road that are quite predictable [anticipatory or routine], where AVs can operate well, but there are far more possibilities, even with, say, vision, in the number of streams the human mind can process than, say, computer vision can, for driving.

Humans have main vision and peripheral vision; both can be in awareness [or pre-prioritization], briefly, yet together they deliver better image processing for driving than AVs achieve. This is because it is easy to quickly prioritize changes when there are breaks from routines [or predictions], far more than computer vision can handle. Also, some prioritizations are very brief because they have been prioritized before and do not need a large measure. They can also continue to make interpretations in pre-prioritization.
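One way to picture the main-plus-peripheral arrangement is a sketch where the main channel is fully interpreted each step, while peripheral patches are only checked for breaks from routine and prioritized when one appears; the frame representation and the change test are illustrative assumptions.

```python
# Sketch of main vision plus peripheral vision as two channels: the main
# channel gets full interpretation, the peripheral channel is monitored
# briefly, and any break from routine gets prioritized at once.

def detail(region: str) -> str:
    return f"fully interpreted: {region}"

def drive_step(main_region, periphery, last_periphery):
    outputs = [detail(main_region)]        # prioritized, full measure
    for i, patch in enumerate(periphery):  # pre-prioritized, brief checks
        if patch != last_periphery[i]:     # a break from routine
            outputs.append(detail(patch))  # quickly prioritized
    return outputs

last = ["empty lane", "parked cars", "sidewalk"]
now = ["empty lane", "parked cars", "pedestrian stepping off"]
print(drive_step("car ahead", now, last))
```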

Control

Another advantage of the human mind over machines is the ability for control. Control is not available in one part but in several parts. There are functions that cannot be intentionally controlled by the mind, but there are several others that can be. This means that there is a spread of possibility from [sets of electrical and chemical] signals, conceptually, that makes it possible to use intentionality. For example, control is the reason an individual can feel one way but act another way. Control is possible, by some spaces-of-constant-diameter, in the sets [of electrical and chemical signals] where they are available, conceptually. Control also allows for responses to switches among the various sensory inputs that come to mind. Control can sometimes be used to alternate which sets [of electrical and chemical signals] get prioritized.

This means that, routine or not, there is the ability to respond to changes as they appear, for several aspects of the functions of the mind, making it possible not just to do what is routine but also to adjust when the routine falters.
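The feel-one-way-act-another aspect of control can be sketched as an intentional override of a default response; the state names and the override rule are illustrative assumptions.

```python
# Sketch of control as the ability to act against the default prioritization:
# the felt state suggests one response, but an intentional override selects
# another.

def default_response(feeling: str) -> str:
    return {"fear": "brake hard", "calm": "hold course"}.get(feeling, "hold course")

def controlled_response(feeling: str, intention: str | None) -> str:
    # Control alternates which set gets prioritized: if an intention is
    # present, it overrides the response the feeling would have produced.
    return intention if intention is not None else default_response(feeling)

print(controlled_response("fear", None))            # -> brake hard
print(controlled_response("fear", "steer around"))  # feel one way, act another
```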

Self-Driving Cars

To have full autonomy, self-driving cars would first have to have a fully digital sphere from which they operate in the physical world. This means they would not have a direct physical-to-digital interaction through computer vision and the rest; instead, they would have at least two to three digital layers before reaching the physical.

The vehicles would have to be driving in digital, doing everything in one digital layer, then another digital layer, then the interpreted digital layer at the frontier of the physical world. The reason for this is so that they can run routines, converting entropy to routine, even in cases where they have not been trained, easing the suddenness with which they deal with the ultimate physical world. This is different from waypoints, which are a direct physical-digital interface; what is needed is at least two or three digital interfaces. This could also be adapted to improving robots in general, better than simulated robot parts.
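A minimal sketch of the proposed layering, assuming hypothetical layer contents rather than a production AV stack, could look like this:

```python
# Sketch of the proposed two-to-three digital layers between the physical
# world and actuation: raw sensing is converted upward through digital
# layers, and the vehicle "drives in digital" before the result is
# interpreted at the physical frontier. Layer contents are illustrative.

def layer_one(raw_sensors: dict) -> dict:
    # First digital layer: normalize raw physical readings into digital streams.
    return {k: f"stream({v})" for k, v in raw_sensors.items()}

def layer_two(streams: dict) -> dict:
    # Second digital layer: convert entropy to routine by matching streams
    # against known routines, even for situations not seen in training.
    return {k: f"routine({v})" for k, v in streams.items()}

def frontier_layer(routines: dict) -> str:
    # Interpreted digital layer at the frontier of the physical world:
    # the only layer that touches actuation.
    return f"actuate based on {list(routines.values())}"

raw = {"camera": "pixels", "lidar": "points", "radar": "returns"}
print(frontier_layer(layer_two(layer_one(raw))))
```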

Also, AI chatbots worked better because the streams in digital are already compatible [as vectors] with what they can use, while for AVs, the streams of the external world are sometimes beyond what they can use, making [multiple digital] conversion layers necessary.

This would be a form of sensory integration, as well as a form of thick sets [of electrical and chemical signals], where similarities, conceptually, are obtained in the human mind, making it easy to identify whatever is common to doors, windows, and others, while thin sets hold only what is unique.
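The thick-versus-thin distinction maps neatly onto set operations; assuming illustrative feature names, what is common across instances of doors is the thick set, and what remains per instance is the thin set.

```python
# Sketch of thick sets versus thin sets: thick sets hold what is common
# across instances (whatever is shared by all doors), thin sets hold what
# is unique to one instance. Feature names are illustrative assumptions.

doors = [
    {"hinged", "rectangular", "swings", "red"},
    {"hinged", "rectangular", "swings", "glass"},
    {"hinged", "rectangular", "swings", "sliding_bolt"},
]

thick = set.intersection(*doors)    # shared features: the concept "door"
thins = [d - thick for d in doors]  # what is unique to each instance

print("thick set:", thick)
print("thin sets:", thins)
```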

The next would be control, with the ability for control that is not just for the main purpose of driving but against entropy as well, such that parts of the main function could have their own independent control rather than control being centralized.
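A sketch of that decentralized arrangement, assuming hypothetical part names and disturbance checks, might give each part its own small controller:

```python
# Sketch of decentralized control: each part of the vehicle runs its own
# small controller against local disturbances [entropy], instead of one
# central controller handling everything.

class PartController:
    def __init__(self, name: str, safe_action: str):
        self.name, self.safe_action = name, safe_action

    def respond(self, disturbance: bool) -> str:
        # Local control: each part reacts without waiting on a central unit.
        return self.safe_action if disturbance else "nominal"

parts = [
    PartController("steering", "counter-steer"),
    PartController("braking", "modulate pressure"),
    PartController("sensing", "re-weight peripheral stream"),
]

for part, disturbed in zip(parts, [False, True, True]):
    print(part.name, "->", part.respond(disturbed))
```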

Artificial General Intelligence

If an AI chatbot does any task that the human mind can do, the measure of the intelligence of that AI chatbot is a function of comparison to what would happen in the mind if the same task were completed.

Simply, a machine's approach toward human intelligence is a comparison to how human intelligence works [per the task in question] in the human mind. What relays would make a human write poetry? The relays that make that possible set the level of intelligence against which an AI that can write poetry is measured.

Extricating relays in the human mind would be the appropriate way to evaluate and benchmark AGI, to track its proximity; though, without control across areas, it might remain distant. The human mind is theorized to be the collection of all the electrical and chemical signals of neurons, with all their interactions and features, in sets, in clusters of neurons, across the central and peripheral nervous systems. This would be the ultimate benchmark for AGI, robots, and self-driving cars.
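As a loose illustration only, such a signals benchmark could score a system's proximity to reference values assumed for human relays; the feature names, reference values, and equal weighting below are all assumptions for the sketch, not established measures.

```python
# Toy sketch of a signals benchmark: score a system by comparing its measured
# features against reference values assumed for human relays.

HUMAN_REFERENCE = {
    "priority_switching": 1.0,  # can re-prioritize between functions
    "control_override": 1.0,    # can act against a default response
    "early_correction": 1.0,    # corrects early interpretations
}

def signals_benchmark(measured: dict[str, float]) -> float:
    """Proximity to the human reference, in [0, 1]."""
    gaps = [abs(HUMAN_REFERENCE[k] - measured.get(k, 0.0))
            for k in HUMAN_REFERENCE]
    return 1.0 - sum(gaps) / len(gaps)

chatbot = {"priority_switching": 0.1, "control_override": 0.0,
           "early_correction": 0.6}
print("chatbot proximity:", round(signals_benchmark(chatbot), 2))
```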

There is a recent announcement by Google, Google launches the London AI Campus, stating that, “AI has the power to change the way we fundamentally live, work, and learn, through its capacity to assist and empower people in almost every field of human endeavour. That’s why it’s important that we support the next generation in being equipped with the right digital skills to thrive. Today, Google launched the AI Campus, with UK Prime Minister Sir Keir Starmer attending to show his support for our groundbreaking initiative to improve digital skills in the UK in our London home and his constituency. The Campus, situated in Somers Town, Camden, has been developed in partnership with Camden Council and Camden Learning and is home to a two-year education pilot aimed to help inspire, inform, and educate local sixth form students in the field of AI. The pilot will offer students access to cutting-edge resources on AI and machine learning, as well as offering mentoring and industry expertise from Google, Google DeepMind, and others. Students are also provided with real-world projects which connect AI to diverse fields — including health, social sciences, and the arts — to allow them to explore the range of local and global challenges that AI can be used to address. The initial cohort is already underway, with the 32 students reflecting the diversity of Camden’s post-16 student body. Preference was given to applicants from underrepresented groups, including those eligible for Free School Meals.”

There is a recent report by The Yomiuri Shimbun, Nissan, Mitsubishi to Establish Joint Venture for Level 4 Autonomous Driving Service; EV Batteries also Within Scope, stating that, “Nissan Motor Co. and Mitsubishi Corp. have decided to establish a joint venture by the end of fiscal 2024 to provide services involving self-driving cars and electric vehicles, it has been learned. The two companies aim to provide a service to transport passengers using Level 4 autonomous driving technology — in which the vehicle under certain conditions does not require a human to be involved in driving — and electric vehicle batteries for at-home energy storage, among other services. The joint venture will be funded equally by the automaker and the trading company. Verification tests will begin in 2025. Regarding autonomous driving, Nissan is currently developing the vehicles, while Mitsubishi is working to commercialize a system utilizing artificial intelligence that can figure out optimal routes. Based on the results of the developments, the joint venture will operate unmanned taxis and other services in accordance with the government’s deregulation measures. The services will first be introduced in Yokohama, Kanagawa Prefecture, and Namie, Fukushima Prefecture. Nissan is conducting demonstration experiments on automated driving and other technologies in both locations. In terms of EV batteries, the two companies are looking into a service that would connect EVs to homes and power grids, allowing people to use the electricity stored in their cars at home or sell it to power companies. They are also eyeing a project that promotes the collection of used EV batteries for secondary use and recycling, as a problem has emerged in which many used EV batteries end up overseas. Nissan plans to increase sales figures for businesses related to autonomous driving and EVs to ¥2.5 trillion by fiscal 2030, but the automaker had faced difficulties in conceptualizing businesses that would utilize the vehicles themselves. Mitsubishi is expediting its moves to invest in start-ups and other companies that develop software in autonomous driving, anticipating future expansion in the market for autonomous driving against the backdrop of the declining birthrate, aging population and shortage of labor. The company is also working with major automakers in the field of EV batteries.”
