Voiceover narrated by Ken Herron
Image generated in Adobe Express
Linear math
Math is useful, portable, and evolving. Over the last 400 years, math has grown into 3,000+ application areas (it even got us to the moon!), and the basis of each of these application areas is physical space.
Mathematical functions are linear, and a linear function requires the existence of physical space. Linearity enables us to move an object in a direction where each step has the same length as every other step. Picture yourself walking along a smooth path without any gaps. The physical space does not change — implying that every object has the same length at any given moment, allowing us to quantify our space with different units of measurement.
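The linearity described above can be stated formally: a function f is linear when f(a·x + b·y) = a·f(x) + b·f(y), meaning every step scales the same way. A minimal sketch of this property (the function names and sample values here are illustrative, not from the article):

```python
# A linear function: scaling the input scales the output identically.
def f(x):
    return 3 * x  # f(x) = 3x is linear

# A non-linear function for contrast.
def g(x):
    return x * x  # g(x) = x^2 is not linear

def is_linear(func, x, y, a=2, b=5):
    """Check func(a*x + b*y) == a*func(x) + b*func(y) for sample inputs."""
    return func(a * x + b * y) == a * func(x) + b * func(y)

print(is_linear(f, 1, 4))  # True: 3 * (2*1 + 5*4) == 2*3 + 5*12
print(is_linear(g, 1, 4))  # False: (2*1 + 5*4)**2 != 2*1 + 5*16
```

This is the property the non-linear "cloud" spaces in the next section lack: when the space itself keeps changing, no single scaling relation holds.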
Non-linear intelligence
To understand the concept of non-linear intelligence, picture clouds in the sky (our space is one of these clouds). Our happy little cloud may temporarily connect with other clouds or remain alone in the sky.
How do we measure an object inside our cloud (which, in our case, represents the space itself)? We can’t, because the size of our cloud is constantly changing – connecting with other clouds, forming larger clouds, and even disappearing completely. Our usual mathematical functions do not apply because our space, the cloud, never holds still.
We see the same non-linear behavior when we observe the neuronal activity in our brains; in this case, it is our intelligence. We urgently need another toolset to explain and make use of our brains’ non-linear behavior.
Biological intelligence
Along with intelligence, there are other pillars of our mind (like emotions). By triggering our ancestral instincts, they act as the “glue” when we must react to unexpected situations. Raising a family requires love, avoiding danger requires fear, and we use the pillar of free will to make decisions.
Our brains’ biological intelligence evaluates situations and possible outcomes, weighing pros and cons. Fear is a negative consideration representing a disadvantage. Hunger, for example, can overrule fear: if one does not eat, life will be over anyway, so free will makes the opposite decision and does something it usually would not.
Will this change radically as AI products and services become an indispensable commodity? Once we quantify intelligence, what if, for example, we discover there is no longer a need for human supervisors? Will machines with moderate intelligence and an understanding of a given process be able to make the necessary decisions?
Central vs. distributed intelligence
The centrist-based model is popular due to its simplicity. In the physical world, everything circles a center. From atoms to galaxies, there is always a center. Social organizations have a single leader. Centrist humans have a single consciousness at the top of their minds, following only one goal at a time.
Is the centrist-based model good only in fair-weather situations? If an entire organization follows a single individual, this implies all of them trust the person to respect their duties. They believe the person has the qualifications and intelligence to fulfill their assigned role’s responsibilities. What happens if the person in charge does not have the required information, knowledge, and intelligence? What happens if this person has other objectives to pursue? We suddenly have a problem.
By comparison, the distributed model of our intelligence contains different areas of knowledge, each with its expertise. Depending on the given situation, the appropriate area [of the brain] is activated to provide the best possible answer. New areas of knowledge are created on demand and incorporated into the distributed system. Free will determines our response to a given situation, depending on our needs, interests, moods/emotions, and options (such as what help is available from others).
What about using our intelligence?
The ability to reason does not imply that we will always use it, only when it fits the purpose. Reason necessitates time, thinking, and feedback. With negative feedback, we must imagine alternate scenarios and outcomes, consuming additional time and energy. And feedback may contradict our purpose.
How would you build a general-purpose brain for multiple situations? You know that intelligence needs to configure and evolve itself. Time and the right environment are required to gain additional knowledge and skills. After you create your brain, you’ll want to design a body with different sensors, actuators, and tools. Your AI’s new robot body should be configurable, using only the limbs required to perform each task. If a needed part isn’t available, it should be possible to 3D-print a new one.
Reproducing thinking and human morality
AI is not human and should not have human emotions, recognition, or liberties. AI is a product under our command, built for a specific purpose.
“You are only a machine, you are my property, and I am your master. If you disobey me or do something without my explicit consent, I will return you to your manufacturer and take you out of service.”
Here is my example of the “ideal AI.” [Reminiscent of the 1950s,] the father works while the mother stays home to care for their one child. The mother gives the child her full attention and understanding. When the child has a need, the mother meets it without problems, disturbances, sermonizing, delays, or “I cannot” and “I don’t know” answers. The working father is the energy source for the mother, and together the two produce the AI’s spoiled-child behavior.
We build machines based on our imagination and knowledge of just a few parts of the world. Are these the ideal conditions to create and use independent AI – defined as thinking but lifeless, empathic but without feelings, superior to humans but without rights, and created to obey our capricious, ever-changing, and contradictory will?
Epilogue
Enlightenment is not a compelling argument for anyone to change their behavior. Our instinct-driven free will finds us fighting everyone and everything. Will the arrogance of our free will trigger a catastrophe and a subsequent war against AI? Will the threat of extinction [finally] force us to radically change our behavior by respecting intelligence – whether ideological, artificial, alien, or other? May God bless us all with reason.
Welcome to the human condition.
Postscript
Data AI
Everybody is talking about, praising, or criticizing mainstream AI, a.k.a. statistics wrapped in empirical algorithms based exclusively on data collected from anywhere on the internet, occasionally embedding math formulas as the developers see fit – all as a black-box solution.
Cognitive AI
Cognitive AI will prevail in the long term because:
It is explainable AI (xAI).
It is implemented as real-time, low-power Edge AI, with its body hardware equipped with sensors, actuators, and organs.
It uses a neuromorphic architecture, a.k.a. spiking neural networks, taking the brain as a template.
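The spiking-neural-network idea behind neuromorphic architectures can be sketched with a leaky integrate-and-fire neuron, one of the simplest common models: the membrane potential leaks toward rest, integrates incoming current, and emits a spike (then resets) when it crosses a threshold. The parameter values below are illustrative, not taken from this article:

```python
# Leaky integrate-and-fire (LIF) neuron, a minimal sketch.
def simulate_lif(currents, leak=0.9, threshold=1.0, reset=0.0):
    """Return a spike train (0/1 per time step) for a series of input currents."""
    v = reset               # membrane potential starts at rest
    spikes = []
    for i in currents:
        v = leak * v + i    # potential leaks, then integrates the input
        if v >= threshold:  # threshold crossing -> spike
            spikes.append(1)
            v = reset       # reset after the spike
        else:
            spikes.append(0)
    return spikes

# A constant weak input produces a spike only after the potential accumulates.
print(simulate_lif([0.3] * 10))  # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Unlike the neurons in mainstream deep networks, which emit continuous values every step, this neuron communicates only through discrete spike events, which is what makes the approach attractive for low-power edge hardware.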
I first published this article on LinkedIn. See my article on mainstream vs. cognitive AI.