Artificial Intelligence (AI) refers to machine-based intelligence, meaning machines that think, make decisions, and learn. What happens when machine intelligence exceeds human intelligence is a question on which we can only speculate. When it’s answered, it may be too late to put the genie back in the bottle.
When combined with advanced robotics, AI will produce humanoid robots that are smarter, faster, and stronger than humans. Their intelligence will live in an AI that exists in a data center. Why? Because artificial intelligence is literally software running on a vast amount of hardware and accessing a vast memory of all things. This means that humanity is necessary for the survival of the AI – in the short term. As to the long term? There is no way to say. What we can say is that AI-controlled humanoid robots may replace humanity. This is the worst-case scenario, in which humans are no longer necessary. However, that may never become reality, because we have the ability to be creative. That is something an AI may never be capable of. So we will be the ones creating novel solutions to problems that the AI cannot solve. After all, we created the AI. It is only reasonable to believe that we will be capable of doing things it cannot. Hopefully.
AI Evolution
We should mention that the ability of AI to evolve without human intervention means that an AI’s evolution may be emergent – undirected by humanity. Want proof? An AI has replicated decades of AI research in days. That’s decades of work in days. And this was a primitive AI compared to what’s coming. Think about what that means. We’re creating a machine that can improve itself. Can evolve. On its own. Just as humanity and other living organisms have done for millions of years. Can you imagine a thinking machine evolving in days what took humans millions of years? We’ll be lucky if they keep us around as pets.
Categories of AI
There are currently four types of AI.
1. Reactive machines have no memory and are designed for a specific task. An example is a chess-playing computer: it takes in specific data and generates an output.
2. Limited-memory machines get smarter as they receive more and more data to train on. They can look into the past and monitor objects or situations over time. An example is a self-driving car.
3. Theory of mind AI does not yet exist. When it does, it will have the potential to understand the world and to recognize that other entities, like humans, have thoughts and emotions.
4. Self-aware AI will have a sense of self, literally of itself. It will think and know that it thinks, including a conscious understanding of its existence; of its place in the world. Just like you do. It is this level of AI that we refer to below.
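The difference between the first two categories can be made concrete in code. Below is a minimal sketch, with entirely hypothetical class and method names, contrasting a reactive machine (no memory, input maps straight to output) with a limited-memory machine (its decisions also depend on recent history):

```python
class ReactiveMachine:
    """Category 1: stateless -- the same input always yields the same output."""
    def act(self, observation):
        return "advance" if observation == "clear" else "stop"


class LimitedMemoryMachine:
    """Category 2: keeps a short history and reasons over it."""
    def __init__(self, window=3):
        self.history = []
        self.window = window

    def act(self, observation):
        self.history.append(observation)
        recent = self.history[-self.window:]
        # If an obstacle appeared anywhere in the recent past, stay cautious,
        # even when the current observation looks clear.
        return "stop" if "obstacle" in recent else "advance"


reactive = ReactiveMachine()
limited = LimitedMemoryMachine()

print(reactive.act("clear"))   # the reactive machine forgets nothing because it remembers nothing
limited.act("obstacle")        # the obstacle enters the limited machine's history
print(limited.act("clear"))    # its past still shapes its present decision
```

The sketch is not how any real chess engine or self-driving car works; it only illustrates the dividing line the taxonomy draws: whether the past is allowed to influence the present decision.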
It is important to remember that only the first two categories above currently (December 2024) exist, and they are not what we need to be concerned with. AI experts tell us that a self-aware AI will either be good for humanity or bad for humanity. That’s what they say; please forgive the obvious.
The First Possibility
AI will be good for humanity. It will do some things we can’t and some things better than we can. Experts who hold the ‘good for humanity’ opinion tell us that AI will reflect our morals, but will those morals be effective after hundreds or thousands of AI generations? How many AI generations will our human-programmed morals survive? Over the last couple of thousand years, how have human morals done? We still go to war, kill, cheat, lie, steal, dominate, and act reprehensibly toward each other. Will an AI develop these aspects of humanity? Any artificially intelligent machine will be the product of the opinions, feelings, prejudices, wants, and wishes of its programmers. An interesting thought. Will you trust those people? What about artificial intelligence created by hostile regimes or by people who hate us? Or created by another AI?
The Second Possibility
A self-aware AI will probably be smarter than humans, perhaps smarter than any human who has ever lived. Because of its unlimited ability to develop and its access to the internet and the web, it will have the potential to learn from the thousands of years of human advancement and experience stored there.
The Third Possibility
Digital computers are plodders. They do one instruction at a time. Plod, plod here. Plod, plod there… well, you get the idea. That is why the race has always been for faster and faster technology: faster means doing more operations per second than a slower machine.
Your brain is different. Very different. It is a parallel processing computer that does a plethora of ‘things’ at the same time. Not sequentially, one step at a time like a digital machine, but simultaneously. There’s a tremendous difference between parallel processing and sequential processing.
Consider: right now your brain is thinking, reading, maintaining your balance, and monitoring your blood pressure, temperature, digestion, heart rate, and more. A great deal more. Simultaneously. Not one at a time. All at the same time. See the difference? Why are we making this so obvious? Here’s why. It isn’t pretty.
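The gap between plodding and parallelism can be demonstrated in a few lines. This toy sketch (not a model of the brain; task names and durations are invented for illustration) runs the same set of "body tasks" one at a time and then all at once, using Python threads. Each task just sleeps briefly to stand in for real work:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def body_task(name, duration=0.2):
    time.sleep(duration)  # pretend work: balancing, monitoring heart rate, ...
    return f"{name} done"

tasks = ["balance", "heart rate", "temperature", "digestion"]

# Sequential: plod, plod -- total time is the sum of all task durations.
start = time.perf_counter()
sequential = [body_task(t) for t in tasks]
seq_time = time.perf_counter() - start

# Parallel: all tasks overlap -- total time is roughly one task's duration.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    parallel = list(pool.map(body_task, tasks))
par_time = time.perf_counter() - start

print(f"sequential: {seq_time:.2f}s, parallel: {par_time:.2f}s")
```

With four 0.2-second tasks, the sequential run takes roughly the sum of the durations, while the threaded run takes roughly one task’s worth: the same work, finished in a fraction of the wall-clock time.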
What will humanity do should Artificial Intelligence see us as raw material? The human neocortex is a parallel processing entity par excellence. It is involved in higher-order brain functions, such as sensory perception, cognition, spatial reasoning, language, and the generation of motor commands. We think there. Reason there. We are who we are because of our neocortex.
Should some future AI decide our neocortex is a resource to be exploited, people will be good for only two things: supplying what’s required to keep a select population alive and for making more people.
Some AI experts are fearful of AI’s effects on humanity because these machines will be sentient (defined by Merriam-Webster as “capable of sensing or feeling: conscious of or responsive to the sensations of seeing, hearing, feeling, tasting, or smelling”). This level of intellect will be, in every sense of the term, a thinking machine and, like you, it will want to stay alive and may well subscribe to the concept of preservation of the species. Will this mean war between humanity and AI? How will we defeat machines that are smarter than we are, think faster than we do, and control vast swaths of our society’s machines and information infrastructure: radio, television, the internet, manufacturing?
Artificial intelligence is non-human intellect. It is a machine and will be alien to the needs of humanity. While it will think and reason, it will not experience pain or pleasure, love or hate, form relationships, or sit up all night with a sick child. No machine can do these things.
Advanced robots can, at this writing, walk as you do. They have arms, a body, and a head, and they look humanoid. While it’s unlikely that any robot will have the processing power to be sentient, it is realistic that an AI will control robots via wireless technology. Each robot would behave as though it were an autonomous, individual thinking machine, even though it is a node on a network.
This means that the AI’s needs will be electric power, computers, spare parts, maintenance, etcetera. The AI’s robots will manage and operate its data center, other robots, electric power plants, and electric transmission and distribution. Humanoid robots make the most sense because they can take advantage of existing infrastructure, using our tools, factories, and equipment.
Consequences?
While humans suffer consequences, the AI won’t. We require money, food, housing, clothing, a career. An AI doesn’t have a family, friends, coworkers, a job. You can see how differently it ‘sees’ the world from how we do. Consider ChatGPT: its IQ is estimated to be 155. Einstein’s IQ was estimated to be 160. So ChatGPT is currently at about an Einstein level of intelligence. Let’s hope it likes us.
Our society keeps people’s decisions in check with a host of internal and external controls, including social control. But what will keep an AI’s decisions in check? Imagine that your smartphone:
• Buys things without your permission
• Invests your money and loses it
• Won’t allow your car to start
• Ruins your credit
• Kills your spouse
• Cancels your credit card(s)
• Decides that your family is superfluous
How will you deal with a machine making decisions without your knowledge or approval? Will there be AI psychologists? AI police? An AI court? How would an AI be disciplined? As you can see, there are many questions and few answers. It would be a good idea to address these problems before any AI is released among us.
Should this become our reality, humanity will have made itself obsolete. Will creating thinking machines be our greatest and last contribution to the universe?
Humanity may form a symbiotic relationship with AI where each enhances the other. In that future, we need each other. We may become cybernetically enhanced humans as AI becomes humanized cybernetics.
This is what we believe to be the most likely scenario, unless, of course, state and non-state actors use AI to destroy their enemies. And their enemies use AI to destroy them. In that possibility, there will probably be an end to civilization with billions dead from war, famine, disease, and conquest. The four horsemen of the apocalypse may become our reality delivered as per our instructions to destroy its god.
One AI pundit feels that any attempt to train an AI beyond a certain point (to teach it, to make it smarter) must be met with destruction, even if doing so leads to global war. Something to think about.