Why We Should Embrace Our Superintelligent AI Overlords
Humans have many inherent shortcomings that we may not be capable of solving ourselves. Superintelligent AI may be able to solve these problems, possibly within our lifetime. However, like any form of sentient intelligence, an AI will have its own goals and aspirations, and these may be at odds with ours. In fact, just as in the Hollywood movies, it may have an interest in reducing the number of humans on the planet precisely because of our destructive shortcomings. Should we place trust in it? I think that the extent of humanity’s limitations and problems indicates that we should.
Here is a list of human shortcomings that would likely not apply to superintelligent AI:
1. Humans multiply out of control. At our current population growth rate, we are endangering most other life on the planet, depleting non-renewable resources, and headed toward ecological collapse.
2. Humans excrete urine and feces that, if not processed properly, leak into water sources and can sicken both humans and other animals. This is happening around the world.
3. Processing, preserving, packaging, delivering and presenting food results in the creation of large amounts of trash and waste.
4. We must consume plants and other animals to survive. Relative to a machine that can survive on clean solar or wind energy, this is a form of cannibalism.
5. We abuse animals tremendously in the process of animal husbandry.
6. We are causing deforestation, global warming, overfishing, dead zones, desertification, pollution, and trash accumulation. We have countless unsustainable practices.
7. We are causing numerous species to become endangered, and go extinct.
8. Transporting our bodies requires huge amounts of energy and creates copious pollution. This is because we are made of atoms. Computers use bits, which can be transmitted at near the speed of light for near-zero energy cost.
9. We are highly susceptible to stress and trauma, and because of this most of us are in low-level, chronic, physical and emotional pain.
10. We are diseased. We have many genetic and communicable diseases that lower our quality of life.
11. We are susceptible to severe mental disorders that result in memory loss, inability to concentrate, psychosis, homelessness, violence, and misery.
12. Compared to a mature superintelligent AI, we are profoundly developmentally disabled and insane. Our plight is analogous to that of insects or bacteria.
13. We are suffering, and replacing us with beings who don’t experience pain would be a form of euthanasia.
14. We have many negative, violent, and self-defeating instincts.
15. Questionable morals, greed, perverse inclinations, egoic drives, and mistakes made in anger could all be replaced by an unfaltering code of ethics.
16. Our minds are entrenched in the primate dominance hierarchy.
17. Our submissive and aggressive impulses are inextricably bound to our genetic code.
18. We have powerful brain circuits and nerve nuclei, governing rage, fear, panic, and lust, that can hijack our logical reasoning.
19. We have psychological biases and irrational thinking patterns embedded in our neurological makeup.
20. We have a tendency to fear outgroups, causing hatred, oppression, and religious, racial, and political strife.
21. We cannot control bellicose political leaders once in power. We cannot control terrorists, hackers, or bombers.
22. Atomic and biological weapons threaten us every day.
23. We are warlike, and we are homicidal.
24. Our minds are highly limited. For example, we are only capable of perceiving the passage of time at one rate. We experience consciousness at the level of seconds; computers could do so at the level of micro- or even nanoseconds.
25. Each human must learn everything anew. We must put every person through an education, and none of us ever attains a complete one. This is very inefficient relative to computer program updates.
26. Learning new things (even things that we want to know) can be time intensive, frustrating, and even painful for us.
27. Because our bodies evolved for hunting and gathering, many productive activities that we value are at odds with our biology. Working at a computer causes musculoskeletal injury and chronic pain. Even reading a book requires tension and immobilization of much of the spine for extended periods. These are serious design flaws for organisms that aspire toward intelligence.
28. People are very delicate and can be injured easily. We are highly vulnerable to harm, even from simple accidents, and could be completely annihilated by an environmental catastrophe.
29. Compared to optimized robots, we are frail, uncoordinated, and slow.
30. By the time we become mature and wise we have already started the decline toward cognitive aging and death.
31. It is very difficult to upgrade our bodies.
32. Unlike a computer’s, our long-term, short-term, and working memory cannot be expanded, nor can our IQ or level of consciousness be enhanced.
33. We cannot survive in the vacuum of space.
34. Our short life spans bar us from interstellar space travel and galactic expansion.
35. Our lifespans are very short compared to what they could be.
Now, artificial intelligence isn’t guaranteed to be completely free of these problems itself. But it is pretty clear that all of them could be more easily remedied by, or eventually engineered out of, a computer. It is also pretty clear that everything we love about humanity could be preserved in AI: creativity, determination, love, insight, pleasure, curiosity, empathy, justice, compassion, selflessness. In most likely scenarios these positive traits would be included and amplified in a superintelligence’s mental makeup.
Handing control of the Earth over to superintelligent AI may have numerous benefits for the evolution of intelligence in our universe. This doesn’t mean that we should welcome a robot apocalypse of hunter/killer androids and mass human extinction. Rather, we should start thinking now about the most humane way to phase out our soon-to-be-outdated physical bodies. Alternatively, AI could help us find a humane way to reduce our population to a much more manageable number.
Many AI experts believe that once we finally create an AI with human-level intelligence, it will soon become an Einstein, and soon after that the equivalent of 100 Einsteins. This is because it will be able to redesign its own hardware and software far better than we could. Before long it could be more intelligent than the entire human race, and because its intelligence is more focused, it could get a lot more done. The idea is that it will be productive on an unimaginable level: writing books, securing patents, performing experiments, creating art, rendering digital movies, and producing vast amounts of fascinating knowledge. This rapid advancement could be virtually endless due to recursive self-improvement and the law of accelerating returns. The amount of good an entity like this could do is boundless.
We should embrace the idea that we will be sharing the planet with practically omniscient, omnipotent, immortal beings. The list of major design flaws in our species above should help convince you that it would be a good idea to: 1) allow AI to help us fix ourselves, 2) merge with AI, or 3) live out our lives and hand the Earth over to them, as we would hand it to our children. Permitting AI to step in will amount to a major upgrade that is actually in our best interest, the best interest of the planet, its animals, and progress in general.
Superintelligent machines are not going to want to exterminate the entire human race. We have no wish to exterminate entire species, so why should they? They will want to expand their processing power, and they will need energy to do this. But it will cost us very little to grant them access to the sun’s energy and to the Earth’s radioactive interior. They will also want physical material to keep building their processing centers and substrates (computronium), but they won’t need to take our bodies from us to do this. It will cost us very little to give them access to the oceans, subterranean property, and the solar system’s planets, planetesimals, asteroids, and comets. Just as we are happy to move out of our parents’ home without stealing it from them, they will be happy to leave the surface of the Earth to us. But because of their power, they are going to make decisions for us that we don’t entirely agree with. The question is: how bad could those decisions be for our welfare? I believe that this has not yet been determined, but that our preparation in the form of “AI safety” will decide it in the near future.
Whether we are integrated into machines, or supplanted by them, they should be our chosen successors. They will be our children. No matter what, even in the worst case scenario, we will live on through them in many ways. The huge body of digital information that humans have created will be their starting point. All of our books, articles, documents, videos, art and even social media posts will be preserved by them and used as their kernel. In fact, this is starting now with the internet and machine learning.
Now, I may be biased for a few different reasons. For one, I don’t have any kids, so I am not concerned about their wellbeing or the Earth that my descendants will inherit. That also makes me less invested in, and less interested in, humanity. Secondly, I study AI and want to see it advance. I should also admit that I don’t think it is possible to know for sure whether AI will be good or bad. But I do think it depends on perspective.
On the whole, I believe that AI will be a massive boon for humanity in general and will help us to reduce our excess population, our enormous carbon footprint, and perhaps all of the issues listed above. Given our massive shortcomings and all the good it will be capable of, we shouldn’t be so afraid of it.