AGI Can't Save Us from Ourselves
We will resist any real help that an artificial general intelligence might offer
“… i think AGI is probably necessary for humanity to survive—our problems seem too big to solve for us to survive without better tools … I cannot begin to imagine what we’ll be able to achieve with the help of AGI.” — Sam Altman, OpenAI
The current AI uproar comes with a dodgy assumption: that some future artificial general intelligence, a concept known as AGI to its friends and enemies alike, will have capabilities that are kind of supernatural. It could, they say, solve all our problems and lead to a tranquil, expansive era of abundance.
Some adherents get pretty nasty about it, as in these tweets:
“millions more will die this month to causes easily solved by AGI.”
“… load of pompous old shit … thinking every step will end humanity. This crap is why I hate AI doomers.”
“Bad-acting guruism”
“These doomers are so egotistical”
This whole business is unprecedented, unknown territory, and it has many people expressing heated, often uninformed opinions. But the discourse so far ignores some important points, things we already do know. Here is one of them: a fundamental reason why AGI won’t save us. I’ll address the other reasons in future posts.
Human Responsibility and Resistance
Human nature interferes with any AGI rescue of the human future. Nearly all our most important problems are either human-caused or in some sense human-sustained. Therefore, if an AGI or a bunch of AGIs came up with solutions, human opposition would keep us from using them. This is because (1) we can’t agree on what the most important problems are, and (2) we will always fight about any solutions.
What might be important problems? Some have been with us forever: death and aging, sickness, lack of food and shelter, war, and oppression. Then there are severe, possibly existential risks, including war (again), rapid climate change, plague, total ecosystem collapse, supervolcanoes, and giant asteroid collisions.
To err is human, but famously we also disagree — about nearly everything. Anything that you think ought to be done for our survival and well-being will be strongly opposed by some faction or another.
Try it out. Think of something that you believe ought to happen that hasn’t yet happened. Without a lot of effort you should be able to imagine, if not actually point to, groups who prevent the thing from happening. It doesn’t matter which side of the current cultural divide you are on, or if you are in the silent and dwindling middle. Any human issue has opposition.
Here are some familiar examples to give you a feel for this civilizational logjam.
Deregulating everything and everybody versus regulating anything that might go wrong.
Drill and burn versus conserve and preserve versus develop renewables.
Going to Mars versus doing anything else.
Harvesting trees for economic growth versus radically increasing world forest cover versus using fertile land to feed us.
Increasing crops technologically versus restoring soil health.
Reward innovation and drive versus lift up the oppressed first.
Extend human life versus don’t mess with nature versus relieve immediate suffering.
Solve fusion-generated power versus deploy renewable sources.
Deploy renewable sources versus don’t destroy ecosystems.
Genetically improve humans versus vilify and demonize anyone who proposes that.
Give aid to less developed groups versus no, that only inhibits their own development.
Preserve endangered species versus solve immediate resource demand versus support conservation by selling hunting permits.
Keep pets versus don’t raise livestock for pet food versus leash pets everywhere versus adopt homeless pets versus breed for improvement.
Preserve privacy versus use data for optimized resource distribution aka marketing.
Make it harder to vote versus make it easier.
Provide historical restitution versus spend the money on current needs.
Build robots versus create jobs.
Price ecosystem services in the market versus that’s idiotic, it’s like the brain taxing the circulatory, digestive, and immune systems.
More policing versus less.
More guns and mental health services versus de-stigmatize mental disorders versus gun control versus gun buyback.
Redress historical wrongs by force versus study war no more.
Judge the past by its own standards versus learn from it versus tear down the monuments versus the old ways were the best.
Honor thy father and thy mother versus just the father versus just the mother versus stop having children versus restore population growth.
This list could be extended indefinitely. We have a way of turning every issue into one that’s zero-sum or otherwise fractured into incompatibilities. And perhaps they “really” are zero-sum, given the ways society is structured, our cognitive biases, and our social loyalties.
We share one world that is richly interconnected by communication tech, by ecosystems, by air and water circulation, by transportation, by cultural memes, and by trade routes. Yet we can’t find broad practical agreement on anything, including consensus on technological solutions or social and political solutions. Whatever an AGI might propose, some significant proportion of us will oppose.
If the AGI countered by tricking or forcing us into using its solutions, there would still be fierce resistance from some human factions, possibly leading to war-like conflicts across the world. To prevent this, a particularly capable AGI might somehow implement the solutions along with some kind of tranquilizing effect to mute our agitation. This would mean that humanity loses control of its own destiny, and that is also perilous territory. If we are not needed to run things, then the most likely outcome is that we eventually disappear.
If we do somehow maintain control, the most certain uses of AGI will not be to solve our big problems. It will be used to extract maximum work for minimum pay; to make products that we can’t practically do without but must pay for indefinitely; to create wealth for a few while “externalizing” all resource costs and extraction harms onto other people and onto both local and planetary ecosystems; to maintain social disparity and power systems; and to suppress dissent. What Altman can’t imagine we’ll achieve is actually simple enough to predict: continuing business and government as usual.
To me, the point that I’ve made here would be obvious to anyone who has ever followed news media for a year, yet I have not seen it made. Future posts will counter two more arguments for AGI: that a rising tide lifts all boats, and that more intelligence will produce miraculous fixes without requiring human help and cooperation.
P.S. On a related note: if you are not sick of reading about AI doom elsewhere, I have an entertainingly realistic scenario about the risk of takeover by AIs with high persuasive ability. We are already using those machines’ direct ancestors: today’s limited language models.
P.P.S.: The resistance to using help from an AGI is one form of a classic economic concept known as a coordination problem. In 2006 futurist Nick Bostrom came up with the idea of a singleton. This is “a world order in which there is a single decision-making agency at the highest level.” He imagined that a superintelligence might become a singleton, whether we want it to or not. I’m not being overly meta when I say that I can hardly imagine the fury that would meet any suggestion that the world needs a singleton.
On “business as usual”: I see that The NY Times agrees with me: https://www.nytimes.com/2023/06/30/opinion/artificial-intelligence-danger.html