Mar 13, 2023

AI Singularity Will Come in Stages

I've been fretting about the coming AI revolution for a decade now. It started when I realized that the biggest threat to the human body was going to be not climate change or political turmoil but the persistent human weakness for tech wizardry. In 2014 there were only six people in the world paid full-time to try to prevent AI from wiping out humans (according to philosopher Nick Bostrom). That year I gave my TED talk, The Erotic Crisis, about my fears. But then journalists finally started asking what I thought were the right questions: not "will AI kill us?" but "what effect will AI have on human flourishing?" So I felt I could stop obsessing about it and return to artmaking.

Self-portrait in my AI-generated studio
Now we're faced with the game-changing appearance of AI generators, which "create" brand new text or images from human prompts. The results are eerie and downright frightening (as when Bing's Sydney insisted that Kevin Roose loved it instead of his wife!). Will humans be obsolete now? Not yet, but it is looking more dire for us every day. (That's why I signed the Future of Life Institute's recent open letter calling for a pause in AI development.)

I've thought for years that there will not be one Singularity (the point when AI surpasses human capacities in all cognitive areas) but four consecutive ones, each more alarming and damaging than the last. This not only increases the threat but makes it harder to pinpoint. The singularity will be not so much a sudden uprising by the machines as a gradual loss of control by humans (think of the economy suddenly failing for no apparent reason).

The final singularity will be the objective one, when AI actually does overtake humans. I contend that no one will really know when that happens, because by then it will make little practical difference: it will have been preceded by three others that mark the end of human dominance on earth.

The first Singularity will be Economic, the point where AI so disrupts employment that vast sectors of the population are made obsolete, as the jobs that have always supported us are replaced with AI programs that do the same work for pennies, without breaks or any requirement other than power.

The second one will be Relational, when humans will no longer be able to tell whether they are dealing with a machine or a person. This is already happening in customer service and similar applications, where it doesn't much matter whether the voice on the other end is human as long as your problem is solved. But where the relationship itself matters most, such as between a political representative and her constituents, or an AI posing as an intimate, this will be catastrophic.

The third will be a Political one, when a large portion of the population reacts against the strangeness of a newly unfathomable world in which humans have lost control. They will blame whomever they hate most (immigrants, Democrats, techies) and, fueled by social media wildfires, launch a war against the perceived perpetrators, regardless of facts.

And of course the final singularity is the Loss of human agency, but by that point it will be forever unknowable, which hardly matters. Once civilization is controlled by an invisible hand, humans will simply not matter. "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else," says Eliezer Yudkowsky, quite chillingly.

AI is being developed without controls by competitors racing for an unbelievably huge prize, a recipe for certain destruction. Even if all parties know it's a race to doom, every one of them would rather be first than see the other guy win. This is fixed human nature, I'm afraid. Since the capitalist market is now our god, greed will be our downfall. In such an environment, AI will steadily grow in capacity while humans defend only the places where we can see our own weaknesses. AI will overtake humans not in the areas we imagine but in the places we never thought of, since it will operate in ways that never occurred to us! It won't be until after the takeover, if ever, that humans finally see where our weaknesses actually lie. More likely, we'll never know how we lost that battle. By then, of course, it will be too late.

I don't see any remedy other than a full stop, which Yudkowsky recommends. More skepticism and more regulation of AI will help slow the crisis. But like Narcissus, we might just die transfixed by our own reflection.

I hope we can all apply our humanity to this problem. We need us all. In the meantime, take great refuge in your relationships. Human relationships are what make life worth living. Store up your treasure there!

1 comment:

Tim Holmes said...

A friend read this and asked some great questions. With her permission, I include them here:

1. Will any humans leave Earth and establish an AI-free colony somewhere?
Yes, I think it inevitable that AI will supersede humans and colonize space. In fact humans will be left behind because we have so many pesky needs that are nearly impossible to address individually in space (clean water) and entirely impossible to address collectively (all our biological, physical, psychic, and social needs at 100%!). AI will be the perfect capitalist mechanism for converting off-earth assets first into wealth and then into the ultimate wealth, which is energy.

2. Will humans have any lasting value at all to AGI?
The BIG question. I hope we will be able to convince the AGI of our value. My best hope is that we rescue ourselves by providing either or both of these:
• Creative thinking. AGI will be super-capable but will always lack real creative thinking. That's not to say there couldn't be a simulation of creativity near enough to human level that humans become superfluous, but there is real hope here because the human perspective, rooted in biology, is inherently impossible to replicate in code.
• Purpose. It seems to me that no matter how capable a planet-hopping and star-harvesting AGI becomes, it cannot develop its own native purpose. It might be programmed to colonize everything, or to maximize energy or some such, but beyond pre-programmed self-survival, it cannot inherently invent an ontological purpose. This assumes, of course, that humans know what is relevant. It very well could be that there is a mineral level of conscious reality that AGI could uncover that we earthlings know nothing about. In that case AGI will become like the early lifeforms we know, just a touch above inanimate. After all, how do we know that minerals don't have their own kind of religion?

Tim Holmes Studio

Helena, MT, United States
My inspiration has migrated from traditional materials to working with the field of the psyche as if it were a theater. Many of my recent ideas and inspirations have to do with relationships and how we inhabit the earth and our unique slot in the story of evolution. I wish to use art– or whatever it is I do now– to move the evolution of humanity forward into an increasingly responsive, inclusive and sustainable culture. As globalization flattens peoples into capitalist monoculture I hope to use my art to celebrate historical cultural differences and imagine how we can co-create a rich future together.