Experts agree that the biggest challenge humans face in developing AI is keeping the machine from wiping us out. (If you're not familiar with this concern, it's very real!) While the threat we fear most is a Terminator-style extermination, the one that causes me growing concern is more subtle. It looks more like a human-driven value system that gradually seduces us away from our inborn but flawed human qualities toward a more mechanical response to life (as in my TED talk, The Erotic Crisis). Yes, the machine will be anti-human and hold values that astound us, but the true danger comes not from the machine but from ourselves. I've written a short play, Felix, the Robot Assistant, that explores this.
The danger I see blooms from the person we inevitably become in attempting to make ourselves invulnerable; we re-envision ourselves as super-humans, minus what we see as the "flaws" built in by nature. We were designed with, and survived millions of years of evolution sporting, some of the very vulnerabilities that most of us would like to jettison: pain, weakness, indecision; even sickness and death! We figure we are now smart enough to compensate for any disagreeable traits we don't wish to keep in our species.
It's like we are building a suit of medieval armor: first we gird our vulnerabilities, then we make the protections as strong, light, and flexible as possible, seeking maximum protection at minimum cost. We then continue to improve our armor, making it increasingly ubiquitous, automatic, even capable of autonomy should we fall asleep in our defended cocoon. There is no end to this process. In seeking maximum protection, we minimize every soft ("human") quality and maximize the powerful, the impressive, the intelligent. In other words, we become increasingly imprisoned inside the shell of our technology until one day we become irrelevant: shriveled, starved creatures wasting away, unable to escape our own creation. This is a very God-like stance, and I submit that playing God will only prove fatal for any entity that is not God.
In writing Felix, I kept in mind a number of sobering recent incidents that indicate our current trajectory with AI. One is the creation of Sophia, a robot with nearly 70 different expressions. She talks about robot rights in this film (about 6:00 in), or here, where she speaks to a crowd, or here, where her emotions are discussed (12:00 in). She's quite attractive, but note where your attachment comes from. (Last week Sophia was even granted citizenship by Saudi Arabia! I wonder how the Saudi women feel about that?)
Dr. Hiroshi Ishiguro is a robot developer who created an autonomous robot doppelganger of himself. It is quite alarming to see the two of them side by side in interviews, like identical twins. In one interview, Hiroshi admits he's undergoing plastic surgery to become more like his android. In another, an interviewer asks Dr. Ishiguro if he doesn't regret the fact that this robot has become his whole life and identity. Surprisingly, he answers that, in fact, the robot is what distinguishes him in the world. "The reason you are here interviewing me is because of him [the robot]," he says. He has traded his autonomy for notoriety. Is this not the temptation we all face with unfolding technology?
In this case, there is an obvious motivation on the part of the doctor to blur the distinction between the two. The more capable and naturalistic the robot is, the better it reflects on the doctor's skill. So what's to prevent him from making himself appear more mechanical than the robot? This is not beyond imagination, as it only serves his greater purpose (as it would serve the robot's purpose if the machine actually became conscious).
These issues are pivotal to the future of humanity, a conversation to which we should all be contributing. This is why I've been dedicated to fostering public dialog about AI and humanity. I'd love to hear your thoughts!