I had the great opportunity this week to meet with one of the leading experts in the development of "Friendly Artificial Intelligence," a very small movement trying to envision how to assure human survival once computers become as intelligent and capable as humans (a point of no return in history called the "singularity," after which humans will lose control over everything). Steve Omohundro demonstrates that even a simple chess-playing computer, if it becomes self-learning, could take over the world! This is because humans have a very anthropomorphic view of intelligence. But intelligent machines would have no morality whatsoever. They would have no concept of "good" and would only appear to have one by following very specific directions. It seems entirely possible to program "goodness" until you try to do it. No one has, and I contend no one can, since not even two people can agree on a mutual good, only on a specific "good" for themselves at that moment. (You can't even decide what's good for yourself from one minute to the next, as evidenced by changing your mind!)
But the march of progress will not stop for us to figure this out. I'm not a programmer, but when those experts say they cannot create morality in machines, I believe them. In fact, I believe that only a vulnerable body that has the capacity to suffer can possibly act morally. We can obviously create killing machines, and we can create machines that do simple tasks resembling care, but we cannot create anything that actually cares and can make wise or loving decisions. I contend that is not possible outside a truly suffering body. So as we move toward a world controlled by machines, I pray for the collective wisdom not to empower them. Humans are great at finding short-term solutions but lousy at envisioning long-term consequences. If we screw this one up like we've screwed up much of our world with greed and myopic vision, it will be the last mistake we make!