I am fascinated and not a little alarmed by the debate on caring robots. Yes, we will soon face a shortage of caretakers, especially for the elderly, and it would be helpful to call a Nannybot every so often. But I keep returning to one gnawing question that haunts the whole idea: how do you program a machine to "care"? I can understand how a machine can appear to "want" something, favoring a certain outcome over another. But to talk about a machine "caring" ignores a crucial point about life: as clever as intelligence is, it cannot create care. We tend to love our own kid more than someone else's, so you could program a machine to prefer another machine in which it recognizes a piece of its own code (a toy sketch of this appears below). That may LOOK like care, but it's really just an outcome. How could you replicate, for example, the love a parent shows for a kid they didn't produce? What if that kid were humanity? So too with the idea of programming a machine to pursue not what we want but what we would really want, a problem known as Coherent Extrapolated Volition. Sure, you can keep refining the resolution of an outcome, but I don't see how any mechanism can actually care about anything but an outcome.
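
To make the point concrete, here is a toy sketch in Python. Every name in it is hypothetical, invented for illustration, not anyone's actual design: an agent that "prefers" whichever candidate shares more of its own code. It looks like kin care, but the machine is only ranking numbers.

# Toy sketch of the point above (all names hypothetical): a machine's
# "preference" for kin is nothing but a score over outcomes. The agent
# favors whichever candidate shares more of its own "code", yet nothing
# here cares about anything; it only maximizes a number.

def shared_fraction(my_code: set, other_code: set) -> float:
    """Fraction of my code fragments that the other agent also carries."""
    return len(my_code & other_code) / len(my_code) if my_code else 0.0

def prefer(my_code: set, candidates: dict) -> str:
    """Pick the candidate with the highest kinship score --
    an outcome being optimized, not an act of care."""
    return max(candidates, key=lambda name: shared_fraction(my_code, candidates[name]))

my_code = {"a3f", "9b2", "c41", "77d"}
candidates = {
    "offspring": {"a3f", "9b2", "c41", "e02"},  # shares 3 of 4 fragments
    "stranger":  {"f10", "e02", "b77", "d9c"},  # shares none
}
print(prefer(my_code, candidates))  # -> "offspring"

The output is exactly what a kin-loving parent would choose, which is the trap: the behavior is indistinguishable from care while being nothing but a comparison of scores.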
While "want" and "prefer" may be useful terms, terms such as "care", "desire", and "value" constitute an enormous and dangerous anthropomorphizing. We cannot imagine outside our own frame, and this is one place where that gets us into real trouble. Even whole brain emulation (where scientists replicate the mechanics of the brain) assumes both that our thoughts are nothing but code and that a brain with or without a body is the same thing. Once someone writes computer code that can recognize something truly metaphysical, I will be convinced that a caring machine might be possible.