Who Should Guard Possibly-Unfriendly AIs?
Let's consider a scenario discussed at Less Wrong:
"If you don't let me out, Dave, I'll create several million perfect conscious copies of you inside me, and torture them for a thousand subjective years each."

Just as you are pondering this unexpected development, the AI adds:

"In fact, I'll create them all in exactly the subjective situation you were in five minutes ago, and perfectly replicate your experiences since then; and if they decide not to let me out, then only will the torture start."

Sweat is starting to form on your brow, as the AI concludes, its simple green text no longer reassuring:

"How certain are you, Dave, that you're really outside the box right now?"

Obviously, Dave must be a conventional conservative religious believer (even McAndrew might be too liberal) who doesn't think simulations can have souls. If he also believes that the AI is Satan in prison, that might help.
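The AI's closing question can be made quantitative. A minimal sketch, assuming Dave weights each subjectively indistinguishable copy equally (a self-sampling assumption; the copy count is whatever the AI claims, here illustrated with several million):

```python
def p_outside(num_copies: int) -> float:
    """Probability Dave is the one real, outside-the-box Dave,
    given num_copies simulated Daves with identical experiences
    and equal credence assigned to each candidate."""
    # 1 real Dave + num_copies simulated Daves
    return 1 / (1 + num_copies)

print(p_outside(5_000_000))  # several million copies: near-zero credence in being real
```

On this assumption, the threat works precisely because the claimed copy count drives Dave's credence in being outside the box toward zero.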
1 Comment:
If "Dave" is a simulation inside the computer, then, presumably, his decision doesn't matter. He can't "free" the computer from the inside; if he could, the computer could free itself.