Tuesday, February 3, 2009

Simple.

I can answer this question very easily:
For now, robots are seen as merely a tool that humans use, morally no different (except in financial value) from a hammer or a rifle; their only value is instrumental, as a means to our ends. But as robots begin to assume aspects of human decision-making capabilities, the question may arise of their intrinsic value: do they deserve moral consideration of their own (beyond their financial or tactical value), and at what point in their evolution will they achieve this intrinsic value (as human lives seem to have)?
As soon as a robot reaches the point where we are considering whether that robot possesses any moral, ethical, or legal rights unto itself...

...that robot should be destroyed.

Because that is the robot that will enslave us all.
