You can ask anyone involved in science or robotics if they know the “Three Laws of Robotics”, and they'll likely say yes. In fact, you could probably walk up to any person in the street and ask them what they are, and they'd know.
In case you don't, I'll give a summary. The Three Laws of Robotics were created by Isaac Asimov, the famous science-fiction author. He states that the rules must be followed in order for robots to work fluidly and well, and not in any way hinder humankind's chances of survival. They are widely known, and largely respected, throughout robotics engineering. The rules go as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Now, it does not take long to see that these rules are impossible to uphold. Perhaps robots, if designed this way, could carry out these orders; the fault, however, would lie not with the robots themselves but with their creators.
Among the main driving emotions in the human (or, more broadly, animal) psyche are anger and hatred. Anger and hatred are what create wars, which as a general rule cannot be stopped. Robotics has already made its way into the military and onto the battlefield. Both computer and robot AI are being developed for the sole purpose of helping to fight wars. And while the second and third laws may still be workable, the first and most important rule will likely never be followed.
But perhaps Isaac Asimov knew this. Perhaps he knew that the future of an Earth where machines, stable or unstable, fight wars is not only a possibility but a likelihood. Perhaps he made these rules in the hope that humankind would come to its senses and avert a course of action that could mean the destruction of us all.