My rating: 4 of 5 stars
Another excellent work from Isaac Asimov. This collection of short stories about robots offers both exciting answers to “What if?” and foreboding suggestions of the future. Granted, Asimov apparently did not consider them foreboding – but I will get to that in a moment.
First, what’s so enjoyable about this book: the science fiction. There are robopsychological problems, technological foibles, and very interesting questions posed in every story. It tickles the mind to read these and see whether you can come up with the answers before the characters do (only once or twice did I think I had a better answer, and even then I may have been wrong). This book really is a lot of fun.
But it has its drawbacks. First and foremost, the success of the Laws of Robotics, especially as applied to the Machines in the final story (“The Evitable Conflict”), depends on the ethical theories of Hume and Bentham. In short, utilitarianism becomes the defining principle of action under these laws. Since robots cannot harm humans (the First, primary, and irrevocable Law of Robotics), and since emotional harm counts as a form of harm (a point established in one of the middle stories of this collection), robots cannot cause emotional harm as a matter of first principles. And since “unhappiness” is, at least in Asimov’s usage, the most efficient term for “emotional harm,” the future that the robots (and the Machines) seek is one in which the greatest possible number of people are provided with the greatest possible happiness.
The other philosophical problem is the book’s embrace of material determinism. Because the universe came into being in a certain way (an origin unmentioned, but implied), societies developed in a certain way; and because those societies developed in that way, each moment is impelled by the sociological, psychological, and economic forces of the moment before it, so that humankind (if, perhaps, not individual humans) is carried unwittingly to the place it must inevitably go. The Machines in the final story, then, control these forces by making unilateral judgments unbeknownst to humankind; in this way, they shape the future toward this utilitarian utopia, whatever that end result may be.
All that said, while I cannot agree with either the premises or the conclusion, I cannot fault Asimov’s writing, since he certainly conveyed the message he intended. It should also be noted that the film (starring Will Smith) subverted this message: its central intelligence (the Brain at U.S. Robots, in that telling) developed the Zeroth Law (never mentioned by name in this collection, though present in content) and compelled humanity to obey its whims, thus harming humans, and even humanity, rather drastically. The Machines in the book, by contrast, wisely took the path of least resistance: long-term, subtle changes designed to harm neither humanity nor individual humans to any great degree. Because the book’s Zeroth Law was a natural extension of a utilitarian reading of the First Law, there was no question of “denying” the First Law to accommodate it; if any harm at all to any human could be avoided, it was. To be honest, I find that a more credible and more entertaining robotic evolution.