Why have I not read Asimov before?! Overall, I found “I, Robot” to be fresh, well-written, truly enjoyable and am very happy my Great Books book club chose this selection.
Unlike some science fiction series, the world building didn't get in the way of the story. In fact, the stories were much more about ethics, psychology, and sociology.
One of the biggest surprises is that very little seemed dated. Certainly, a few moments or phrases felt dated, but what Asimov got right is humanity and its seeming unwillingness to change (despite thinking it's changing), even when change may bring about a better future.
Another surprise, which I only learned by reading other Goodreads reviews after finishing the novel, is that the book is actually a collection of short stories published individually, then later woven together with the Susan Calvin interviews. Frankly, the technique was so effective that it didn't even occur to me, but other folks at the meeting noticed it right away.
One of the most interesting discussions we had today was around whether robots and humans have free will in the book's world. If humans have created robots who can create a smoother future, but robots must factor in human error (intentional and unintentional), then how much room is there for free will?
Several other books await, but I plan to return to the next installment in the Robot series and then continue on to the Foundation series.
Below are the discussion questions that our leader wrote for today's meeting.
Discussion Questions for I, Robot by Isaac Asimov
I, Robot began as a series of short stories written by Asimov for publication in various science fiction magazines. He later assembled the stories into a novel which used the Chinese box technique of having a reporter interview Doctor Susan Calvin, a robopsychologist at U.S. Robots and Mechanical Men. The book marks the first appearance of Asimov's three laws of robotics, which are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The three laws have become generally accepted throughout the world of science fiction, as well as in the real world, as behavioral models for robots.
The book shares a few common elements with the movie I, Robot which appeared in 2004, starring Will Smith.
Questions:
1. Did you like the framing technique of the reporter interviewing Dr. Susan Calvin? Why or why not?
2. Which short story, if any, did you like best? Why?
3. Can the three laws be circumvented? How? Does Asimov demonstrate this in the novel?
4. How are the three laws circumvented in the movie I, Robot?
5. In later novels, Asimov explores the notion that robots can act to protect humans from themselves, a notion already present here in “Little Lost Robot.” How does this address the question of free will?
6. Some of the figures Asimov used for the cost of robots, space exploration, etc. seem vastly understated in the novel. Why do you think his costs for construction (for example, the NS-2 robots in “Little Lost Robot” cost $30,000 each) were so low? What do these costs say about the advance of inflation since the time the stories were written?
7. How does the U.S. Robots and Mechanical Men Corporation get around the First Law in “Little Lost Robot” and the movie I, Robot? Do you think this programming would work?
8. Do you think that Stephen Byerley is a man or a robot in the short story “Evidence”? What evidence do you have to support your conclusion?
9. In the movie I, Robot, Dr. Lanning is portrayed as a pleasant, kind individual. In the book, he comes across as a somewhat domineering, obsessive one. Which portrayal do you like better?
10. In the final story, “The Evitable Conflict,” Asimov postulates that the machines (i.e. robots) will someday make all decisions regarding large scale human interactions. Once again, this seems to invalidate free will, or does it? How does this story form the basis of such movies as the Terminator series, Mad Max, etc.? Asimov also uses this assumption as the premise for his Foundation series of novels. Does such a scenario seem believable? Why or why not?