I was thinking of doing a bit of a writeup on the “Straw Vulcan” talk, but it looks like Greta Christina beat me to the punch. I hate to merely regurgitate what I have read on someone else’s blog, but I do have a bit to add from personal experience. I have always been a bit puzzled and irritated by depictions of reason and logic as cold, inhumane, oblivious to all human desire, and opposed to all emotion. Mr. Spock in the original Star Trek is a good example (though I think the character has improved with time), but I can think of a more recent example from the movies: a scene in I, Robot (2004, starring Will Smith).
The movie is not based exactly on Isaac Asimov’s book I, Robot, but it does borrow his famous “Three Laws of Robotics.” A robot in Asimov’s model must obey these laws because they are built into its positronic brain. If a robot somehow fails to obey one of them (for instance, by failing to prevent harm from coming to a human being), the resulting conflict in its brain can destroy the robot entirely. Most of Asimov’s stories center on robots put into situations where they face a dilemma in obeying the Three Laws.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
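The interesting thing about the laws is that they form a strict priority ordering: the First Law outranks the Second, which outranks the Third. As a purely illustrative toy sketch (every name and field below is invented for this post; nothing like it appears in Asimov’s stories or the film), you could model that ordering as a lexicographic comparison:

```python
# Toy sketch of the Three Laws as a strict priority ordering.
# All names and fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False        # direct First Law violation
    allows_human_harm: bool = False  # First Law violation by inaction
    obeys_order: bool = True         # Second Law
    preserves_self: bool = True      # Third Law

def violations(a: Action):
    # Tuple ordered by law precedence: First, Second, Third.
    return (
        a.harms_human or a.allows_human_harm,
        not a.obeys_order,
        not a.preserves_self,
    )

def choose(actions):
    # Lexicographic comparison: any First Law violation outweighs every
    # lower-law consideration, so a robot will disobey an order or even
    # destroy itself before letting a human come to harm.
    return min(actions, key=violations)

stand_by = Action("stand by", allows_human_harm=True)
intervene = Action("intervene", preserves_self=False)
print(choose([stand_by, intervene]).name)  # intervene
```

The point of the sketch is only the precedence: self-sacrifice (a Third Law violation) is always preferred over letting a human be harmed (a First Law violation), which is exactly the kind of trade-off Asimov’s dilemmas turn on.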
To make a long story short, the society in which Detective Spooner (Will Smith) lives manufactures and uses humanoid robots to run errands, do housework, and so on for their human owners. Spooner does not like the robots, and it turns out he is right to be a bit paranoid. The robots are all hooked up to a huge, super-smart, super-logical supercomputer called V.I.K.I. (an acronym for something, but I do not remember what), who, after much thought, concludes that humans are a danger to themselves and that the only way for “her” to obey the First Law of Robotics is to make all humans captives in their homes so that they cannot harm themselves or others. She claims that her logic is perfect, and no one ever challenges her on that front. (Sonny, by the way, is a robot in the story who was programmed to “evolve” by his maker and has developed human-like feelings and a sense of self. For more information, just watch the movie.)
V.I.K.I.: Do you not see the logic of my plan?
Sonny: Yes, but it just seems too heartless.
Movie quotes from http://www.imdb.com/title/tt0343818/quotes
This bothered me. V.I.K.I.’s logic is NOT perfect here, as it is clearly based on a two-dimensional misunderstanding of humanity. Even perfectly valid reasoning yields bad conclusions when it starts from false premises. Locking up humans against their will does them harm, yet no one thinks to explain that to V.I.K.I. Maybe if they had, she might have frozen up from the inability to obey the First Law. Her conclusions were way off, and therefore her logic was clearly not perfect.
But is this how our society views logic and reason? I should hope not.
Anyway, here is a link to Greta Christina’s blog post, and below I have also posted the video of the original talk at Skepticon. Enjoy 🙂