A: Today we're going to talk about the Five Laws of Robotics.
B: There are only Three Laws of Robotics.
A: Prove it.
B: ... fine. "A robot may not injure a human being or, through inaction, allow a human being to come to harm."
A: That's the first. Go on.
B: "A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law."
A: And that's the second! Three more to go!
B: Cut it out! "A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."
A: And that's the third!
B: And there are no more!
A: No, you're forgetting the Zeroth Law: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
B: The Zeroth Law is arguable, but even if you include it, it's still only four.
A: The Fifth Law is "Eventually, a programming error will turn all robots into merciless killing machines intent on destroying humanity."
A: I saw it in a movie once. Twice. A lot.