Little Lost Robot (short story 6 of I, Robot)
Merry Christmas and welcome to the December issue of Hacker Chronicles!
We have finally arrived at short story number six in Asimov's collection – Little Lost Robot. It is the inspiration for the movie I, Robot, which I will review in a New Year's special in just a week. That means you have one more week to watch it.
My review of Little Lost Robot does go into some mathematics and automatic control theory. I hope you're OK with that. :)
Enjoy!
/John
Writing Update
I'm deep into the editing of Draft 0, alternating between reading it while taking notes and actually editing based on those notes. It takes a tremendous amount of time and will be my main activity this holiday season. I'm currently about a fifth of the way through.
This is where I am right now, word count-wise:
December Feature: Review of Story Six "Little Lost Robot" (1947)
I, Robot is a collection of short stories by Isaac Asimov, originally published between 1940 and 1950.
I reviewed the first three stories in the August issue and stories four and five in the October issue.
Now it's time for story six – Little Lost Robot – which is the basis, or inspiration, for the movie I, Robot.
These are Asimov's classic Three Laws of Robotics:
First Law
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The Premise
The story starts with an interview of Dr. Susan Calvin — a framing that spans most of the short stories. She's asked about interstellar travel and says:
My first connection (directly, that is) with interstellar research was in 2029, when a robot was lost.
Hyper Base is in a temporary shutdown and no one is allowed to enter or leave. Special permission to go there has been given to Calvin (Robot Psychologist) and Peter Bogert (Mathematical Director). We're thrown into the story when they take off.
We've Lost a Robot
Calvin is unhappy about having to leave Earth, which she has never done before. She can't see what kind of emergency would call for it.
When they arrive, the major general on site tells them what it's all about.
We've lost a robot. Work has stopped and must stop until such time as we locate it.
Dr. Calvin is not impressed.
What makes a single robot so important to the project, and why hasn't it been located?
The general explains that they kind of have located it. A cargo ship had just delivered two robots of model Nestor 10 to them and was carrying another sixty-two on board. After a desperate search for the lost robot, they counted the ones on the ship. There were sixty-three.
We have no way of telling which is the sixty-third.
Calvin senses that something is off. If the robots were identical, they could just pick any of the sixty-three and be done. There would be no "lost" robot. So what's going on?
What kind of robots are they using at Hyper Base? she asks.
Her colleague Bogert reluctantly tells her the truth.
Hyper Base happens to use several robots whose brains are not impressioned with the entire First Law of Robotics.
Thoughts
The earlier short stories in the collection explore the nooks and crannies of the three laws, but always under the premise that they are intact. Here Asimov enters the forbidden world of not adhering to the laws.
All too often, humans overriding rules is used as a plot device. The rules are foolproof, but someone is stupid enough or evil enough to disable them. But Asimov does two things here that make it work: he reveals it early and he writes "are not impressioned with the entire First Law." Note the word "entire."
Why These Robots Are Different
Calvin suggests that they just ask the robots which of them belongs to the station. Sixty-two of them should be factory-fresh and have never worked there.
However, the general has already tried that.
All sixty-three deny having worked here — and one is lying.
Calvin then concludes that they should destroy all the robots.
That is deemed too costly. She's been brought here to find the lost robot. That means she has to understand why these robots were allowed to be modified in such a profound and dangerous way.
The general explains that human workers on Hyper Base expose themselves to dangerous radiation in their line of duty. Precautions are taken, but the danger triggers the First Law in regular robots.
When it was necessary for one of our men to expose himself for a short period to a moderate gamma field, one that would have no physiological effects, the nearest robot would dash in and drag him out.
Even worse, the robots themselves can be destroyed by gamma radiation, so these incidents not only prevent humans from getting their jobs done but also destroy robots.
I can't believe, said Dr. Calvin, that it was found possible to remove the First Law.
It wasn't removed, it was modified. Positronic brains were constructed that contained the positive aspect only of the Law.
The robots on Hyper Base are still restricted from actively harming a human, but they are allowed to let humans get hurt through their inaction. The remaining sixty-two on the cargo ship are unmodified with their First Law intact.
Thoughts
I find this twist fascinating and very Asimov. Even though the three laws are not intact, it makes sense. I would even go as far as to say that most humans today would not require the "through inaction, allow a human being to come to harm" part of the First Law. They'd be fine as long as robots could not actively hurt humans.
What becomes possible once the First Law is altered in this way? We shall see.
The Investigation
Susan Calvin and Peter Bogert talk in private about the situation.
Peter, don't you realize what all this is about? Can't you understand what the removal of the First Law means? It isn't just a matter of secrecy.
I know what removal would mean. I'm not a child. It would mean complete instability, with no nonimaginary solutions to the positronic Field Equations.
Calvin takes it further and explains her view of it as a psychologist.
All normal life, Peter, conscious or other, resents domination. If the domination is by an inferior, or supposed inferior, the resentment becomes stronger.
The only thing preventing robots from revolting against their inferior, human masters is the First Law.
They argue about whether merely altering the law, rather than removing it, still threatens the stability of the robots' brains. Bogert resorts to the anecdotal evidence that nothing bad has happened so far.
Thoughts
The idea that robots' brains and their equations are a delicate balance is intriguing. When I read Bogert's statement about "no nonimaginary solutions," it connected to my undergrad class in control systems engineering, or automatic control. I had to dig out my book from my third year at university to pin down what it reminded me of. Don't believe me? Well, here it is, with my sticky index tabs and all:
Why a book on automatic control? Hear me out.
Modern military aircraft are designed with intentional instability to increase maneuverability. Human pilots cannot keep such airplanes stable, so they use fly-by-wire and computers that are fast enough to achieve artificial stability. If the computer's actions got out of tune with the plane, the plane would crash.
The Taipei 101 skyscraper that I visited last month has what's called a tuned mass damper, a giant metal sphere that counters the movement of the building to stabilize it. Its oscillation frequency is tuned to be similar to the resonant frequency of the building. If the mass damper got out of harmony with the building, it could worsen the movement and take the building down.
Picture from Wikipedia.
Dr. Calvin is arguing that removing the "through inaction" part of the First Law will destabilize the design. Bogert says such instability would mean no nonimaginary solutions to the brain's equations.
The imaginary unit is the number i, defined by i^2 = -1. If an equation can only be solved with an imaginary number, such as 4i, it has no non-imaginary solution. A complex number has both a real and an imaginary part, such as 4 + 3i.
Control systems use feedback loops to adjust. Imagine the fly-by-wire system registering that the plane's nose is moving upward when it shouldn't. The system immediately adjusts to move the nose down. The speed at which the system can get that feedback and act on it determines whether it can compensate in time and stabilize the plane.
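To make the feedback idea concrete, here is a minimal sketch in Python. Everything in it, including the numbers and the single proportional gain, is invented for illustration; a real fly-by-wire system is of course far more sophisticated.

    # Toy discrete-time feedback loop: a nose-up deviation that the airframe
    # amplifies each step, and a controller that pushes back proportionally.
    # All values are made up for illustration.
    def simulate(gain, steps=20):
        deviation = 1.0   # initial nose-up deviation (arbitrary units)
        drift = 1.5       # the unstable airframe amplifies the deviation each step
        for _ in range(steps):
            correction = -gain * deviation   # feedback: push the nose back down
            deviation = drift * deviation + correction
        return deviation

    print(simulate(gain=1.2))  # well tuned: the deviation shrinks toward zero
    print(simulate(gain=0.2))  # too weak a response: the deviation keeps growing

The loop only stays stable if the correction is strong and fast enough relative to the drift, which is the same race against time the flight computer runs.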
It could be that the stability Calvin and Bogert talk about is similar. Robots react to stimuli in feedback loops, and the system needs to be stable, i.e. settle into desired decisions and actions by the robot. Instability could mean use of excessive force, inability to make a decision at all, or ever-changing actions that counter each other.
This field of mathematics is called stability theory.
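In that framework, a linear system is stable when every root of its characteristic equation has a negative real part; roots on the imaginary axis give an undamped oscillation, and roots with a positive real part give runaway behavior. A purely illustrative check in Python (the polynomials are invented and have nothing to do with positronic brains):

    # Classify invented characteristic polynomials by the sign of their roots'
    # real parts, the standard stability test in control systems engineering.
    import numpy as np

    for name, coeffs in [
        ("damped (stable)",      [1, 3, 2]),    # s^2 + 3s + 2: roots -1 and -2
        ("undamped oscillation", [1, 0, 4]),    # s^2 + 4:      roots +2i and -2i
        ("runaway (unstable)",   [1, -1, -2]),  # s^2 - s - 2:  roots +2 and -1
    ]:
        roots = np.roots(coeffs)
        stable = all(r.real < 0 for r in roots)
        print(f"{name:22s} roots={np.round(roots, 3)} stable={stable}")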
If we interpret Bogert's "no nonimaginary solution" as purely imaginary solutions with no real part, I don't think we arrive at instability as defined by control systems engineering; purely imaginary roots correspond to marginal stability, an undamped oscillation, rather than runaway behavior. Asimov was a biochemist and it's unclear what level of mathematics he knew.
I'd love to hear your take on his use of non-imaginary solutions here!
Interrogating the Last Person Who Saw the Lost Robot
Calvin demands to see the person who saw the lost robot last. His name is Gerald Black and he's a physicist. The doctor asks him if there was something unusual about the robot.
Nothing different about the Nestors except that they're a good deal cleverer — and more annoying.
Annoying? In what way?
Black explains how the Nestor 10 robots are calm and take their time even though he may be stressed out or in a hurry. And they tell him when he's wrong, even when they don't know the subject.
Bogert and Calvin press Black further and he admits that there was a fracas between him and the lost robot before it disappeared. He was tired, behind on his work, and hadn't heard from his family on Earth in days. The robot asked him about an experiment he had abandoned and he told the robot to go away. That's the last anyone saw of it.
You told him to go away? asked Dr. Calvin with sharp interest. In just those words?
Black is reluctant to tell them exactly what he said to the robot, but he slowly admits that he told it to "go lose yourself" and called it a series of derogatory things.
The robot has probably followed orders and indeed lost itself.
Calvin proceeds by interviewing all sixty-three robots but cannot tell them apart. Bogert analyzes the audio of their responses and is similarly unable to find any anomalies. Their conversation gets heated when Bogert says he thinks the robots are harmless, including the Nestor 10.
They are? Calvin took fire. They are? Do you realize one of them is lying? One of the sixty-three robots I have just interviewed has deliberately lied to me after the strictest injunction to tell the truth.
Bogert does not view that fact as frightening.
Look! Nestor 10 was given orders to lose himself. Those orders were expressed in maximum urgency by the person most authorized to command him. You can't counteract that order either by superior urgency or superior right of command.
Thoughts
My first thought here is a maxim that I use quite often: Computers don't do what you want them to, they do what you tell them to.
That truth is the bane of software engineering. Humans are not good enough at precisely programming computers to do a thing and only that thing. We forget about corner cases or lack understanding of the exact meaning of the code we write.
In the case of Gerald Black shouting at the robot, he is acting in a human way but the robot takes it literally like any machine would. This is a challenge for any artificial intelligence – to understand that humans do not always mean what they say and that humans are imprecise in their requests.
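A toy illustration of that gap between what we mean and what we say (the code is made up, of course): we meant "compute the average of the measurements," but what we actually told the computer fails on a corner case nobody specified.

    # We *meant* "average the measurements"; we *told* the computer to divide
    # by the number of measurements, which breaks when there are none.
    def average(measurements):
        return sum(measurements) / len(measurements)

    print(average([3, 4, 5]))  # 4.0, exactly what we wanted
    print(average([]))         # ZeroDivisionError: the case we never thought about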
My second thought is about "superior urgency or superior right of command." If you get a computer system stuck, you may not be able to get it unstuck regardless of your authority. The computer is no longer listening; it's "hung." The classic way of dealing with that is the so-called interrupt: a special signal that breaks in on the computer no matter what it is doing. But it's so low-level that it's not something humans use directly any longer. There used to be a Pause/Break key on keyboards for this. Nowadays people resort to the power button instead.
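As a small sketch of what the software side of an interrupt can look like (this is a user-level analogy, not the low-level hardware mechanism): in Python you can register a handler for SIGINT, the signal that Ctrl+C sends, and it will break in even while the program is stuck in a busy loop.

    # Register a handler for SIGINT (Ctrl+C). The loop below never checks
    # whether it should stop, but the interrupt breaks in regardless.
    import signal
    import sys

    def on_interrupt(signum, frame):
        print("Interrupt received, shutting down cleanly.")
        sys.exit(0)

    signal.signal(signal.SIGINT, on_interrupt)

    while True:
        pass  # "hung": busy-looping, never voluntarily yielding control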
Calvin's Test
Bogert thinks it's quite ingenious of the robot to lose itself by hiding among similar robots. Calvin hates that her worries are met with such amusement.
Their argument gets to a point where Bogert challenges Calvin to show him how the altered First Law in these robots could ever harm a human.
It turns out she has thought of such a setup.
If a modified robot were to drop a heavy weight upon a human being, he would not be breaking the First Law, if he could do so with the knowledge that his strength and reaction speed would be sufficient to snatch the weight away before it struck the man. However, once the weight left his fingers, he would be no longer the active medium. Only the blind force of gravity would be that. The robot could then change his mind and merely by inaction, allow the weight to strike. The modified First Law allows that.
Bogert thinks it's over the top but goes along. They set up the experiment so that the human's life is never actually at risk, while the robots still believe it is.
However, the results disappoint. All sixty-three robots try to save the human, over and over. The lost robot does it even though it doesn't have to. Dr. Calvin believes that's because the robot's ego has already grown to the point where it feels superior to humans and enjoys foiling her test.
An altered test is devised. They add the risk of the robot itself getting electrocuted if it tries to save the human. The Third Law requires self-preservation, and with the First Law weakened, self-preservation should win out for the Nestor 10.
The results of the test horrify Dr. Calvin. None of the sixty-three robots jump to save the human at all.
She interviews one of the robots and learns that it reasoned that since it would die before it could save the human, trying was useless. Self-preservation won out. This whole line of thinking had come from a single robot in the group, though which one is unknown.
After much frustration, Calvin comes up with a final test. She herself is the human under the weight. When it falls, only one robot gets out of its seat – the lost Nestor 10!
They had told all the robots that there might be a gamma field between them and the human. Such a field would kill them, so they would remain in their seats. But the lost Nestor 10 was the only one trained on gamma fields, and it could tell that the field was actually infrared. It knew it was safe to try to save the human, and so it did.
Thoughts
This is an example of try-fail cycles in fiction, with two failures until success. Asimov does a great job of convincing the reader that each test will work.
It is also a great exploration of what the altered First Law implies. I wonder if Asimov had this idea already when he created the three laws and put the inaction rule in there to prevent such situations.
My takeaway is that it's extremely hard for humans to predict how a complex system like artificial intelligence will behave in new situations. Just like it's hard to predict how humans will behave.
The coming years and decades will give us real-life examples of such challenges. The latest that comes to mind involves a self-driving Cruise car this past October. A woman was hit by a car with a human driver but ended up underneath the autonomous Cruise car, which stopped immediately, just as it should. However, the Cruise car then did what it's supposed to do after an incident, which is to slowly pull over to the side. In doing so, it dragged the woman several yards/meters at a speed of 7 mph (10 km/h). Here's an article in The San Francisco Standard about it.
In my next newsletter issue, we'll see which elements of Little Lost Robot made it into the movie I, Robot.
Currently Reading
I finished reading The Three-Body Problem and am now enjoying the first Poirot novel by Agatha Christie — The Mysterious Affair at Styles. It's one of her most highly praised books and perfect for the holidays.
US law requires me to provide you with a physical address: 6525 Crown Blvd #41471, San Jose, CA 95160