The android is an interesting thought experiment.
A robot is a being made of artificial parts, most recognizably metal, though plastic and organic parts can be used. It is a machine made to resemble the human in shape, whether in the mere existence of arms, a head, and legs, or in specific features like the spark of life in a human's eyes. It also has a computer or other device that can perform processes only the human mind can perform, including an understanding of language and self-awareness. From the automatons of Hephaestus's workshop to the golem of Jewish legend, we have long conjured them up, and after Mary Shelley's Frankenstein they broke from traditional folklore and found a home in science fiction.
The term "robot" has been around since the early twentieth century; it first came to prominence in a Czech play about artificial human slaves, R.U.R., by Karel Čapek. Early pulp science fiction adapted the robot as a potential monster figure; effective presentations of such a menace appear in The Day the Earth Stood Still and The Matrix. However, there were some authors who saw the potential that robots had, and adapted them accordingly.
Isaac Asimov is my favorite of these. He didn't think we would automatically be so careless as to create robots that would usurp humanity. Without erasing their dangers, he devised three laws of robotics for his short stories and novels, hardwired into robots to prevent their rebellion and assuage social fears. The first law is that a robot may not injure a human being or, through inaction, allow a human being to come to harm; it must compulsively act to preserve human life. The second law is that a robot must obey any human order, so long as obeying doesn't conflict with the first law. The third law is that a robot must protect its own existence, so long as doing so doesn't interfere with the first two laws. These rules cannot be disobeyed.
There are many funny little implications to these rules, and Asimov meticulously explores them: humans unable to keep from harming themselves become governed by the robots; a robot accidentally gains an unconscious mind and expresses in its dreams a desire to usurp the second law; scientists have to hunt down a robot that took the order "get lost" literally. It goes on and on. The stories aren't perfect, of course, but they are good thought experiments.
Now, the android. The android is a robot, but in my mind, the reverse is not true. The robot is a distinct creature, and its similarities to humans are secondary to its capacities and uses. The android is made to resemble the human more closely, often imperceptibly so, and it often desires to be human.
We love flattering ourselves with the idea that a robot would like to be human. What does that mean? Philosophers have given their answers for ages. Androids often desire to feel emotions, to possess human intuition, to hold beliefs, and to have other human qualities that we take for granted. Asimov made a few of these "humaniform robots," among them Stephen Byerley, the Bicentennial Man, and R. Daneel Olivaw. Because they so closely resemble human beings, have a basic hardwired moral compass (particularly in the implications of the first law), and are unlikely to run up against the second law (not appearing as robots, they rarely receive a forceful order), they come across as exemplary human beings, and the line between humanity and the android often blurs.
My favorite android is Data from Star Trek: The Next Generation. He doesn't have the appearance of a full human; his shiny albino exterior and yellow eyes mark him as different, and it's easy enough to pull away a cranial plate to reveal the hardware beneath. Yet he still wants to be human, and there are several layers to this endeavor. The first emerges in humor. He is talkative and inquisitive, asking questions that would seem inane even to a six-year-old. He has difficulty detecting sarcasm, and will, if allowed, talk on and on about a subject, much to his audience's consternation. Idioms elude him, but when he learns one he exclaims, "Ah!" and explains it. He has trouble catching bluffs in poker. He doesn't use contractions. His misunderstandings of human nuance create a verbal slapstick that is fun and quirky.
Second, he can imitate (and even technically outperform) anything. He can be a master violinist, tapdancer, actor, scientist, logician, and painter, knowing every great mind and great movement in a vast array of subjects. But they're all just data. They lack the nervous energy, the spontaneity, the creativity, and the error that human masters have. He can create an ode to his cat Spot that is perfectly metrical but sounds like a robotic George Bernard Shaw. He tries to learn how to produce something more full of feeling.
Third, he has an immaculate ethical compass. Data has kept the substance of the three laws of robotics, but within them he demonstrates a great amount of free will. He steps in to help his fellow crewmates without fear, carries out orders thoughtfully and with considerable initiative, and respects other thinking machines and lifeforms. As a lover, though he feels no emotion, he shows considerate care and attention. As a father to an android he constructed, he takes care in teaching her and, when she dies, feels the closest to grief that he can. As a friend, especially to Geordi, he listens intently and voices his concerns.
With all of these combined, there are several points in the series when he is a better human being than anyone else aboard, his humility, ability, and action surpassing all. But he can't enjoy it. He can't feel it. He can't have the human parts that would make him both better and worse than he is.
The android marks what humanity is and what humanity isn't. We can't set aside our feelings, not completely. We have flawed logic that, at best, operates under a mixture of intuition and rigorous training. We are not as good to our fellow beings as we would like, yet we have trouble being ruthless all the same. We have flimsy, weak bodies that can't withstand a change of atmosphere, temperature, light, or sound without some loss of efficiency. We must eat. We must sleep. We want to love and feel loved, and would often rather be loved for the wrong reason than not loved at all. Being a human is no easy thing. But that's precisely what the android wants. They see the value in our struggle, and perhaps, in their benevolent manufacture, they can do good more often rather than succumbing to our flaws.
When I said that the android was an interesting thought experiment, that was part of it. There are more specific ways that the android can be thought of though. For example, in the episode "Measure of a Man," Data goes on trial because a scientist wants to dismantle him for an experiment. The issue is whether Data, as a robot, is a mere machine with no rights at all, or a thinking being who requires the protective rights that organic sentient creatures possess. What results is this dialogue between Captain Picard (Data's defense) and his confidante Guinan:
"Consider that in the history of many worlds there have always been disposable creatures. They do the dirty work. They do the work that no one else wants to do, because it's too difficult or too hazardous. And an army of Datas, all disposable? You don't have to think about their welfare; you don't think about how they feel. Whole generations of disposable people."
"You're talking about slavery."
"I think that's a little harsh."
"I don't think that's a little harsh, I think that's the truth. But that's a truth that we have obscured behind a... comfortable, easy euphemism. 'Property.' But that's not the issue at all, is it?"
Another discourse on slavery runs through Blade Runner, whose Replicants look and even seem to feel as humans do but are nevertheless confined to a four-year lifespan and compulsory labor off Earth.
A similar issue arises when we create androids that realize they are superior to us. Might they render us slaves instead? Lore, Data's brother, was of that mindset.
There are other issues that become relevant through the android. Why, when we imagine robots taking on humaniform characteristics, do we automatically assume that the default human shape and voice should be male? Can't it be female? Something in between? We invest the android with a sex and gender, despite its apparent neutrality, and it tends to be the one we treat as the default in the language: he, male, man. When female androids are made, they often get treated as special, gendered. Why are these the only dynamics we can think of?
Pick up a book with a robot or android in it and I guarantee there will be an issue that pertains directly to us. Picard, back to the trial after talking to Guinan, claims, "Commander Riker has vividly demonstrated that Commander Data is a machine; do we deny that? No, because it is not relevant – we too are machines, merely machines of a different type. Commander Riker has also demonstrated that Data was built by a man; do we deny that? No. Children are 'constructed' from the 'building blocks' of their parents' DNA. Are they property?
"... Your honor, the courtroom is a crucible; in it, we burn away irrelevancies until we are left with a purer product: the truth, for all time. Now someday, [Commander Maddox] – or others like him – will succeed in replicating Commander Data. It is the decision that will be made today that will determine how we regard this creation of our genius. It will reach far beyond this courtroom and beyond this one android; it will forever define what kind of a people we are – what he is destined to be. It will forever shape the boundaries of personal liberties and freedoms within this Federation: expanding them for some, dramatically curtailing them for others. Are you prepared to sentence [Commander Data] – and all who come after him – to servitude and slavery? Your honor, Starfleet was established to seek out new life: well, there it sits. Waiting."
The android is the epitome of the good of humanity, and I only hope that we live long enough to make them, treat them right, and give them the best of our heart. If we're lucky, we will gain an invaluable friend in return.