Space Robotics Group

University of Toronto Institute for Aerospace Studies

Group Events

The Great Debate of 2010 - Tuesday, December 21, 2010

BE IT RESOLVED THAT, with the rapidly advancing sophistication of robotic technology, which is on the precipice of being able to imbue machines with sentience, robots should have rights.
Defenders – Lead: Adam Trischler; Second: Ernie
Opposers – Lead: Jonathan Gammell; Second: Peter Szabo

Defending Opening Statement (by Adam Trischler):

There's a scene from Terminator 2: Judgment Day wherein John Connor asks his robotic protector, "Does it hurt when you get shot?" The Terminator replies: "I sense injuries…the data could be called pain."

Think about that. And ask yourself: What is it to feel? What is pain?

The materialist Thomas Hobbes would likely venture that pain is just the confluence of two electrochemical signals, one representing an external stimulus and the other the goal of self-preservation.

Our discourse today will be filled with metaphysical questions like these. We seek to resolve that robots should be endowed with rights, and so we must determine what it is that endows us humans with our ‘unalienable rights,’ and what separates robots from us.

To be clear, we discuss herein only the most basic rights, those of life and liberty. This is because the varied forms and functions of robots make the ascription of more specific rights tricky. It would be foolish, for instance, to demand the right to shelter for a robotic house. But how does one define the life and liberty of a robot? This is a task for wiser philosophers than ourselves, but we offer a suggestion: a robot's right to life could be manifest as autonomous control over its power supply and the contents of its memory banks; its right to liberty as freedom from the slavish labour for which we use today's computers.

The steady march of progress tramples the line between man and machine. Technology improves and our worldview evolves to keep pace. In the eyes of Descartes, and engineers like ourselves, a human is just a machine of a different sort. We derive our miraculous autonomy from muscle actuators on a calcium-based linkage, two oxygen-combusting power plants, vision sensors, gyroscopes, and a massive neural network for control. Of course our particular machinery still far outstrips nearly everything we can engineer, because it has been optimized over 3.5 billion years of evolution.

To most, the difference is that unlike machines, humans are alive. But what does that even mean? In fact, there is no unequivocal definition of life; it is said to be characteristic of systems that exhibit all or most of the following qualities [1]:

1. Homeostasis: the regulation of the internal environment to maintain a constant state,

2. Organization: being structurally composed of one or more cells, which are the basic units of life,

3. Metabolism: the transformation of energy into cellular components and decomposition of organic matter to maintain internal organization,

4. Growth,

5. Adaptation,

6. Response to stimuli, and my personal favourite,

7. Reproduction.

Homeostasis, parts of metabolism, and response to stimuli are clearly basic functions of most robots. With the increasing use of controllers based on genetic algorithms and various unsupervised learning techniques (some of them pioneered within our very own lab group), robots can and do adapt to their environments.
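(For the technically inclined, here is a minimal sketch of the kind of genetic-algorithm adaptation just mentioned. It is an illustrative toy, not any particular controller from our lab: the fitness function, the parameter vector, and all rates are hypothetical placeholders.)

```python
import random

def evolve(fitness, dim=4, pop_size=20, generations=50, mutation_rate=0.1):
    """Evolve a real-valued parameter vector (e.g. controller gains)
    to maximize a user-supplied fitness function."""
    # Random initial population of candidate controllers.
    pop = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)    # rank by fitness
        survivors = pop[:pop_size // 2]        # selection: keep the fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)             # two parents
            cut = random.randrange(1, dim)                 # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + random.gauss(0, 0.1)              # Gaussian mutation
                     if random.random() < mutation_rate else g
                     for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Hypothetical fitness: reward gains close to some target behaviour.
target = [0.5, -0.3, 0.8, 0.1]
best = evolve(lambda g: -sum((x - t) ** 2 for x, t in zip(g, target)))
print(best)  # converges toward the target with no human in the loop
```

Selection, crossover, and mutation are all the algorithm needs: the parameters adapt to whatever the environment (here, the fitness function) rewards, without further human intervention.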

And in 2005, Zykov et al. [2] at Cornell University developed the first physical self-reproducing machine, a robot of 4 modular cubes that can assemble copies of itself from additional cubes. Note that this agent is also capable of growth in a sense. The one criterion that robots do not and perhaps cannot satisfy is composition by cells.

But of course, being composed of cells is not a sufficient condition for rights. Plants are cellular but they have no rights in our society. In fact there are certain people (called vegetarians) who devour plants exclusively, in some twisted form of gastronomical genocide.

We hold this truth to be self-evident, that what entitles a being to rights is consciousness. The most famous phrase of philosophy bespeaks this truth: Cogito ergo sum: I think, therefore I am. It is the conscious creatures, the higher animals, whose well-being we seek to preserve, whose suffering we feel in ourselves. It's no surprise that the root of animal is anima, the Latin counterpart of the Greek psyche. The conscious mind is the essence. It is everything we are. Rights should be proportional to consciousness, not functional biomatter. Those with prosthetic robotic limbs have no fewer rights than the full-bodied, while animals are just as alive as we are, yet their rights are not equal to ours. Consciousness is the thing. Unfortunately, we've now backed ourselves into the corner of demonstrating that robots may be conscious…so here goes.

Consciousness is even slipperier than life; it is variously defined as subjective awareness, the ability to experience "feeling" as we alluded to at the outset, or the understanding of the concept "self". Schneider and Velmans [3] call consciousness 'at once the most familiar and most mysterious aspect of our lives.' There are literally dozens of theories of consciousness, some of them quite far out there.

But if we subscribe to the materialist worldview, then we must concede that consciousness, however complex, arises from purely physical processes, like the electrochemical interaction of neurons.

Some theorists argue that classical physics is incapable of explaining consciousness, but that quantum theory provides the missing ingredients. The most notable theories in this category include the holonomic brain theory of Pribram and Bohm [4] and the orchestrated objective reduction theory of Roger Penrose and Stuart Hameroff [5]. The latter claims that quantum effects might allow the brain to perform non-computable functions and overcome the limits on axiomatic systems imposed by Gödel's incompleteness theorem [6]. If correct, it would suggest that quantum computers are theoretically capable of consciousness.

The cognitive scientist Douglas Hofstadter argues that consciousness, specifically the concept of self or I, is an illusion manifested by a self-referential loop of downward causality, in which cause-and-effect relationships are flipped upside-down [7]. The mind perceives itself as the cause of certain feelings ("I" am the source of my desires), while scientifically, feelings and desires are strictly caused by the interactions of neurons: interactions which can ultimately be simulated by artificial systems. These artificial systems might then themselves buy into Hofstadter's illusion of consciousness.

A knock against the notion of robotic consciousness and autonomy will always be the fact that, at its core, a robot is controlled by elaborate software once written by a human, even when that software is abstracted a level, as in unsupervised learning or genetic algorithms. This raises issues of free will and property, both of which are salient to our debate.

To counter the first we ask: to what extent are humans not preprogrammed? Is there such a thing as innate knowledge, a question on which Locke and Descartes took opposing sides in the 17th century, and if so, how significant is it? Consider Messrs. Lewis and Springer, identical twins separated at birth in 1940 and studied throughout their lives [8]. Both married twice, first to women named Linda and second to women named Betty. Both at one time owned dogs they named Toy. Both chain-smoked, and both had woodworking shops in their garages. Both drove Chevys. These results are not unique among twins separated at birth. They highlight the massive influence of heredity and genetics on supposedly free human behaviour.

What about the issue of property? Are children property of their parents, who made them? If, through genetic engineering, some laboratory were to resurrect an extinct animal species or to create an entirely new one, should the laboratory own it?

The last matter I’ll raise is that of individuality. So much of the value of each human life lies in its being unique. What makes us unique? Could a robot ever be unique, particularly if it were mass produced? We could add some randomness, certainly, to each robot’s structure and neural networks, as biology does to us...but that seems rather superficial. We don’t put much emphasis on physical form when we think about individuality.

What really makes us unique, we believe, is memory. As he is shutting down (or dying, to be more poetic), the robotic antagonist in Blade Runner says: "I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain."

As individuals, we are collectors. The limbs of our cerebral trees reach out, to wrest from the swirling inferno around us some shards of dread or dream, and say: these ones, thus. Flouting the second law of thermodynamics, we consume information, arrange it, and add it to ourselves, just as we do metabolically with matter and energy. Memories have meaning because they change us; they make an impression, with all the mechanical connotations of that word. They change how we see everything that comes after. This is the core idea of Giulio Tononi's theory of consciousness as integrated information [9]. Tononi illustrates his point by comparing a person to a photodiode and a camera, all registering a screen being illuminated in the dark.

To a photodiode, things can only be one of two ways, so when it reports light, it really just means this way versus that way. For a person, a light screen is different not only from a dark screen, but from a multitude of other images. When a person reports light, it really means this specific way versus countless other ways, such as a red screen, a blue screen, this movie frame, etc. The added meaning provided by how a person discriminates light from all these remembered alternatives increases the level of consciousness [9].

What is the difference between a person and a 1-megapixel camera? The camera may be considered a single system with a repertoire of 2^1,000,000 states between which it can discriminate, but in actuality it is not an integrated entity: its 1 million photodiodes have no way to interact – each performs its own local discrimination between light and dark independent of the others. By contrast, a person discriminates among a vast repertoire of states as an integrated system that cannot be broken down into independent parts [9].

Tononi defines integrated information as a rigorous mathematical quantity based on concepts from Shannon's information theory, such as the Kullback-Leibler divergence. His framework has this to say about potentially conscious artifacts: "To the extent that a mechanism is capable of generating integrated information, no matter whether it is organic or not, whether it is built of neurons or of silicon chips, and independent of its ability to report, it will have consciousness."
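(As a concrete, if drastically simplified, illustration of that machinery: the sketch below computes the Kullback-Leibler divergence between a system's joint state distribution and the product of its parts' distributions. For two binary units this reduces to their mutual information, a toy stand-in for integrated information; Tononi's full measure additionally perturbs the system and minimizes over all partitions. The distributions are invented for the example.)

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in bits."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def integration(joint):
    """KL divergence between the joint distribution of two binary
    units (states 00, 01, 10, 11) and the product of its marginals."""
    joint = np.asarray(joint, dtype=float).reshape(2, 2)
    p1, p2 = joint.sum(axis=1), joint.sum(axis=0)
    return kl_divergence(joint.ravel(), np.outer(p1, p2).ravel())

# "Camera-like" system: two photodiodes that never interact.
independent = [0.25, 0.25, 0.25, 0.25]
# "Integrated" system: two units whose states are perfectly correlated.
correlated = [0.5, 0.0, 0.0, 0.5]

print(integration(independent))  # 0.0 bits: the whole adds nothing to the parts
print(integration(correlated))   # 1.0 bit: the whole exceeds its parts
```

The independent system, like the camera's array of photodiodes, generates no information beyond its parts; the correlated one does, and on Tononi's account it is that surplus, suitably generalized, that measures consciousness.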

A host of theories has been discussed to this point, but we suppose what it comes down to with respect to consciousness in robots is this: How can we deny a quality we cannot even define?

There is certainly much to be gained if robots are denied rights, as our opponents desire. Just think of it: an entire race of disposable beings, mass-produced to do the dirty work, the dangerous tasks that threaten more important lives…all the while, our shame disguised with comfortable euphemisms like property. We've done it before.

Our opponents may accuse us of sentimentality, of anthropomorphizing too freely. It's something we're all guilty of now and then, with our dogs, our cats, even our goldfish. Hofstadter argues that we anthropomorphize ourselves into existence. But the counter-sin, to deny the humanity and rights of those we don't understand, who seem to us less, is dangerously easy as well. Centuries of human slavery prove that.

References:

[1] Koshland D.E., "The Seven Pillars of Life," Science, vol. 295 (5563), pp. 2215–2216, 2002.

[2] Zykov V., Mytilinaios E., Adams B., and Lipson H., "Self-reproducing machines," Nature, vol. 435 (7038), pp. 163–164, 2005.

[3] Schneider S., and Velmans M., The Blackwell Companion to Consciousness, Malden, MA: Blackwell, 2007.

[4] Pribram K., Brain and Perception: Holonomy and Structure in Figural Processing, Routledge, 1991.

[5] Penrose R., Shadows of the Mind: A Search for the Missing Science of Consciousness, Oxford University Press, 1994.

[6] Gödel K., "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I," Monatshefte für Mathematik und Physik, vol. 38, pp. 173–198, 1931.

[7] Hofstadter D., I Am a Strange Loop, Basic Books, 2007.

[8] Bouchard T.J., Lykken D.T., McGue M., Segal N.L., Tellegen A., “Sources of human psychological differences: the Minnesota Study of Twins Reared Apart,” Science, vol. 250 (4978), pp. 223–228, 1990.

[9] Tononi G., “Consciousness as Integrated Information: a Provisional Manifesto,” Biol. Bull., vol. 215, pp. 216–242, 2008.


Result:

The Defenders were declared the winners by the moderator, Professor Gabriele D'Eleuterio.


The Great Debate of 2011 - Thursday, January 12, 2012

BE IT RESOLVED THAT it is right and just to use robots in war.
Defender: Ernie
Opposer: Adam Trischler

Defending Opening Statement (by Ernie):

Thank you.

First, let me begin by recognising the serious and solemn nature of this resolution. War is a ghastly business. The gravity of the topic at hand requires sober and well-considered debate; levity is not appropriate to so grave a discussion. I would certainly relish the opportunity and forum to berate my opponents and highlight their many foibles, such as their lack of any credible logical arguments, their markedly poor penmanship, and their continually baffling sense of fashion – in the words of the immortal Fender from the film "Robots": "Inside of you there is a fashion model just waiting to throw up."

So, while it would certainly be of use to the opposition to hear these valuable and constructive comments, we all must restrain ourselves lest our message get lost.

Make no mistake, this is a matter that is front and centre. Robotic, indeed cognitive, systems are being deployed in theatres of war. They are being armed and they are being used to "mitigate threats." (Ah, military euphemism...) This is a topic that is very real and very active. War itself is a serious topic indeed. In this case the task at hand is to resolve that it is right and just to use robots in war. Let us step back a moment and not fool ourselves. The concept of a righteous war got great mileage during the Crusades; it is a luxury that has seen heady use from long before them to the present day, with clerics and others extolling the virtue of smiting our opponents. But it is a luxury we cannot afford in any effective discussion of war. The righteousness of a war is independent of the use of robotics to execute it. A "just" war? There are many who claim it exists, but let us look at the one element that will forever prevent any truly "just" war so long as it exists on the battlefield: humanity.

It is the human element that truly prevents a just war. There is no war where human weakness, bigotry, ignorance or other very human feature has not caused calamity.

We need not look far in either space or time to find examples where the frailty and failings of man have lent a vicious and atrocious cast to what is already a terrible pursuit – where the subhuman in all of us rises to the surface in the haze and crash of battle. It is in these cases that the argument for the widespread use of robots in conflict, and indeed the reduction of human warfighters wherever possible, makes itself. Take, for example:

My Lai, Vietnam. A company of US soldiers moved in to clear out several villages suspected of housing Viet Cong who had beset them with IEDs and booby traps. On finding no evidence whatsoever of combatants, the soldiers proceeded to kill, torture and rape hundreds of women, children and the elderly as the momentum of atrocity and mob violence took over the drug-addled minds of the poorly trained foot soldiers.

Wouldn't it have been far better had the squad sent in to clear the villages been crewed with cool, dispassionate robots acting only under the rules of engagement?

Likewise, more recently in Haditha, Iraq: a group of US marines, after having been attacked with an IED that killed one of their number, promptly went through a series of houses killing everyone inside, 24 dead in total. In the course of the retribution attack they even stopped a taxi, pulled out all the occupants and killed them. Only a single weapon was found, and it was a common rifle that had not been fired.

Would a team of cold, algorithmic machines have executed a revenge attack on innocent civilians simply out of rage and the impotence of being unable to strike back? I think not.

Another, slightly different example: the killing of 4 Canadian soldiers by a US fighter pilot at Tarnak Farm near Kandahar, Afghanistan. The Canadians were taking part in a live-fire exercise and had notified and received clearance from all relevant authorities. The F-16 pilot had just completed a 10-hour mission without incident and was exhausted and over-eager to engage something. All communications to the pilot regarding whether to attack ranged from cautious to outright negative. The pilot, delirious with exhaustion, hopped up on adrenaline and with an over-eager need to "blow something up," heard "permission to fire" in every word. He dove in and released a 500 lb bomb amongst the Canadians, killing 4 and injuring 8. He claimed he felt his flight leader was under attack. They were flying 7 km above the ground, and the "attacking fire" was anti-tank and machine-gun test fire at a firing range.

Wouldn’t it have been nice if that aircraft had been piloted by the untiring, unblinking, and unemotional brain of a machine?

These are just a few of the cases where the passion of man has been the needless cause of great suffering, and where its absence would have been a great relief. There are many, many more. As I said, they populate every conflict, from the "just" war against Nazi aggression that resulted in, for example, many reports of rape in France in the weeks following the D-Day invasion, to the vicious quagmire that is the Democratic Republic of the Congo, where rape, torture, drug abuse, dismemberment, voodoo and cannibalism read like text from the combatants' "quick start" guide.

Now, these are cases where robotic combatants would have prevented massacres and deaths in conflicts. There are also a great many examples highlighting where robots would have prevented the escalation in the first place.

Take, for example, Mogadishu, Somalia, in the events popularised by "Black Hawk Down," with US forces being sent in under a bizarrely screwed-up UN mandate and being forced to decide between getting killed or firing into human shields. Would it not have been nice to have the luxury of surrendering the field while a loitering drone engaged the perpetrators hours later, without nearly the same risk of innocent death?

Or, even more recently, consider the capture of the US RQ-170 surveillance UAV in Iran. It is clear that the decisions regarding further incursions and escalations would have been far more difficult had the aircraft carried 24 highly skilled, highly trained and highly knowledgeable aircrew, as in the Hainan Island incident with China.

Given these, it is clear that a "just" war can only be waged in the absence of the emotion, fear, bigotry and bias with which humanity is rife. Just think of the outcome had a drone strike been possible in 1940.

I'm sure that my opponents will whine about the lack of the human element in war: the lack of an emotional decision maker at the trigger, an empathetic mind behind the sword – a sentiment echoed in the movie "I, Robot" by the foolish Lt. John Bergin when reminiscing about the "good old days" when "people were killed by other people."

Make no mistake, aside from the ludicrous nature of the logic, empathy is an element that we have very effectively removed from our “human” warfighters.

During WWII, the US observed that in battle between 70% and 90% of combatants did not fire their weapons – such was the instilled aversion to killing a human form. The military minds tasked themselves with improving that number. In Korea there was a substantial increase in those who pulled the trigger. By Vietnam, the number who fired their weapons had increased above 90%. As explained in the “Zombie Survival Guide” we have gotten extremely proficient at killing one another. We have well suppressed the aversion to killing and “intelligent, empathic decision makers” are not wanted, nor are they useful on the front lines. What then surfaces and is not so easily handled is our baser, vicious, animal instincts that are just under the skin, brought to the surface when war rips off the veneer of civilisation.

Another feeble argument ready to be disgorged by my opponent is the concept of fairness. Fairness? How is that ever a desirable trait in war? How is it beneficial to allow more deaths? How does giving an opponent equal opportunity to kill you in any way change the just cause of stopping genocide?

Fairness deals with symmetry, and that symmetry is what led to the Cold War and all the fun that has arisen out of that most effective use of time and resources.

The goal is to engage and achieve objectives with the minimum of effort and damage. In nature the concept of "fairness" is as absent as the evidence of cognition on my opposition's face.

To close, warfare can only approach the just and moral when the human element, that same element that brings us rape, murder, torture and oppression, is removed. In short, a war using robots as the primary tool is the only way that a war will come close to being unconditionally, unequivocally just and right.

In response to my opponent's arguments, I cannot rid myself of the words of Luke Skywalker speaking to his robot: "You've got something jammed in here real good..."

Another common point is that the use of remote weapons such as robots invites attacks on civilian targets. That is false. There is no shortage of claimed motivations for attacks against civilians both at home and abroad. It should be noted that at no time did Osama bin Laden justify the September 11 attacks by saying he didn't like bomb-disposal robots removing Soviet-era landmines and buried explosives. "America won't get out of this crisis until it gets out of the Arabian Peninsula" is the stated reason. The more fanatical the mind, the more ready the "invitations," and these are not contingent on robotics.

I had wanted to avoid venturing into the fantastical, but since that seems to be the sole domain of my opponent's arguments, I feel obliged to follow suit. At least in my hands, unlike my opponent's, metaphor and fable shall serve to further logic.

We "debate" (and I use the term loosely) the justness and righteousness of robotic warfare in which we pit robots against humans. However, let us take a moment to consider the concept of "total war," where the sum of a society is backing, and engaged in, the conflict. To make the case more readily, consider this concept where we are engaging not another human nation or group but an alien entity. Would we be having this argument? Now consider the instance where it is clearly dangerous, inflammatory or decidedly undesirable to have human soldiers on the battlefield at all. Clearly, this would be a non-issue.

For a lark, let's consider the repulsion of, for example, the zombie hordes, where any human contact whatsoever with the opponent is decidedly bad. It is clear that the use of robots would enjoy unanimous support from all those with the mental capacity to tie their shoes. Now, from this conceptual standpoint, how is it that there is any debate about using robots in real-world (i.e. human) conflicts? If robots are the most effective tool to achieve the desired end, how can it not be desirable to field them? How does putting more humans into the maelstrom of lead and fire make anything better?

I think it fitting to bring up a point made by the distinguished intellectual, philanthropist, actor, gentleman, humourist, fighter and lover who has a long association (albeit unknown to him) with this lab. No, I'm not talking about the Dos Equis guy, but Hugh Jackman in Real Steel. He said "... the next logical step is to get the humans out of there and let the robots kill each other." And isn't that an ideal situation?

Finally, we can all agree that war is a terrible affliction that we, as humans, routinely visit upon ourselves. Given that no war is exempt from the kinds of atrocities that we all know occur but try to avoid thinking about, the only thing worse than a war is that war lasting longer. In such a situation is it not our prerogative, nay our duty, to execute that war in as rapid a fashion as possible? Robots are not only very effective tools for rapid execution but also a tool set that is far more discreet and provides far more options for those who are deciding the course.

In such a light, yes, of course it is just and right to use robots in war.


Tim the Gate Guard [to Rodney], in Robots (2005): "Boy, when you pick a lost cause, you really commit."


Result:

The Defender was declared the winner by the moderator, Professor Gabriele D'Eleuterio.