Robot rights for the wrong reasons?

We’ve all seen them in the movies. We’ve seen them laying waste to cities in The Terminator, and we’ve seen them run amok in I, Robot, too.

The fact of the matter is, however close we might be to building a functioning robot, giving such a contraption the wherewithal to reason and to puzzle over matters philosophical and metaphysical is still some way off.

From a technical standpoint, most of the pieces are in place to build a robot. However, from a computational point of view, the horizon is but a thin, thin line in the distance:

“The development of humanoid robots today focuses on three major areas: control of manipulators (what it can do); biped locomotion (how it can walk); interaction with humans.”

That last point is the hardest part. It requires that we first understand the human brain before we can hope to replicate its finer, more noble properties.

We’re not there yet. In fact, we’re quite some way off.

However, apparent contenders for the Turing Test prize notwithstanding, the Office of Science and Technology’s Foresight Centre commissioned a paper to anticipate the major social and technological trends of the next 50 years:

“The heavyweight philosophical missive behind robots’ rights is brought to you by Outsights, a management consultancy, and Ipsos MORI, the opinion poll organisation, the Financial Times reports. According to the authors: ‘If granted full rights, states will be obligated to provide full social benefits to [robots], including income support, housing, and possibly robo-healthcare to fix the machines over time.’”

While I have to applaud their foresight on the matter, I also look down upon these learned men with a furrowed brow and a wagging finger of displeasure, displeasure born of their thinking in such miserably straight lines.

Assuming these robotic citizens are suitably capable of living amongst us, how are we to presume that they would have any need of housing, income support or healthcare?

This is anthropomorphization at its worst. Might it not be reasonable to presume that such beings would need nothing of our societal faculties, and would instead choose some alternate means of support?

Fifty years is quite some time. In my opinion, plenty of time to figure out how to replicate the cognitive functions of the mammalian brain to mimic such high-level functions as logical reasoning, for example.

My dad asked me a great question, as only he can: “What do we want robots for?” And that’s a damned good question, too!

There are some sound reasons to surround ourselves with robots. For example, what of fire rescue, or any rescue situations where the environment is too hostile to the more fragile human frame?

The obvious application for robots is as the inevitable military ordnance, which is both ideal and lamentable in its predictability.

I, Robot. I think, therefore I need an oil change… and put an umbrella in, hold the ice!

What concerns me is not the ethical issue of whether we should or shouldn’t build such mechanical automata. My concern is more centered on whether we will be able to create something smart enough to hold down a day job and a conversation, yet not be afflicted by emotions or the encumbrances of being self-aware.

Based on the stuff I’ve been reading over the years in and around psychology and neuroscience, I get the feeling that there are no obvious ways of disentangling awareness from any other constituent part of the brain.

So for example, our capacity for pity, for compassion, altruism and the desire to protect our own might be spread hither & yonder among any number of lobes, nodules and flanges in the brain.

Let’s not forget, the human brain has been in the making for 4 million years. And the Mk. 1 model that preceded it is still doing pretty well right now in the great apes, such as the gorillas, chimpanzees and orangutans.

What if, in creating a mind with the will to walk into a burning building and the sense to assess danger and the value of life, we also create a mind that’s aware of the danger posed to its own life?

What then? Might the robot choose not to endanger its own life for the sake of a human? Might this robot value its life more highly?

In such a scenario, there would be no question of denying the robotic equivalent of social benefits, income support, housing and healthcare. Robots would be no less entitled than anyone among us who draws breath.

But then we have to deal head on with the deep, deep ethical issues of fashioning ‘artificial’ beings capable of rational thoughts, beings capable and all too willing to enforce whatever measures they deem fit in an act of self preservation.

We would have to reexamine the very definition of life itself, be that life mechanical or made of flesh & bone…

By Wayne Smallman

Wayne is the man behind the Blah, Blah! Technology website, and the creator of the Under Cloud, a digital research assistant for journalists and academics.