Machines to Run Society?

[Image: Sum of all Thrills Robot Arm by tom.arthur]

Many people have an aversion to automated machines and computers running things.

There are likely two reasons for this. The first is the lack of trust we place in automation, largely because of its track record: machines have proved unreliable in the past, and have a lot to do to regain our trust.

Secondly, machines make mistakes that are distinctly "machine" in nature, exposing human qualities we had taken for granted. In other words, they miss seemingly obvious details, or make mistakes bound up with the human experience.

So it's not surprising that people don't really trust machines yet. They're not as good as humans in some areas, yet they have surpassed us in others. This also breeds discontent when humans see their jobs disappear as a result.

It's a shame that machines have developed such a bad reputation, because they don't really deserve it. Most of their faults have been inherited from humans. There are several ways that humanity has let down its automated creations.

The first is planned obsolescence. Although high investment in automation can be justified by the promise of reduced ongoing labour costs, robotics companies are subject to the same commercial pressures as any other manufacturer. Their survival depends on the robots they build breaking down, requiring ongoing maintenance, losing desirability, or being improved upon in the future; they simply couldn't exist otherwise. Planned obsolescence means corners are cut, features are withheld, and the cheapest parts are used. My own experience in manufacturing testifies to this - it is the way all consumer manufacturing works.

Then there are limitations caused by the education of the engineers and the diligence of the programmers. They are human, of course, and subject to human error, laziness, lack of enthusiasm, and many other flaws. These flaws are easily passed on and manifest themselves in a variety of ways, including unreliable artificial intelligence and a failure to account for unforeseen factors or complications.

In addition, machines have been held to unrealistic expectations ahead of schedule. By the time their capabilities catch up with what was promised, their reputation has already been tarnished. This is a social issue - the public and the media should be better informed about the true capabilities of technology, and their expectations should not be distorted in the name of publicity.

Machines also have a bad reputation because they cost people their jobs. This is resented because, at the current level of technology, most of us simply must work to live. But we should not fear the machines taking all the work.

There will always be something for us to do. As technology provides easier access to our basic needs, thereby reducing our dependence on money, we will see a shift into more creative, scientific, technical, social, and utilitarian roles. The machines will do the mundane, repetitive work, freeing us up for far more enjoyable careers - careers where we use our human minds to do jobs that simple machines cannot. Whether or not we retain this reliance on work to survive, the face of work is changing dramatically as a result of automation. We should be happy.

Can we trust machine intelligence to make the right choices to run society?

So would politics be one of the jobs that machines can do, or should that be left to humans?

Firstly, ask yourself: has humanity ever done a good enough job? Our history is littered with war, corruption, and power lust, and organised politics often amounts to fighting over resources. The current political system is built on absolute beliefs, having more in common with dogmatic religion than with evolving, constantly re-evaluating science. It is also all-encompassing in scope - you vote for a whole package, even if you don't agree with all the components in that package.

The essence of politics is fundamentally complex. Our decisions are based on personal perceptions and biases, are subject to irrational, abstract morality, and continually fail to address the nature of change.

Change, as I believe most people familiar with the work of Kurzweil would agree, is the only thing we can be sure of. As a political example, capitalism helped create a boom in society, but now automation may make the general concept of labour obsolete, virtual reality may move our needs away from money and towards energy, and nanotechnology could change the way we think about property, ownership, and scarcity.

Where does that leave the idealism of free enterprise?

This is why politics cannot be run on ideals. People's ideals will always change; even what could be considered the "correct" ideal will be reshaped by its environment or other external influences.

So essentially, any sort of idealism is an outdated way of running society. At its very essence, it fails in its purpose of deciding what is best for us because it does not take into account the nature of change. Additionally, idealism is by its nature based on a single perception, and a perception is subject to personal bias and morals and, most importantly, is static. It is not "moral" for our fates to be decided with "moral" bias, because morality is relative.

The only sustainable system of power is a scientific one: a "decision engine" without the bias of "perception" - a system that is able to incorporate change. Would a technical, scientific method of deciding how society should function be more objective? Should there even be objectivity in politics?

As any politician will know, even the simplest decisions are highly complex when people are involved. People function on that highly "unscientific" system known as emotion. Emotions, not rationality, are the driving force behind decisions and perspectives. Politicians currently exist because there is a need to relate to people in an emotional way - but why should emotion play a part in how society is run?

If a car breaks down, the mechanics don't take an emotion-fuelled vote on how to fix it. They look at the problem and solve it in a technical way. People differ from cars, of course; their needs are based on complex emotions.

But here is the fundamental question. Would it not, at some point, be theoretically possible to deal with emotions with the same scientific objectivity as we would deal with any other technical problem?

We really need to ask ourselves if emotions should play a part in decisions on how society is run, or whether politics really should be objective.

If politics is supposed to be objective, it should be subject to scientific method and technical processes. Decisions in the traditional sense could become irrelevant, with AI working to find adequate solutions to a problem rather than making a choice based on which side of the debate it wants to align itself with.
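To make that idea concrete, here is a minimal sketch in Python (with entirely invented options, metrics, and thresholds) of what "finding adequate solutions" rather than picking a side might look like: every option that meets the agreed criteria is surfaced, and the criteria, not allegiance, do the deciding.

```python
# Toy illustration only (not a real policy engine): candidate options are
# scored against measurable criteria, and every option that clears the agreed
# thresholds is reported, rather than one "side" being chosen. All names and
# numbers here are invented for the example.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    metrics: dict  # hypothetical projected outcomes for this option

def adequate_options(options, thresholds):
    """Return every option whose projected metrics meet all the thresholds."""
    return [
        opt.name
        for opt in options
        if all(opt.metrics.get(k, float("-inf")) >= v for k, v in thresholds.items())
    ]

options = [
    Option("policy_a", {"harm_reduction": 0.7, "cost_efficiency": 0.4}),
    Option("policy_b", {"harm_reduction": 0.5, "cost_efficiency": 0.8}),
    Option("policy_c", {"harm_reduction": 0.2, "cost_efficiency": 0.9}),
]
thresholds = {"harm_reduction": 0.4, "cost_efficiency": 0.3}

print(adequate_options(options, thresholds))  # ['policy_a', 'policy_b']
```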

Of course, relying on such an impersonal way of making decisions could prove inhumane, perhaps even dangerous. Yet the current system will never reach any kind of equilibrium as long as there is change.

There will be issues at first, of course. Computers are far from perfect. But by addressing the reasons for this, we can foster a much better environment in which automation becomes more reliable, both physically and in its decision making.

Only when we are building automation and decision-making computers on this basis will we be well placed to judge the real ability and adequacy of machine-based political systems.

Comments

Darren Reynolds said…
I think you make a balanced, moderate case for a way forward in how decisions are made.

Today, far too many decisions made by our politicians are irrational and do not, in the long run, do us much good.

The best current example is drugs policy. All the evidence is telling us that we've got it wrong. Abusers keep taking heroin, keep placing a great burden on social and health care systems, not to mention the criminal justice system, and keep dying.

In the meantime, the drug barons make all the profits, which are kept artificially high by the efforts of customs and police.

Properly conducted scientific analysis of the situation, such as that which previously was undertaken by Prof Nutt's team at the Advisory Council on the Misuse of Drugs, suggests our current policy is making things worse, not better. LSD and ecstasy are less dangerous than alcohol, and have potential therapeutic uses that are not explored because of prohibition.

It's the same with crime. Most crime is committed by people who have been to prison, which is a strong indicator that prisons broadly don't work either as a deterrent or as rehabilitation. That's not to say all prisons don't work. Some great results are being achieved where the necessary resources are put in. Restorative justice produces much better results.

Yet the tendency is for people to want tougher, harsher sentences for ever more minor crimes, against the evidence.

Perhaps here is a case for hiring one of your machines.
Stu said…
2 excellent points about how current politics is getting it wrong. This is a clear illustration that emotion probably shouldn't be making these kinds of decisions.

I guess the debate should incorporate the possibility that politics could become more objective, even if it's not machines making the decisions, but still humans. Handing the responsibilities to machines could be much further down the line, if ever.

Thanks for this further food for thought.
I can hear President Bender saying, "Kiss my shiny metal ass!" and then belching fire.

Machines could definitely speed up some processes of government but I think the final say for important issues should be left to humans.

I'm sure that well-designed and carefully programmed AI units will be able to provide numerous logical options, reasoned from an analysis of the pros and cons of various situations, all grounded in a set of logical rules and guidelines.

At the least, using AI in political affairs could help to evolve human thinking to be more logical rather than emotional about politics.

Maybe they could be programmed along the lines of this basic set of rules (a rough sketch of how they might be encoded follows the quoted text below):


Neo-Tech.com (reprinted with permission):

The Constitution of the Universe (1976)

Preamble

* The purpose of conscious life is to live creatively, happily, eternally.

* The function of government is to provide the conditions that let individuals fulfill that purpose. The Constitution of the Universe guarantees those conditions by forbidding the use of initiatory force, fraud, or coercion by any person or group against any individual.

* * *

Article 1

No person, group of persons, or government shall initiate force, threat of force, or fraud against any individual's self or property.

Article 2

Force is morally-and-legally justified only for protection from those who violate Article 1.

Article 3

No exceptions shall exist for Articles 1 and 2.

* * *

Premises

1. Values exist only relative to life.

2. Whatever benefits a living organism is a value to that organism.

3. Whatever harms a living organism is a dis-value to that organism.

4. The value against which all values are measured is conscious life.

5. Morals apply only to conscious individuals.

* Immoral actions arise (1) from individuals who harm others through force, fraud, or coercion and (2) from individuals who usurp, degrade, or destroy values created or earned by others.

* Moral actions arise from individuals who honestly create and competitively produce values to benefit self, others, and society.
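
As a rough illustration of the suggestion above, the three articles might be encoded as simple rule checks, sketched below in Python. The Action fields and the notion of "initiated" force are assumptions made purely for illustration - and, as Darren's reply below points out, the genuinely hard part is interpreting what counts as force, fraud, or property, which code like this does not solve.

```python
# Toy sketch of encoding Articles 1-3 as rule checks. The Action fields and the
# idea of "initiated" force are assumptions made for illustration; interpreting
# what actually counts as force, fraud, or property is the hard, unsolved part.

from dataclasses import dataclass

@dataclass
class Action:
    actor: str
    target: str
    uses_force: bool = False
    uses_fraud: bool = False
    in_response_to_violation: bool = False  # protection against a prior Article 1 breach?

def violates_article_1(action: Action) -> bool:
    """Article 1: no initiation of force, threat of force, or fraud."""
    return (action.uses_force or action.uses_fraud) and not action.in_response_to_violation

def permitted(action: Action) -> bool:
    """Articles 2 and 3: force is justified only as protection, with no exceptions."""
    return not violates_article_1(action)

theft = Action(actor="A", target="B", uses_fraud=True)
defence = Action(actor="B", target="A", uses_force=True, in_response_to_violation=True)

print(permitted(theft))    # False - initiatory fraud violates Article 1
print(permitted(defence))  # True - protective force is allowed under Article 2
```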
Darren Reynolds said…
The constitution might perhaps be more simply put as, "Thou shalt be nice unto one another".

The trouble with codes of law is that they are always open to interpretation, are frequently contradictory and require value judgements.

In the constitution, what counts as "value" or "dis-value"? If someone takes some of my food, is that a dis-value? But what if I'm grossly overweight and at risk of being chased by lions? What counts as 'property'? If I walk on a patch of land, is it my property? Or do I have to sleep on it? Or sleep on it every day for several weeks? Or put a fence round it? What if unbeknown to me someone else has been doing that for years, gone away for a while, and comes back? What if he never comes back? Is it his, or mine?

It's easy to contrive other, probably better examples, but the point is that whatever the legal system, arguments arise when there are differences in interpretation or contradictory rules. The only way to avoid such differences is to specify completely every situation that could possibly arise, which a bit of maths will tell you requires more energy than there actually is.

And of course, once there is a machine capable of making these value judgements for us, we have a machine that frankly isn't going to give two hoots, much as we don't sort out the squabbles between two crows over a worm.
