Tuesday, 14 September 2010

How Designer Babies Highlight Society's Immaturity

The question of designer babies is usually met with disdain. You don't even have to be religious to object to the idea of customising a human before it's born. Indeed, this concept doesn't just "go against nature", it makes us question what it means to be human.

The possibility of customising an embryo with the view to having an "enhanced" child opens up a veritable test tube of questions. What are the implications of being able to set a child's intelligence, their strengths, their abilities?

Then there are the questions that really hit a nerve: "Would people choose not to have a black baby, knowing it will be subject to persecution and prejudice?" The whole issue is surrounded by frightening dilemmas.

The problem is, it's already here. We currently screen embryos for birth defects such as spina bifida, and many would argue that the prevention or removal of deficiencies is a form of enhancement.

Of course, we can try to separate prevention of negative from implementation of positive. Then maybe the fascists - I mean conservatives - among us could make a law preventing any form of positive enhancement - but would that be ethical? If we have the potential to allow someone to have a 500 year lifespan - is it right to withhold that from them before they're born and can make a choice about it?

Dr Robert Sparrow makes the profound observation that a child can never reprimand its parents for not enhancing it - because doing so would have meant choosing a different embryo, and the child wouldn't have been born at all. However, this is only true for embryo screening, so it's a bit of a short-term argument and, in my opinion, a moot point.

Laws are like band-aids on cancer

I frequently point out on this blog and elsewhere that laws and restrictions are not solutions under any circumstances, and this is especially true when it comes to technology and its ability to undermine and disrupt our paradigms. Attempts to control by prohibition are primitive, ineffective, and often unethical; they have unforeseen and unrelated side effects, and they are usually made for the wrong reasons. The issue of designer babies and human enhancement needs far more thought than can be provided by narrow-minded, rule-setting, waste-of-space bureaucrats.

We passed the point long ago at which lawmakers could anticipate and knowledgeably counteract the dangers arising from technological developments. Technology is enabling these society-altering options at a pace that can neither be kept up with nor really understood. These options change our paradigms, yet we attempt to create rules based on the old ones. Just look at the feeble attempts to control the Internet as a prime example.

If we get this right, we could have a society of healthy, intelligent, long-lived (and therefore possibly wise) super humans. With this being the potential, how can we ever hope to keep it at bay forever?

Turning what we know on its head

When significant pre-birth human enhancement does arrive, there will still be many ethical issues and implications to face, and we need to be thinking about them now. For example, it probably won't take us long to acquire a disdain for anything "less than perfect". While some definitions of perfection will obviously vary, some won't - a longer lifespan and a higher intelligence will be desirable to most people - even if they choose not to use them.

Will we see a separation of the "enhanced" and "non-enhanced" - as if we don't have enough excuses to hate each other - or will the "non-enhanced" simply be subjected to peer pressure similar to that of mobile phone ownership? Either way, such enhancements would need to be affordable to the masses. Otherwise, we have another issue:

A Right or a Privilege?

What effect will economics have on designer children? Especially in countries with no socialised health care, it's likely that some enhancements will be the sole preserve of those with money, perhaps further exacerbating the wealth gap. If it's morally imperative not to withhold enhancement, how does this fit in with the monetary system?

Isn't being born healthy everyone's birthright in a civilised society? Or does that depend on the financial cost? (How exactly do we set the definition of healthy?) If it's not economically viable to give all desired enhancement to everyone - we will almost certainly end up with humans of varying levels of enhancement.

This will be significant because, among other things, it will affect the dynamics of the workforce. Those without enhancement, because they started off poor, would only be able to get the lower-paid jobs (if any at all) because of their "disability".

In the meantime, those with enhancement will have certain advantages. Suppose we breed one person who is more intelligent and charismatic than anyone else on the planet - and they run for president? Firstly, this intelligence could give them an unfair(?) advantage over all other human beings; but secondly, why shouldn't they be in charge, if they're likely to do a better job than anyone else?

A Real Game Changer

I could probably expand on these ethical dilemmas all day. But the common denominator is that our current systems, our current ways of thinking, aren't really compatible with our expanding options. Just as nanotechnology might undermine scarcity, and virtual reality might undermine our entire physical reality, "designer babies" open up our world to a host of new possibilities - many of which we are just not set up for. These possibilities will force us to question our deep-rooted beliefs and turn our society upside down.

Because of the effect on the individual, it's likely that this will be the tipping point - the point where our advances shift the balance of power from politics to technology.


Image by 5election.com

Wednesday, 8 September 2010

Could Artificial Intelligence Development Hit a Dead End?

Kurzweil and his proponents seem to be unshakable in their belief that at some point, Advanced Artificial General Intelligence, Machine Sentience, or Human Built Consciousness, whatever you would like to call it, will happen. Much of this belief comes from the opinion that consciousness is an engineering problem, and that it will, at some point, regardless of its complexity, be developed.

In this post, I don't really want to discuss whether or not consciousness can be understood - that's something for another time. What we need to be aware of is the possibility of our endeavours to create Artificial Intelligence stalling.

Whatever happened to...Unified Field Theory?

It seems sometimes, the more we learn about something, the more cans of worms we open, and the harder the subject becomes. Sometimes factors present themselves that we would not have expected to be relevant to our understanding.

Despite nearly a century of research and theorising, Unified Field Theory remains unresolved. There are other scientific theories that we have failed to completely understand - some that have gone on for so long that people are losing faith in them and no longer pursuing them.

Whatever happened to...The Space Race?

Some problems are just so expensive that they are beyond our reach. While this is unlikely to be true forever, it could have a serious and insurmountable effect on Artificial Intelligence development. Exponentially increasing computer power and other technology should stop this being a problem for too long, but who knows what financial, computing, and human resource demands we will find ourselves facing as AI development continues.

Whatever happened to...Nuclear Power?

Some ideas just lose social credibility, and are then no longer pursued. If we are able to create an AI that is limited in some way and displays a level of danger that we would not be able to cope with if the limitations were removed, it's most likely that development will have to be stopped, either by government intervention or simply social pressure.


I think it's unlikely that the progress of anything can be stopped indefinitely: that would require definite failure by every one of an unbounded number of civilisations. Anyone familiar with the Fermi Paradox and the "all species are destined to wipe themselves out" theory will have a good understanding of this concept. A 100% failure rate is just not statistically plausible indefinitely when it depends on a certain action never being performed by anyone.
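To make that statistical intuition concrete, here is a toy calculation - the probability value and civilisation counts are pure assumptions for illustration: if each of n independent civilisations permanently halts AI development with probability p < 1, the chance that all of them halt is p^n, which shrinks towards zero as n grows.

```python
# Toy illustration: if each of n independent civilisations permanently
# halts AI development with probability p < 1, the chance that ALL of
# them halt is p**n, which vanishes as n grows. Numbers are invented.
def all_halt_probability(p: float, n: int) -> float:
    return p ** n

for n in (10, 100, 1000):
    print(n, all_halt_probability(0.99, n))
```

Even with a 99% chance of any single civilisation halting, the chance of a thousand doing so independently is vanishingly small - which is the sense in which "100% failure" is statistically implausible.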

However, it is certainly likely that our progress will be stalled at some point. Even with the accelerating nature of technology, this could cause an untold level of stagnation.

We should try to stay positive of course, but it would be naive to ignore the chance that, for some time at least, we might fail.


I'm currently attending the Singularity Summit AU in Melbourne. There were a couple of talks on Tuesday night and there will be a whole weekend of fun starting on Friday night. :) Therefore you can expect a few posts to be inspired from my conversations with other future-minded thinkers over the coming days!

image by rachywhoo

Monday, 26 July 2010

Can We Restrain AI?

One of the main challenges in creating a greater-than-human Artificial Intelligence is ensuring that it's not evil. When we "turn it on", we don't want it to wipe us out or enslave us. Ideally, we want it to be nice.

The problem is how we can guarantee this.

Trap it

Some have suggested limiting the Artificial Intelligence by "trapping it" in a virtual world, where it could do no damage outside the confines of this environment. While this might be a safe solution, it could limit the AI to functioning only within the limits and reality of the virtual world. OK, so we might be able to program a perfectly realistic and isolated virtual world, but would this happen? Is there a parallel project to create such a "virtual prison" alongside AI research? And what if AI were to evolve or emerge from existing systems (such as the internet, or a selection of systems within it) before we could develop such a prison?

Then of course there is the possibility of it escaping. Certainly if it exceeded our own intelligence it might be able to figure out a way out of its "box".

It's worth noting at this point Nick Bostrom's speculation that *we* may be living in such an environment. This raises the question: What if the only way to develop greater-than human intelligence was to create an entire society of AIs, which develop their intelligence only as part of this society? This would significantly increase the processing power required for such a virtual prison.

Still, as we will see, trapping an AI is perhaps the best solution for restraining it for our protection.

Give it Empathy

Many argue that the answer is simple: just ensure that the AI has empathy. Not only is this idea fundamentally flawed in many ways; I don't believe it is anywhere near simple either.

The idea is that an AI aware of its own mortality could understand the pain it might cause to others and be more caring and considerate of our needs. So just like humans, it would be caring because it could understand how people felt... You see the first problem there?

Humans are products of their environments, shaped by their experiences. They develop empathy but empathy is complex and can be overridden by other factors. We are complex creatures, influenced by emotions, experiences, our body chemistry, our environment and countless other things. One would assume that for an AI to be "intelligent", it would be just as complex.

Even if an AI had an unbreakable measure of empathy for us, this would not guarantee our safety. An AI could decide that it is in our best interests to suffer an extraordinary amount of pain, for example as a learning experience. What if it decided to be extremely "cruel to be kind?"

It's unlikely empathy would be enough to protect us, because empathy still depends on the AI making the right decisions. Humans make bad decisions all the time. Often we even make good decisions that have bad consequences.

Suppose we could save a bus full of children, but had to run over a deer to do so. Most people would choose to save the children. To an AI with a bus load of other AIs, we could be the deer. It might be upset about hitting us, but it would have been for a "greater good".

This brings us to the next possibility.

Give it Ethics

The problem with ethics is that there are no right or wrong answers. We all develop our own very personalised sense of ethics, which can easily be incompatible with someone else's. An AI's own ethics could certainly become incompatible with our interests. One likely scenario would be where it saw itself as more creative with the ability to create more value than humans, and therefore worth more than us.

Then we need to consider what kind of ethics an AI could be created with. Would they be those decided by its creator? If one were to "program in" a certain set of ethics, would an AI keep them, or evolve away from them, developing its own ethics based on its own experiences and interactions? This demonstrates the main problem with trying to program limitations into an AI: if it could break its own programming, how could we guarantee our safety? If it could not, could it really be classed as "intelligent"?

This makes one wonder if we have been "programmed" with any limitations to protect our "creator", should one exist...

Artificially Real

It seems that much of the focus in developing AI is introspective, focusing on the inner workings of thought and intelligence. However, the effects of environment, experiences, social interaction, the passage of time, emotion, physical feelings and personal form are all fundamental factors in our own development. It's very possible that these factors are in fact essential for the development of intelligence in the first place. If this is the case, any degree of limitation would be undermined by external influences. How can we balance restraint while still achieving 'real' intelligence?

One thing is for certain - we need to fully understand child development and the influence of external factors if we are to understand intelligence enough to re-create it. Only then can we know if any kind of limitation is possible.

Friday, 9 July 2010

No Going Back

"I've lost everything, my business, my property and to top it all off my lass of six years has gone off with someone else."


The concept of perpetual association, the "permanent record", causes despair in people's lives every day, although we don't hear about it unless they decide to make sure we hear about it.

How can we blame people for going psycho when a criminal record stands in the way of their entire future, giving them nothing left to live for?

It's time to acknowledge and address the implications of Actuarial Escape Velocity with respect to crime and punishment. With indefinite lifespans, ruining people's lives will not only carry much greater significance; it will also run counter to the interests of society. Who wants their infinite lifespan cut short by a crazy gunman?

There seems to be this incredibly misguided notion that all criminals are evil, that they're born evil, and that they will always be evil. Quite apart from the fact that most crimes are not dangerous or violent and exist only as an attempt to prevent further such "crimes", how can we say that people won't change for the better? Furthermore, how can we say that people who have never committed a crime never will?

Strangely enough, our current method of locking people up with criminals to stop them being criminals isn't really working. Even taking the possibility of indefinite lifespans out of the equation, this insanity needs to be addressed. However, whatever the punishment, preventative measures, or rehabilitation methods, the question remains: how do we deal with the past when the future is infinite?

The problem is complex. Can someone who has committed cold blooded murder ever be trusted again? What if hundreds or even thousands of years have passed? Who knows what this trauma can do to a person, even in the extreme long term. Does it even matter, if they have been rehabilitated?

Then we throw another issue into the mix. Identity is something that even now is losing its meaning. When we can change our faces and official records, it's one thing. When we can change our entire bodies, it will be something else. When we can change our actual minds, our thoughts, memories, personalities, and emotions, then things will get considerably more complicated.

When life extension becomes a reality, we will have many questions to ask. One question that will become increasingly important is: "How important is the past?" We'll all have one, but can enough time turn us into someone else?

Wednesday, 23 June 2010

Why Are Dreams So Strange?

The mind is a curious thing. We believe the perceptions of our waking state to be the ultimate reality - the be all and end all of our existence. The way things work in our day-to-day world - our interactions, our actions, their implications, and our understandings - appears to be the true representation of reality.

So when we sleep, why do we not question the random strangeness and non-realities that permeate our slumber? What causes dreams to be so bizarre compared to our waking reality?

Dreams make no sense

It's not just a sensory thing. In dreams, even concepts are twisted and stretched and mixed together. I was once in a state of half-asleep dozing and had someone write down what I was saying - and it was complete nonsense, quite hilarious, with absolutely no reference to my experiences or thoughts of the day (or reality in general). Dreams seem to be a regurgitation of our minds, absent the framework that holds them together. This made me wonder: did humans dream before we became "intelligent"?

Research points to dreams coming from implicit memories and the neocortex rather than declarative memories from the hippocampus. In other words, dreams are formed from abstract concepts rather than definitive memories of situations. This would explain why these concepts often don't fit the reality we are used to, yet still have some grounding in it.

Consciousness within the dream world

With lucid dreaming, we can train our minds to recognise the strange differences between awake and asleep, and then initiate our consciousness while remaining in the strangeness of the dream. So this shows that consciousness is not necessarily a factor in the "falseness" of dreams, as many people can even control the reality of their dreaming state.

If consciousness can control our reality in our dreams, then how are our dreams any different from our awakened state? How do we know which reality (if any) is the right one?

It seems it is our perceptions, rather than our environments, that define our reality.

Thursday, 25 March 2010

Systems of Complexity

If we were to replace our bodies, one atom at a time, would we be the same person? One would think so. Over about 10 years, nearly every cell in our body will have been replaced at least once, with bone marrow among the slowest to renew; most of the body renews within 7 years.

Our bodies are an ecosystem not unlike any other. Take the sea – remove and replace it one atom at a time and no fish will notice. Replacing larger pieces will cause problems for its inhabitants, but it will soon renew itself. Replace a large proportion, and this will likely have huge implications for the entire ocean. As it is with humans, replacing one small section at a time would be easily accounted for and would not have any dramatic effect on the system as a whole.

This is a dramatic realisation – for what are we if not our bodies?

We are not single entities. We are systems, and we are made from smaller systems, which in turn are made from smaller systems. Cells take in matter from our food and convert these molecules to be part of us, replacing extinguished cells. We are not the matter that constitutes our body – we are its collection of systems.

The brain, the place that for some reason is believed to house the “mind”, is almost certainly more than just a material structure. As yet we have failed to deconstruct it to any significant level, but we do know that its functionality relies to some extent on electrical configurations. However, the brain is not some simple electrical circuit which can be reverse engineered by simply following current paths and measuring voltages.

Brain operations are less logical, hiding their true functionality in the encoding of patterns. It’s these patterns that are more of an accurate reflection of who we are.

In fractals, an equation determines a configuration that is iterated. This type of pattern, known as self-similarity, is an underlying mechanism of nature. Using fractal equations, we can now estimate how many leaves a tree will grow and how much carbon dioxide it will absorb. It is the "DNA" of reality.
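As a minimal sketch of such an iterated rule - a toy model, not a real botanical one - consider a tree in which every branch splits into a fixed number of smaller branches each generation. The leaf count then falls straight out of the self-similar rule:

```python
# Toy self-similar growth rule: every branch splits into `branches`
# smaller branches each generation, so a tree grown for `depth`
# generations ends with branches**depth leaf tips.
def leaf_count(depth: int, branches: int = 2) -> int:
    if depth == 0:
        return 1  # a single unbranched shoot
    return branches * leaf_count(depth - 1, branches)

print(leaf_count(10))  # 2**10 = 1024 leaf tips
```

The same tiny rule, iterated, produces the whole structure - which is the sense in which a fractal equation can act as the "DNA" of a form.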

We are all connected

We should also remember that while we are made up of systems, we ourselves are composite parts of a larger system, the ecosystem of the universe. While we may not feel that we’re “connected” with the Earth or the Sun or the Andromeda Galaxy because we see no physical connection, we are connected in a scientific and logical way.

All atoms are surrounded by orbiting electrons, which are negatively charged, meaning that the electron shells of every atom in the universe repel those of every other atom - in other words, you never actually touch anything. So your body is not even connected to itself, yet it is, albeit by electromagnetic forces. We are therefore all just as connected to the entire planet - and each other - as we are to our own arms.
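The size of that electron-electron repulsion can be sketched with Coulomb's law; the 1 nm separation below is an arbitrary choice purely for illustration:

```python
# Coulomb's law: F = k * q1 * q2 / r**2
# Electrostatic repulsion between two electrons 1 nm apart.
K = 8.9875e9          # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602e-19  # elementary charge, C

def coulomb_force(q1: float, q2: float, r: float) -> float:
    return K * q1 * q2 / r**2

force = coulomb_force(E_CHARGE, E_CHARGE, 1e-9)
print(force)  # roughly 2.3e-10 newtons
```

Tiny in absolute terms, but at atomic distances this repulsion is what makes surfaces "push back" - the everyday sensation of touch without any actual contact.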

Through the vacuum of space, the electromagnetic forces continue, but weakly, while gravity takes over to keep us connected to the rest of the universe. And every day we are learning more about how the universe is constructed, discovering phenomena such as dark matter that continue to reinforce our connected nature.

As well as our scientific connection, from a logical perspective, we are also as much a part of the universe as it is of us. Our actions affect the universe around us, and we enjoy the benefits or suffer the consequences of these actions accordingly. We rely on our surroundings to survive. The only thing holding these implications away from us is time. While we may not see the implications of our actions personally, they echo into the universe, which we are part of. Karma, in essence, is real.


So we could end it here on “we are all one”, but if that were the case, why do we all have minds that “feel” like they are separate? Is it an evolutionary accident or is there some divine purpose to our individual consciousnesses?

Perhaps individuality is a deliberate outcome of evolution, a mechanism to bring about the most efficient thought system possible? There is no doubt that humans have the ability to take over from evolution now, increasing the “power” of our consciousness, our life spans, and the efficiency of our resource usage to drive our own destinies.

Following this thought-train, we could provoke more questions than answers. Is consciousness determined by individuality? Could there be alien species that evolved without any concept of individuality? This would depend on what would be the best evolutionary advantage.

The big question is: could this individuality be an illusion, created in our own minds? We are, after all, not one entity, but a collection of systems and a system within a larger system.

This begs the question: is a brain a prerequisite for consciousness, or could consciousness evolve from any sufficiently complex collection of systems - for example, artificial software, a complex cell, or even a star? We are, after all, just different versions of the same kind of fractal patterns that make up all of nature.

What if we are just the dreams of stars?


Monday, 22 March 2010

Is Google too Big? Size isn't important, it's what you do with it that counts

There's no doubt that Google is the "Ford" of our day, pioneering a new industry that is changing our lives on a fundamental level. With this in mind, it was only a matter of time before its monopolistic driving of our destinies was called into question.

I recently attended a debate held by Spiked which asked the question "Has Google got too big?"

As a debate, it was relatively tame, given that no one person was strongly on the side of either "yes" or "no". However, this was due mainly to the complexity of the question, so as a discussion, it became rather in depth.

Size Doesn't Matter

Proponents of Google tried to void the argument, pointing out that the adjective "big" was irrelevant, that size had no implications, and that we should instead be asking whether Google is "good" or "evil". While this is true, there's no doubt that Google's size is intimately connected to its "morality".

Legally in the European Union, competition laws state that a company should "compete on its merits". Now, law is pretty complex, but from my understanding, this should technically forbid a company from using its own assets to give itself an advantage - as a Google example, manipulating search results to rank its own services higher. If anything, this would be far too obvious, and one would expect Google to use much more sophisticated and subtle techniques to manipulate its advantage, if it wanted to.

With Google having its fingers in so many online pies, it has a kind of power that is not directly proportional to its size. The possibilities presented by the data it collects combined with the sway it has in the public arena are largely unexplored. Really, its size is irrelevant. It could fire all but a few of its employees tomorrow and still be capable of some truly world-changing actions.

Knowledge is Power

Many people are concerned with Google's supposed disregard for privacy, but while this is a concern, there is more to worry about here. Rather than how it might customise the experience of individuals, we should be thinking about what it could do with this tremendous amount of data if it used it to analyse or even manipulate society as a whole.

With such an extensive array of demographic data, Google could easily draw some fairly accurate pictures, or even predictions, of society. This may have unforeseen implications. Analysis of such an immense cross-section of the population - its habits, its fears, its preferences, its cultures, its desires, its beliefs, its strengths, its weaknesses, and much more - could prove revelatory.

We've already seen how Google Maps and Google Street View can create controversy simply by putting information in public view. And these are single services. By analysing the relationships within the data it holds, Google could reveal unfairness, incompetence, conflicts of interest, corruption, or worse - with devastating consequences for those involved - or even put those affiliated with Google at a political advantage.

With growing data about resources and their usage, combined with complex analysis tools and a whole world worth of technical data and reports, Google could easily obtain scientific advantages.

Worst of all is their potential political influence. Even now, before their entry into the world of media, they could use their knowledge of current events and news to undermine government authority and manipulate opinions, actions, and even votes, making the political spin of the movie Wag the Dog seem like child's play.


There was much discussion at the debate about Google's level of innovation: how it is intimidating because there is so little innovation elsewhere, and how Google knocked Microsoft off its perch with innovation rather than anti-competition legislation. Therefore it should not be down to laws to challenge Google's dominance; instead, it would be better to see more newcomers challenge it (and the laws are ineffective anyway).

I would have to agree with this. It's not outside the realms of possibility that another company could come along and take Google down a peg or two. Let's not forget Facebook, the number 2 site in the world, or the fact that in some areas of the world, Google is barely used.

Google should be praised for their innovation and not penalised for it with petty, jealous rules which stunt development and progress. Yet at the same time, we should be extremely wary of the power it can bestow on itself with such innovation.

It seems Google finds itself in this unique position because of innovation and ingenuity. The power is no longer with those who take it by irrelevant means of violence, guilt, social manipulation or political popularity. About time, too.

"The empires of the future are the empires of the mind" - Albert Einstein

Saturday, 20 March 2010

Machines to Run Society?

Sum of all Thrills Robot Arm by tom.arthur

Many people have an aversion to automated machines and computers running things.

There are likely two reasons for this. The first is the lack of trust we place in automation, mainly down to its track record: machines have proved themselves unreliable in the past, and need to do a lot to regain our trust.

Secondly, machines have been known to make mistakes that are "machine" in nature, exposing human qualities that we took for granted. In other words, they will miss seemingly obvious details, or make mistakes relating to the human experience.

So it's not surprising that people don't really trust machines yet. They're not as good as humans in some areas yet they have superseded us in others. This also leads to discontent when humans see their jobs disappear as a result.

It's a shame that machines have developed such a bad reputation, because they don't really deserve it. Most of their faults have been inherited from humans. There are several ways that humanity has let down its automated creations.

The first is the implications of planned obsolescence. Although high investment in automation can be justified by the promise of reduced ongoing labour costs, robotics companies are subject to the same market pressures as any other manufacturer. Their future survival depends on the robots they build breaking down, requiring ongoing maintenance, losing desirability, or being improved upon; they simply couldn't exist otherwise. Planned obsolescence causes corners to be cut, features to be withheld, and the cheapest parts to be used. My own experience in manufacturing can testify: this is the way all consumer manufacturing works.

Then there are limitations caused by the education of the engineers and the diligence of the programmers. They are humans of course, and subject to human error, laziness, lack of enthusiasm, and many other flaws. These flaws can easily be passed on and manifest themselves in a variety of ways, including unreliable artificial intelligence and failing to take into account unforeseen factors or complications.

In addition, machines have been held to higher than realistic expectations ahead of schedule. By the time their capabilities become what they were supposed to be, their reputation has already been tarnished. This is a social issue - the public and the media should be better informed as to the true capabilities of technology and their expectations not contorted in the name of publicity.

Then machines also have a bad reputation because they cost people their jobs. This is frowned upon because at the current level of technology most of us simply must work to live. But we should not fear the machines taking all the work.

There will always be something for us to do. As technology provides us with easier access to our basic needs, therefore reducing our dependence on money, we will see a shift into more creative, scientific, technical, social, and utilitarian roles. The machines will do all the mundane, repetitive work, freeing us up to have far more enjoyable careers, careers where we use our human minds to do jobs that simple machines cannot do. Whether we retain this reliance on work to survive or not - the face of work is changing dramatically as a result of automation. We should be happy.

Can we trust machine intelligence to make the right choices to run society?

So would politics be one of the jobs that machines can do, or should that be left to humans?

First, ask yourself: has humanity ever done a good enough job? Our history is littered with war, corruption, and lust for power, and organised politics is often little more than fighting over resources. The current political system is based on absolute beliefs, having more in common with dogmatic religion than with evolving, constantly re-evaluating science. It is also all-encompassing in scope - you vote for a whole package, even if you don't agree with all the components in that package.

The essence of politics is fundamentally complex. Our decisions are based on personal perceptions and biases, subject to irrational abstract morality, and continually failing to address the nature of change.

Change, I believe most people familiar with the work of Kurzweil would agree, is the only thing we can be sure of. As a political example, capitalism helped to create a boom in society, but now automation may render the general concept of labour obsolete, virtual reality may shift our needs away from money and towards energy, and nanotechnology could change the way we think about property, ownership, and scarcity.

Where does that leave the idealism of free enterprise?

This is why politics cannot be run on ideals. People's ideals will always change, even what could be considered the "correct" ideal will be changed by its environment, or other external influences.

So essentially, any sort of idealism is an outdated way of running society. At its very essence, it fails in its purpose of deciding what is best for us because it does not take into account the nature of change. Additionally, idealism is by its nature based on a single perception. A perception is subject to personal bias and morals and, most importantly, is static. It is not "moral" for our fates to be decided with "moral" bias, because morality is relative.

The only sustainable system of power is a scientific one; a "decision engine" without the bias of "perception" - a system that is able to incorporate change. Would a technical, scientific method of deciding how society should function be more objective? Should there even be objectivity in politics?

As any politician knows, even the simplest decisions become highly complex when people are involved. People function on the highly "unscientific" system known as emotion. Emotions, not rationality, are the driving force behind decisions and perspectives. Politicians currently exist because there is a need to relate to people in an emotional way - but why should emotion play a part in how society is run?

If a car breaks down, the mechanics don't take an emotion-fuelled vote on how to fix it. They look at the problem and figure out how to solve it in a technical way. People differ from cars, of course; their needs are based on complex emotions.

But here is the fundamental question. Would it not, at some point, be theoretically possible to deal with emotions with the same scientific objectivity as we would deal with any other technical problem?

We really need to ask ourselves if emotions should play a part in decisions on how society is run, or whether politics really should be objective.

If politics is supposed to be objective, it should be subject to scientific method and technical processes. Decisions as we know them could become irrelevant, with AI working to find adequate solutions to a problem rather than making a choice based on which side of a debate it wants to align itself with.
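To make the idea concrete, here is a deliberately simplistic sketch - my own illustration, not any real system - of what "finding an adequate solution" rather than "picking a side" might look like: candidate policies are scored against measurable criteria with explicit weights, and those weights can themselves be re-evaluated as circumstances change. All policy names, criteria, and numbers here are invented for illustration.

```python
# A toy "decision engine": score hypothetical candidate policies against
# explicit, measurable criteria instead of voting along ideological lines.
# Every policy, criterion, and number below is an invented illustration.

criteria_weights = {
    "wellbeing": 0.5,       # weights are explicit and revisable,
    "sustainability": 0.3,  # not fixed ideals
    "cost": 0.2,
}

# Each policy is scored 0..1 per criterion (in practice these would come
# from measurement and modelling, not opinion).
policies = {
    "automate_transit": {"wellbeing": 0.7, "sustainability": 0.9, "cost": 0.4},
    "subsidise_fuel":   {"wellbeing": 0.6, "sustainability": 0.2, "cost": 0.8},
}

def score(policy_scores, weights):
    """Weighted sum of criterion scores for one policy."""
    return sum(weights[c] * policy_scores[c] for c in weights)

def choose(policies, weights):
    """Return the policy name with the highest weighted score."""
    return max(policies, key=lambda name: score(policies[name], weights))

print(choose(policies, criteria_weights))  # -> automate_transit
```

The point of the sketch is that the weights - the part most subject to change - are data, not dogma: revising them alters the outcome without altering the machinery, which is the opposite of a static ideal.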

Of course, relying on such an impersonal way of making decisions could prove inhumane, perhaps even dangerous. Yet the current system will never reach any kind of equilibrium as long as there is change.

There will be issues at first, of course. Computers are far from perfect. But by addressing the reasons for this, a much better environment can be fostered in which automation can be more reliable, from both a physical and a decision-making point of view.

Only when we are building automation and decision-making computers on this basis will we be well placed to judge the real ability and adequacy of machine-based political systems.

Tuesday, 9 February 2010

Earth 2.0 - The Movie

"For the last 8000 years human history has been guided by Earth 1.0, an operating system dependent upon the relentless exploitation of both people and planet alike. Earth 1.0 promotes an obsession with money, profit and personal advantage. Earth 1.0 is sustained by artificial boundaries and stagnant institutions – all held in place by carefully designed weapons of mass destruction. Earth 1.0 cultivates ecological insensitivity and an unhealthy estrangement from the rest of the biosphere – so much so that the very integrity of the web of life has been compromised. In short, Earth 1.0 is corrupt and unsustainable.

In contrast, the operating principles of the all new Earth 2.0 upgrade are based upon global co-operation – between one another and with the rest of the web of life. Earth 2.0 promotes the dissolution of artificial boundaries and the creation of a sustainable human culture in accord with the rest of Nature."

"Symbiosis and cybernation will become buzzwords for the new paradigm."

Find out more at the official Blog of the Under-Construction Movie here: http://earth2movie.blogspot.com/