Thursday, 25 March 2010

Systems of Complexity


If we were to replace our bodies, one atom at a time, would we be the same person? One would think so. Most of the cells in our body are replaced over a period of roughly seven to ten years, with some tissues, such as bone, taking far longer to renew than others.

Our bodies are ecosystems like any other. Take the sea – remove and replace it one atom at a time and no fish will notice. Replacing larger pieces will cause problems for its inhabitants, but it will soon renew itself. Replace a large proportion, and there will likely be huge implications for the entire ocean. So it is with humans: replacing one small section at a time is easily accounted for and has no dramatic effect on the system as a whole.

This is a dramatic realisation – for what are we if not our bodies?

We are not single entities. We are systems, and we are made from smaller systems, which in turn are made from smaller systems. Cells take in matter from our food and convert those molecules into parts of us, replacing dead cells. We are not the matter that constitutes our body – we are its collection of systems.

The brain, the place that for some reason is believed to house the “mind”, is almost certainly more than just a material structure. As yet we have failed to deconstruct it to any significant degree, but we do know that its functionality relies to some extent on electrical configurations. However, the brain is not some simple electrical circuit that can be reverse-engineered by following current paths and measuring voltages.

Brain operations are less logical, hiding their true functionality in the encoding of patterns. It’s these patterns that are a more accurate reflection of who we are.

In fractals, an equation determines a configuration that is then iterated. This type of pattern, known as self-similarity, is an underlying mechanism of nature. Using fractal equations, we can now estimate how many leaves a tree will produce and how much carbon dioxide it will absorb. It is the "DNA" of reality.
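The leaf-counting idea can be sketched in code. This is an illustrative toy only – the branching rule, the function name and the numbers are my assumptions, not a real botanical model – but it shows how one iterated rule generates a self-similar structure whose properties can then be counted:

```python
# Toy model of a self-similar tree: every branch splits into the same
# number of sub-branches at each level. One simple rule, iterated,
# determines the whole structure -- the essence of self-similarity.

def branch_counts(levels, splits_per_branch=2):
    """Number of branches at each level of an idealised fractal tree."""
    return [splits_per_branch ** level for level in range(levels)]

counts = branch_counts(5)
print(counts)        # branches per level: [1, 2, 4, 8, 16]
print(sum(counts))   # total branches: 31
```

Counting the outermost level gives a rough proxy for the number of leaves; real fractal tree models work the same way in principle, just with rules fitted to measurements.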

We are all connected

We should also remember that while we are made up of systems, we ourselves are composite parts of a larger system, the ecosystem of the universe. While we may not feel that we’re “connected” with the Earth or the Sun or the Andromeda Galaxy because we see no physical connection, we are connected in a scientific and logical way.

All atoms are surrounded by orbiting electrons, which are negatively charged, meaning that the electron shells of atoms repel one another – in other words, you never actually touch anything. So your body is not even in contact with itself, yet it holds together, albeit by electromagnetic forces. In that sense, we are all just as connected to the entire planet – and to each other – as we are to our own arms.
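The repulsion claim can be made concrete with Coulomb's law, F = k·q₁·q₂/r². A minimal sketch, assuming two elementary charges one nanometre apart – the function name and the chosen distance are illustrative, not from the original post:

```python
# Coulomb's law: the electrostatic force between two point charges.
# Like charges (e.g. the electrons in two atoms' outer shells) repel,
# which is why surfaces push back before atoms ever "touch".

K = 8.988e9           # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602e-19  # elementary charge, C

def coulomb_force(q1, q2, r):
    """Magnitude of the force (N) between charges q1, q2 (C) at distance r (m)."""
    return K * q1 * q2 / r ** 2

# Two electron charges, 1 nanometre apart:
print(f"{coulomb_force(E_CHARGE, E_CHARGE, 1e-9):.2e} N")  # ~2.3e-10 N
```

A full account of interatomic forces needs quantum mechanics – electrons are not classical point charges in orbit – but the electrostatic repulsion is the part of the story this paragraph leans on.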

Through the vacuum of space, the electromagnetic forces continue, but weakly, while gravity takes over to keep us connected to the rest of the universe. And every day we learn more about how the universe is constructed, discovering phenomena such as dark matter that continue to reinforce our connected nature.

As well as our scientific connection, from a logical perspective we are also as much a part of the universe as it is of us. Our actions affect the universe around us, and we enjoy the benefits or suffer the consequences of those actions accordingly. We rely on our surroundings to survive. The only thing separating us from these implications is time. While we may not see the implications of our actions personally, they echo out into the universe, of which we are part. Karma, in essence, is real.

Individuality

So we could end it here on “we are all one”, but if that were the case, why do we all have minds that “feel” like they are separate? Is it an evolutionary accident or is there some divine purpose to our individual consciousnesses?

Perhaps individuality is a deliberate outcome of evolution, a mechanism to bring about the most efficient thought system possible? There is no doubt that humans have the ability to take over from evolution now, increasing the “power” of our consciousness, our life spans, and the efficiency of our resource usage to drive our own destinies.

Following this train of thought, we provoke more questions than answers. Is consciousness determined by individuality? Could there be alien species that evolved without any concept of individuality? That would depend on which conferred the greater evolutionary advantage.

The big question is: could this individuality be an illusion, created in our own minds? We are, after all, not one entity, but a collection of systems and a system within a larger system.

This raises the question: is a brain a prerequisite for consciousness, or could consciousness evolve from any sufficiently complex collection of systems – artificial software, a complex cell, or even a star? We are, after all, just different versions of the same kind of fractal patterns that make up all of nature.

What if we are just the dreams of stars?

Further reading

http://library.thinkquest.org/26242/full/ap/ap15.html
http://en.wikipedia.org/wiki/Fractal

Images courtesy
Jsome1 and carlitosway85
http://commons.wikimedia.org/wiki/File:Messier51_sRGB.jpg








Monday, 22 March 2010

Is Google too Big? Size isn't important, it's what you do with it that counts

There's no doubt that Google is the "Ford" of our day, pioneering a new industry that is changing our lives on a fundamental level. With this in mind, it was only a matter of time before this monopolistic driving of our destinies was called into question.

I recently attended a debate held by Spiked which asked the question "Has Google got too big?"

As a debate, it was relatively tame, given that no one was strongly on the side of either "yes" or "no". This was due mainly to the complexity of the question, so as a discussion it became rather in-depth.

Size Doesn't Matter

Proponents of Google tried to sidestep the question, pointing out that the adjective "big" was irrelevant, that size in itself has no implications, and that we should instead be asking whether Google is "good" or "evil". While this is true, there's no doubt that Google's size is intimately connected to its "morality".

Under European Union competition law, a company must "compete on its merits". Now, law is pretty complex, but from my understanding this should technically forbid a company from using its own assets to put itself at an advantage – as a Google example, manipulating search results to rank its own services higher. If anything, that would be far too obvious, and one would expect Google to use much more sophisticated and subtle techniques to press its advantage, if it wanted to.

With Google having its fingers in so many online pies, it has a kind of power that is not directly proportional to its size. The possibilities presented by the data it collects combined with the sway it has in the public arena are largely unexplored. Really, its size is irrelevant. It could fire all but a few of its employees tomorrow and still be capable of some truly world-changing actions.

Knowledge is Power

Many people are concerned with Google's supposed disregard for privacy, but while this is a concern, there is more to worry about here. Rather than how it might customise the experience of individuals, we should be thinking about what it could do with this tremendous amount of data if it used it to analyse or even manipulate society as a whole.

With such an extensive array of demographic data, Google could easily draw some fairly accurate pictures, or even predictions, of society. This may have unforeseen implications. Analysis of such an immense cross-section of the population – its habits, its fears, its preferences, its cultures, its desires, its beliefs, its strengths, its weaknesses, and much more – could yield startling revelations.

We've already seen how Google Maps and Google Street View can create controversy simply by putting information into public view. And these are single services. By analysing the relationships within all the data it holds, Google could reveal unfairness, incompetence, conflicts of interest, corruption, or worse – with devastating consequences for those involved, or a political advantage for those affiliated with Google.

With growing data about resources and their usage, combined with complex analysis tools and a whole world worth of technical data and reports, Google could easily obtain scientific advantages.

Worst of all is their potential political influence. Even now, before their entry into the world of media, they could use their knowledge of current events and news to undermine government authority and manipulate opinions, actions, and even votes, making the political spin of the film Wag the Dog seem like child's play.

Innovation

There was much discussion at the debate about Google's level of innovation: how it is intimidating because there is so little innovation elsewhere, and how Google knocked Microsoft off its perch with innovation rather than anti-competition legislation. Therefore it should not be down to laws to challenge Google's dominance; instead, it would be better to see more newcomers challenge it (and the laws are ineffective anyway).

I would have to agree with this. It's not outside the realms of possibility that another company could come along and take Google down a peg or two. Let's not forget Facebook, the number-two site in the world, and the fact that in some areas of the world Google is barely used.

Google should be praised for their innovation and not penalised for it with petty, jealous rules which stunt development and progress. Yet at the same time, we should be extremely wary of the power it can bestow on itself with such innovation.

It seems Google finds itself in this unique position because of innovation and ingenuity. The power is no longer with those who take it by irrelevant means of violence, guilt, social manipulation or political popularity. About time, too.

"The empires of the future are the empires of the mind" - Winston Churchill



Saturday, 20 March 2010

Machines to Run Society?

Sum of all Thrills Robot Arm by tom.arthur

Many people have an aversion to automated machines and computers running things.

There are likely two reasons for this. The first is the lack of trust we place in automation, due mainly to its track record: machines have proved themselves unreliable in the past, and have a lot to do to regain our trust.

Secondly, machines have been known to make mistakes that are "machine" in nature, exposing human qualities that we took for granted. In other words, they will miss seemingly obvious details, or make mistakes relating to the human experience.

So it's not surprising that people don't really trust machines yet. They're not as good as humans in some areas yet they have superseded us in others. This also leads to discontent when humans see their jobs disappear as a result.

It's a shame that machines have developed such a bad reputation, because they don't really deserve it. Most of their faults have been inherited from humans. There are several ways that humanity has let down its automated creations.

The first is planned obsolescence. Although high investment in automation can be justified by the promise of reduced ongoing labour costs, robotics companies are subject to the same commercial pressures as any other manufacturer. A company's survival depends on the robots it builds breaking down, requiring ongoing maintenance, losing desirability, or being improved upon in the future; it simply couldn't exist otherwise. Planned obsolescence causes corners to be cut, features to be withheld and the cheapest parts to be used. My own experience in manufacturing testifies to this: it is the way all consumer manufacturing works.

Then there are limitations caused by the education of the engineers and the diligence of the programmers. They are humans of course, and subject to human error, laziness, lack of enthusiasm, and many other flaws. These flaws can easily be passed on and manifest themselves in a variety of ways, including unreliable artificial intelligence and failing to take into account unforeseen factors or complications.

In addition, machines have been held to unrealistic expectations ahead of schedule. By the time their capabilities catch up with what was promised, their reputation has already been tarnished. This is a social issue – the public and the media should be better informed about the true capabilities of technology, and their expectations should not be contorted in the name of publicity.

Then machines also have a bad reputation because they cost people their jobs. This is frowned upon because at the current level of technology most of us simply must work to live. But we should not fear the machines taking all the work.

There will always be something for us to do. As technology provides us with easier access to our basic needs, therefore reducing our dependence on money, we will see a shift into more creative, scientific, technical, social, and utilitarian roles. The machines will do all the mundane, repetitive work, freeing us up to have far more enjoyable careers, careers where we use our human minds to do jobs that simple machines cannot do. Whether we retain this reliance on work to survive or not - the face of work is changing dramatically as a result of automation. We should be happy.

Can we trust machine intelligence to make the right choices to run society?

So would politics be one of the jobs that machines can do, or should that be left to humans?

First, ask yourself: has humanity ever done a good enough job? Our history is littered with war, corruption and lust for power, and organised politics is often based on simply fighting over resources. The current political system is based on absolute beliefs, having more in common with dogmatic religion than with evolving, constantly re-evaluating science. It is also all-encompassing in scope – you vote for a whole package, even if you don't agree with all the components in that package.

The essence of politics is fundamentally complex. Our decisions are based on personal perceptions and biases, subject to irrational abstract morality, and continually failing to address the nature of change.

Change, I believe most people familiar with the work of Kurzweil would agree, is the only thing we can be sure of. As a political example, capitalism helped create a boom in society, but now automation may make the general concept of labour obsolete, virtual reality may move our needs away from money and towards energy, and nanotechnology could change the way we think about property, ownership, and scarcity.

Where does that leave the idealism of free enterprise?

This is why politics cannot be run on ideals. People's ideals will always change, even what could be considered the "correct" ideal will be changed by its environment, or other external influences.

So essentially, any sort of idealism is an outdated way of running society. At its very essence, it fails in its purpose of deciding what is best for us because it does not take into account the nature of change. Additionally, idealism is by nature based on a single perception, and a perception is subject to personal bias and morals and, most importantly, is static. It is not "moral" for our fates to be decided with "moral" bias, because morality is relative.

The only sustainable system of power is a scientific one; a "decision engine" without the bias of "perception" - a system that is able to incorporate change. Would a technical, scientific method of deciding how society should function be more objective? Should there even be objectivity in politics?

As any politician will know, even the simplest decisions are highly complex when people are involved. People function on the highly "unscientific" system known as emotion. Emotions, not rationality, are the driving force behind decisions and perspectives. Politicians currently exist because there is a need to relate to people in an emotional way – but why should emotion play a part in how society is run?

If a car breaks down, the mechanics don't take an emotion-fuelled vote on how to fix it. They look at the problem and figure out how to solve it in a technical way. People differ from cars of course, their needs are based on complex emotions.

But here is the fundamental question. Would it not, at some point, be theoretically possible to deal with emotions with the same scientific objectivity as we would deal with any other technical problem?

We really need to ask ourselves if emotions should play a part in decisions on how society is run, or whether politics really should be objective.

If politics is supposed to be objective, it should be subject to scientific method and technical processes. Decisions could be irrelevant, with AI working to find adequate solutions to a problem rather than making a choice based on which side of the debate it wants to align itself with.

Of course, relying on such an impersonal way of making decisions could prove inhumane, perhaps even dangerous. Yet the current system will never reach any kind of equilibrium as long as there is change.

There will be issues at first of course. Computers are far from perfect. But by addressing the reasons for this, a much better environment can be fostered where automation can be more reliable, both from a physical and a decision making point of view.

Only when we are building automation and decision making computers on this basis can we be well placed enough to judge the real ability and adequacy of machine based political systems.