Monday, 26 July 2010

Can We Restrain AI?

One of the main challenges in creating a greater-than-human Artificial Intelligence is ensuring that it's not evil. When we "turn it on", we don't want it to wipe us out or enslave us. Ideally, we want it to be nice.

The problem is how we can guarantee this.

Trap it

Some have suggested limiting the Artificial Intelligence by "trapping it" in a virtual world, where it could do no damage outside the confines of this environment. While this might be a safe solution, it would limit the AI to functioning only within the limits and reality of that virtual world. Granted, we might be able to program a perfectly realistic and isolated virtual world, but would this actually happen? Is there a parallel project to create such a "virtual prison" alongside AI research? And what if AI were to evolve or emerge from existing systems (such as the internet, or a selection of systems within it) before we could develop such a prison?

Then of course there is the possibility of it escaping. Certainly if it exceeded our own intelligence it might be able to figure out a way out of its "box".

It's worth noting at this point Nick Bostrom's speculation that *we* may be living in such an environment. This raises the question: what if the only way to develop greater-than-human intelligence was to create an entire society of AIs, which develop their intelligence only as part of this society? This would significantly increase the processing power required for such a virtual prison.

Still, as we will see, trapping an AI is perhaps the best solution for restraining it for our protection.

Give it Empathy

Many argue that the answer is simple: just ensure that the AI has empathy. Not only is this idea fundamentally flawed in many ways, but I also don't believe it is anywhere near that simple.

The idea is that by allowing an AI to be aware of its own mortality, it could then understand the pain it could cause to others and be more caring and considerate of our needs. So, just like humans, it would be caring because it could understand how people felt... Do you see the first problem there?

Humans are products of their environments, shaped by their experiences. They develop empathy, but empathy is complex and can be overridden by other factors. We are complex creatures, influenced by emotions, experiences, our body chemistry, our environment and countless other things. One would assume that for an AI to be "intelligent", it would need to be just as complex.

Even if an AI had an unbreakable measure of empathy for us, this would not guarantee our safety. An AI could decide that it is in our best interests to suffer an extraordinary amount of pain, for example as a learning experience. What if it decided to be extremely "cruel to be kind"?

It's unlikely empathy would be enough to protect us, because empathy still depends on the AI making the right decisions. Humans make bad decisions all the time. Often we even make good decisions that have bad consequences.

Suppose we want to save a bus full of children, but have to run over a deer to do so. Most people would choose to save the children. To an AI with a busload of other AIs, we could be the deer. It might be upset about hitting us, but it would have been for a "greater good".

This brings us to the next possibility.

Give it Ethics

The problem with ethics is that there are no right or wrong answers. We all develop our own very personalised sense of ethics, which can easily be incompatible with someone else's. An AI's own ethics could certainly become incompatible with our interests. One likely scenario would be one in which it saw itself as more creative and able to create more value than humans, and therefore worth more than us.

Then we need to consider what kind of ethics an AI could be created with. Would it be that decided by its creator? If one were to "program in" a certain set of ethics, would an AI keep them, or evolve away from them, developing its own ethics based on its own experiences and interactions? This demonstrates the main problem with trying to program limitations into an AI. If it could break its own programming, how could we guarantee our safety? If it could not, could it really be classed as "intelligent"?

This makes one wonder if we have been "programmed" with any limitations to protect our "creator", should one exist...

Artificially Real

It seems that much of the focus in developing AI is introspective, concentrating on the inner workings of thought and intelligence. However, the effects of environment, experiences, social interaction, the passage of time, emotion, physical feelings and personal form are all fundamental factors in our own development. It's very possible that these factors are in fact essential for the development of intelligence in the first place. If this is the case, any degree of limitation would be undermined by external influences. How can we balance restraint while still achieving 'real' intelligence?

One thing is for certain - we need to fully understand child development and the influence of external factors if we are to understand intelligence enough to re-create it. Only then can we know if any kind of limitation is possible.

Friday, 9 July 2010

No Going Back

"I've lost everything, my business, my property and to top it all off my lass of six years has gone off with someone else."

Raoul Thomas Moat shoots policeman after gunning down ex-lover's boyfriend

The concept of perpetual association, the "permanent record", causes despair in people's lives every day, although we don't hear about it unless they decide to make sure we hear about it.

How can we blame people for going psycho when a criminal record stands in the way of their entire future, giving them nothing left to live for?

It's time to acknowledge and address the implications of Actuarial Escape Velocity with respect to crime and punishment. With infinite lifespans, ruining people's lives will not only carry much greater significance, it will also not be in the interests of society. Who wants their infinite lifespan cut short by a crazy gunman?

There seems to be this incredibly misguided notion that all criminals are evil, that they're born evil, and that they will always be evil. Quite apart from the fact that most crimes are not dangerous or violent, and are classed as crimes only in an attempt to prevent further such "crimes", how can we say that people won't change for the better? Furthermore, how can we say that people who have never committed a crime never will?

Strangely enough, our current methods of locking people up with criminals to stop them being criminals aren't really working. Even taking the possibility of indefinite lifespans out of the equation, this insanity needs to be addressed. However, whatever the punishment, preventative measures, or rehabilitation methods, the question remains - how do we deal with the past when the future is infinite?

The problem is complex. Can someone who has committed cold blooded murder ever be trusted again? What if hundreds or even thousands of years have passed? Who knows what this trauma can do to a person, even in the extreme long term. Does it even matter, if they have been rehabilitated?

Then we throw another issue into the mix. Identity is something that even now is losing its meaning. When we can change our faces and official records, it's one thing. When we can change our entire bodies, it will be something else. When we can change our actual minds, our thoughts, memories, personalities, and emotions, then things will get considerably more complicated.

When life extension becomes a reality, we will have many questions to ask. One question that will become increasingly important is: "How important is the past?" We'll all have one, but can enough time turn us into someone else?

Wednesday, 23 June 2010

Why Are Dreams So Strange?

The mind is a curious thing. We believe the perceptions from our awakened state to be the ultimate reality - the be-all and end-all of our existence. The way things work in our day-to-day world, our interactions, our actions, their implications, and our understandings, appear to be the true representations of reality.

So when we sleep, why do we not question the random strangeness and non-realities that permeate our slumber? What causes dreams to be so bizarre compared to our waking reality?

Dreams make no sense

It's not just a sensory thing. In dreams, even concepts are twisted and stretched and mixed together. I was once in a state of half-asleep dozing and had someone write down what I was saying - and it was complete nonsense, quite hilarious, with absolutely no reference to my experiences or thoughts of the day (or reality in general). Dreams seem to be a regurgitation of our minds, absent the framework that normally holds them together. This made me wonder: did humans dream before we became "intelligent"?

Research suggests that dreams draw on implicit memories via the neocortex rather than declarative memories from the hippocampus. In other words, dreams are formed from abstract concepts rather than definitive memories of situations. This would explain why these concepts often don't fit the reality we are used to, yet still have some grounding in it.

Consciousness within the dream world

With lucid dreaming, we can train our minds to recognise the strange differences between being awake and being asleep, and then engage our consciousness while remaining in the strangeness of the dream. This shows that consciousness is not necessarily a factor in the "falseness" of dreams, as many people can even control the reality of their dreaming state.

If consciousness can control our reality in our dreams, then how are our dreams any different from our awakened state? How do we know which reality (if any) is the right one?

It seems it is our perceptions, rather than our environments, that define our reality.

Thursday, 25 March 2010

Systems of Complexity

If we were to replace our bodies, one atom at a time, would we be the same person? One would think so. It is often estimated that most of the cells in our body are replaced over roughly 7 to 10 years, with bone among the slowest tissues to renew – although some cells, such as most of our neurons, are never replaced at all.

Our bodies are an ecosystem not unlike any other. Take the sea – remove and replace it one atom at a time and no fish will notice. Replacing larger pieces will cause problems for its inhabitants, but it will soon renew itself. Replace a large proportion, and this will likely have huge implications for the entire ocean. So it is with humans: replacing one small section at a time would be easily accounted for and would not have any dramatic effect on the system as a whole.

This is a dramatic realisation – for what are we if not our bodies?

We are not single entities. We are systems, and we are made from smaller systems, which in turn are made from smaller systems. Cells take in matter from our food and convert these molecules to be part of us, replacing extinguished cells. We are not the matter that constitutes our body – we are its collection of systems.

The brain, the place that for some reason is believed to house the “mind”, is almost certainly more than just a material structure. As yet we have failed to deconstruct it to any significant level, but we do know that its functionality relies to some extent on electrical configurations. However, the brain is not some simple electrical circuit which can be reverse engineered by simply following current paths and measuring voltages.

Brain operations are less logical, hiding their true functionality in the encoding of patterns. It's these patterns that are a more accurate reflection of who we are.

In fractals, an equation determines a configuration that is then iterated. This type of pattern, known as self-similarity, is an underlying mechanism of nature. Using fractal equations, we can estimate how many leaves a tree will grow and how much carbon dioxide it will absorb. Fractals are, in a sense, the "DNA" of reality.
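To make "an equation that is iterated" concrete, here is a minimal sketch (the choice of Python, and of the Mandelbrot set as the example, are mine, purely for illustration). A point c belongs to the Mandelbrot set if repeatedly applying z → z² + c, starting from zero, never escapes to infinity, and the boundary of that set is self-similar: zoom in anywhere and the same kind of structure reappears.

    # A tiny illustration of self-similarity from an iterated equation:
    # a point c is in the Mandelbrot set if z = z*z + c, iterated from
    # z = 0, never escapes beyond |z| = 2.
    def mandelbrot_iterations(c, max_iter=100):
        """Return the number of iterations before z escapes, or max_iter."""
        z = 0j
        for i in range(max_iter):
            z = z * z + c
            if abs(z) > 2:  # once |z| exceeds 2, the orbit is guaranteed to diverge
                return i
        return max_iter

    # Coarse ASCII rendering of the set; the intricate boundary is where
    # the self-similar detail lives.
    for y in range(21):
        row = ""
        for x in range(61):
            c = complex(-2.0 + x * 0.05, -1.0 + y * 0.1)
            row += "#" if mandelbrot_iterations(c) == 100 else " "
        print(row)

Models of plant growth work in the same spirit, iterating simple branching rules (for example, L-systems) to predict the overall shape of a tree.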

We are all connected

We should also remember that while we are made up of systems, we ourselves are composite parts of a larger system, the ecosystem of the universe. While we may not feel that we’re “connected” with the Earth or the Sun or the Andromeda Galaxy because we see no physical connection, we are connected in a scientific and logical way.

Atoms are surrounded by clouds of negatively charged electrons, so when two atoms approach one another their electron clouds repel – in other words, you never actually touch anything. Your body is not even in direct contact with itself; it is held together by electromagnetic forces. In that sense, we are all just as connected to the entire planet – and to each other – as we are to our own arms.

Through the vacuum of space, these electromagnetic forces fade, and gravity takes over to keep us connected to the rest of the universe. And every day we are learning more about how the universe is constructed, discovering phenomena such as dark matter that continue to reinforce our connected nature.
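As a rough, back-of-the-envelope illustration of why gravity "takes over" (standard textbook formulas, added here purely by way of example): both the electric and gravitational forces between two particles fall off with the square of the distance, but their strengths are wildly different.

    F_{\text{electric}} = k_e \frac{q_1 q_2}{r^2}, \qquad
    F_{\text{gravity}} = G \frac{m_1 m_2}{r^2}, \qquad
    \frac{F_{\text{electric}}}{F_{\text{gravity}}}\bigg|_{\text{two protons}}
      = \frac{k_e e^2}{G m_p^2} \approx 1.2 \times 10^{36}

Between two protons the electric force is overwhelmingly stronger, yet gravity dominates on astronomical scales because bulk matter is electrically neutral: attractions and repulsions largely cancel out, while gravity only ever adds up.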

As well as our scientific connection, from a logical perspective, we are also as much a part of the universe as it is of us. Our actions affect the universe around us, and we enjoy the benefits or suffer the consequences of these actions accordingly. We rely on our surroundings to survive. The only thing holding these implications away from us is time. While we may not see the implications of our actions personally, they echo into the universe, which we are part of. Karma, in essence, is real.

Individuality

So we could end it here on “we are all one”, but if that were the case, why do we all have minds that “feel” like they are separate? Is it an evolutionary accident or is there some divine purpose to our individual consciousnesses?

Perhaps individuality is a deliberate outcome of evolution, a mechanism to bring about the most efficient thought system possible? There is no doubt that humans have the ability to take over from evolution now, increasing the “power” of our consciousness, our life spans, and the efficiency of our resource usage to drive our own destinies.

Following this train of thought, we could provoke more questions than answers. Is consciousness determined by individuality? Could there be alien species that evolved without any concept of individuality? That would depend on which offered the greater evolutionary advantage.

The big question is: could this individuality be an illusion, created in our own minds? We are, after all, not one entity, but a collection of systems and a system within a larger system.

This raises the question: is a brain a prerequisite for consciousness, or could consciousness evolve from any collection of systems complex enough – for example, artificial software, a complex cell, or even a star? We are, after all, just different versions of the same kind of fractal patterns that make up all of nature.

What if we are just the dreams of stars?

Further reading

http://library.thinkquest.org/26242/full/ap/ap15.html
http://en.wikipedia.org/wiki/Fractal
http://commons.wikimedia.org/wiki/File:Messier51_sRGB.jpg

Monday, 22 March 2010

Is Google too Big? Size isn't important, it's what you do with it that counts

There's no doubt that Google is the "Ford" of the day, pioneering a new industry which is changing our lives on a fundamental level. With this in mind, it was only a matter of time before this monopolistic driving of our destinies was called into question.

I recently attended a debate held by Spiked which asked the question "Has Google got too big?"

As a debate, it was relatively tame, given that no one was strongly on the side of either "yes" or "no". This was due mainly to the complexity of the question, so as a discussion it became rather in-depth.

Size Doesn't Matter

Proponents of Google tried to sidestep the argument, pointing out that the adjective "big" was irrelevant, that size in itself had no implications, and that we should instead be asking whether the company is "good" or "evil". While this is a fair point, there's no doubt that Google's size is intimately connected to its "morality".

Legally in the European Union, competition law states that a dominant company must "compete on its merits". Now, the law here is pretty complex, but from my understanding this should forbid a dominant company from using its position in one market to give itself an unfair advantage in another; as a Google example, manipulating search results to rank its own services higher. If anything, this would be far too obvious, and one would expect Google to use much more sophisticated and subtle techniques to manipulate its advantage, if it wanted to.

With Google having its fingers in so many online pies, it has a kind of power that is not directly proportional to its size. The possibilities presented by the data it collects combined with the sway it has in the public arena are largely unexplored. Really, its size is irrelevant. It could fire all but a few of its employees tomorrow and still be capable of some truly world-changing actions.

Knowledge is Power

Many people are concerned about Google's supposed disregard for privacy, and while this is a legitimate worry, there is more at stake here. Rather than how it might customise the experience of individuals, we should be thinking about what it could do with this tremendous amount of data if it used it to analyse, or even manipulate, society as a whole.

With such an extensive array of demographic data, Google could easily draw some fairly accurate pictures, or even predictions, of society. This may have unforeseen implications. Analysis of such an immense cross-section of the population - its habits, its fears, its preferences, its cultures, its desires, its beliefs, its strengths, its weaknesses, and much more - could reveal far more than we might expect.

We've already seen how Google Maps and Google Street View can create controversy simply by putting information in public view. And these are single services. By analysing the relationships within the data it holds, Google could reveal unfairness, incompetence, conflicts of interest, corruption, or worse - with devastating consequences for those involved - or even put those affiliated with Google at a political advantage.

With growing data about resources and their usage, combined with complex analysis tools and a whole world's worth of technical data and reports, Google could easily obtain scientific advantages.

Worst of all is its potential political influence. Even now, before its entry into the world of media, Google could use its knowledge of current events and news to undermine government authority and manipulate opinions, actions, and even votes, making the political spin of the movie Wag the Dog seem like child's play.

Innovation

There was much discussion at the debate about Google's level of innovation: how it is intimidating because there is so little innovation elsewhere, and how Google knocked Microsoft off its perch through innovation rather than anti-competition legislation. On this view, it should not be down to laws to challenge Google's dominance; instead, it would be better to see more newcomers challenge it (and the laws are largely ineffective anyway).

I would have to agree with this. It's not outside the realms of possibility that another company could come along and take Google down a peg or two. Let's not forget Facebook, the number-two site in the world, or the fact that in some areas of the world Google is barely used.

Google should be praised for its innovation and not penalised for it with petty, jealous rules which stunt development and progress. Yet at the same time, we should be extremely wary of the power it can bestow on itself with such innovation.

It seems Google finds itself in this unique position because of innovation and ingenuity. Power no longer belongs only to those who take it by the cruder means of violence, guilt, social manipulation or political popularity. About time, too.

"The empires of the future are the empires of the mind" - Albert Einstein