Monday, 26 July 2010

Can We Restrain AI?

One of the main challenges in creating a greater-than-human Artificial Intelligence is ensuring that it's not evil. When we "turn it on", we don't want it to wipe us out or enslave us. Ideally, we want it to be nice.

The problem is how we can guarantee this.

Trap it

Some have suggested limiting the Artificial Intelligence by "trapping it" in a virtual world, where it could do no damage outside the confines of this environment. While this might be a safe solution, it could limit the AI to functioning only within the limits and reality of the virtual world. Perhaps we could program a perfectly realistic and isolated virtual world, but would this actually happen? Is there a parallel project to create such a "virtual prison" alongside AI research? And what if AI were to evolve or emerge from existing systems (such as the internet or a selection of systems within it) before we could develop such a prison?

Then of course there is the possibility of it escaping. Certainly if it exceeded our own intelligence it might be able to figure out a way out of its "box".

It's worth noting at this point Nick Bostrom's speculation that *we* may be living in such an environment. This raises the question: What if the only way to develop greater-than-human intelligence was to create an entire society of AIs, which develop their intelligence only as part of this society? This would significantly increase the processing power required for such a virtual prison.

Still, as we will see, trapping an AI is perhaps the best solution for restraining it for our protection.

Give it Empathy

Many argue that the answer is simple: Just ensure that the AI has empathy. Not only is this idea fundamentally flawed in many ways, but I don't believe it is anywhere near simple.

The idea is that by allowing an AI to be aware of its own mortality, it could then understand the pain it could cause to others and be more caring and considerate of our needs. So just like humans, it would be caring because it could understand how people felt... You see the first problem there?

Humans are products of their environments, shaped by their experiences. They develop empathy, but empathy is complex and can be overridden by other factors. We are complex creatures, influenced by emotions, experiences, our body chemistry, our environment and countless other things. One would assume that for an AI to be "intelligent", it would need to be just as complex.

Even if an AI had an unbreakable measure of empathy for us, this would not guarantee our safety. An AI could decide that it is in our best interests to suffer an extraordinary amount of pain, for example as a learning experience. What if it decided to be extremely "cruel to be kind"?

It's unlikely empathy would be enough to protect us, because empathy still depends on the AI making the right decisions. Humans make bad decisions all the time. Often we even make good decisions that have bad consequences.

Suppose we could save a bus full of children, but only by running over a deer. Most people would choose to save the children. To an AI with a busload of other AIs, we could be the deer. It might be upset about hitting us, but it would have been for a "greater good".

This brings us to the next possibility.

Give it Ethics

The problem with ethics is that there are no right or wrong answers. We all develop our own very personalised sense of ethics, which can easily be incompatible with someone else's. An AI's own ethics could certainly become incompatible with our interests. One likely scenario would be where it saw itself as more creative with the ability to create more value than humans, and therefore worth more than us.

Then we need to consider what kind of ethics an AI could be created with. Would it be that decided by its creator? If one were to "program in" a certain set of ethics, would an AI keep these, or evolve away from them, developing its own ethics based on its own experiences and interactions? This demonstrates the main problem with trying to program limitations into an AI. If it could break its own programming, how could we guarantee our safety? If it could not, could it really be classed as "intelligent"?

This makes one wonder if we have been "programmed" with any limitations to protect our "creator", should one exist...

Artificially Real

It seems that much of the focus in developing AI is introspective, focusing on the inner workings of thought and intelligence. However, the effects of environment, experiences, social interaction, the passage of time, emotion, physical feelings and personal form are all fundamental factors in our own development. It's very possible that these factors are in fact essential for the development of intelligence in the first place. If this is the case, any degree of limitation would be undermined by external influences. How can we balance restraint while still achieving 'real' intelligence?

One thing is for certain - we need to fully understand child development and the influence of external factors if we are to understand intelligence enough to re-create it. Only then can we know if any kind of limitation is possible.

Friday, 9 July 2010

No Going Back

"I've lost everything, my business, my property and to top it all off my lass of six years has gone off with someone else."

"Raoul Thomas Moat shoots policeman, gunning for ex-lover's boyfriend"

The concept of perpetual association, the "permanent record", causes despair in people's lives every day, although we don't hear about it unless they decide to make sure we hear about it.

How can we blame people for going psycho when a criminal record stands in the way of their entire future, giving them nothing left to live for?

Don't Follow the Rabbit

It's time to acknowledge and address the implications of Actuarial Escape Velocity with respect to crime and punishment. With indefinite lifespans, ruining people's lives will not only carry much greater significance, it will not be in the interests of society. Who wants their indefinite lifespan cut short by a crazy gunman?

There seems to be this incredibly misguided notion that all criminals are evil, that they're born evil, and that they will always be evil. Quite apart from the fact that most crimes are not dangerous or violent and exist only as an attempt to prevent further such "crimes", how can we say that people won't change for the better? Furthermore, how can we say that people who have never committed a crime will never do so?

Strangely enough, our current method of locking people up with criminals to stop them being criminals isn't really working. Even taking the possibility of indefinite lifespans out of the equation, this insanity needs to be addressed. However, whatever the punishment, preventative measures, or rehabilitation methods, the question remains: how do we deal with the past when the future is infinite?

The problem is complex. Can someone who has committed cold-blooded murder ever be trusted again? What if hundreds or even thousands of years have passed? Who knows what such trauma can do to a person, even in the extreme long term? Does it even matter, if they have been rehabilitated?

Then we throw another issue into the mix. Identity is something that even now is losing its meaning. When we can change our faces and official records, it's one thing. When we can change our entire bodies, it will be something else. When we can change our actual minds, our thoughts, memories, personalities, and emotions, then things will get considerably more complicated.

When life extension becomes a reality, we will have many questions to ask. One question that will become increasingly important is: "How important is the past?" We'll all have one, but can enough time turn us into someone else?

Tuesday, 6 July 2010

Reverend Meets His Maker (Series 1)

A video series I made about a Reverend who dies and goes to Virtual Heaven.