Wednesday, 8 September 2010

Could Artificial Intelligence Development Hit a Dead End?



Kurzweil and his supporters seem unshakable in their belief that at some point, Advanced Artificial General Intelligence, Machine Sentience, or Human-Built Consciousness, whatever you would like to call it, will happen. Much of this belief comes from the view that consciousness is an engineering problem, and that it will, at some point, regardless of its complexity, be solved.

In this post, I don't really want to discuss whether or not consciousness can be understood; that's something for another time. What we need to be aware of is the possibility of our endeavours to create Artificial Intelligence stalling.

Whatever happened to...Unified Field Theory?

It sometimes seems that the more we learn about a subject, the more cans of worms we open, and the harder it becomes. Sometimes factors present themselves that we would never have expected to be relevant to our understanding.

Despite nearly a century of research and theorising, Unified Field Theory remains an open line of research. There are other scientific theories we have failed to fully understand, some pursued for so long that people are losing faith in them and no longer pursuing them at all.

Whatever happened to...The Space Race?

Some problems are simply so expensive that they are beyond our reach. While this is unlikely to remain true forever, it could still seriously stall Artificial Intelligence development. Exponentially increasing computing power and other technology should stop this being a problem for too long, but who knows what financial, computing, and human-resource demands we will face as AI development continues.

Whatever happened to...Nuclear Power?

Some ideas simply lose social credibility and are no longer pursued. If we create an AI that is limited in some way, yet displays a level of danger we could not cope with were those limitations removed, development will most likely have to stop, whether through government intervention or simply social pressure.

*

I think it's unlikely that the progress of anything can be stopped indefinitely. That would require definite failure by every civilisation that ever attempts it. Anyone familiar with the Fermi Paradox and the "all species are destined to wipe themselves out" theory will recognise this concept. When success depends only on a certain action eventually being performed, a 100% failure rate is just not statistically plausible forever.

However, it is certainly likely that our progress will be stymied at some point. Even with the accelerating nature of technology, this could cause an untold level of stagnation.

We should try to stay positive, of course, but it would be naive to ignore the chance that, for some time at least, we might fail.

*

I'm currently attending the Singularity Summit AU in Melbourne. There were a couple of talks on Tuesday night and there will be a whole weekend of fun starting on Friday night. :) Therefore you can expect a few posts to be inspired from my conversations with other future-minded thinkers over the coming days!


image by rachywhoo


Monday, 26 July 2010

Can We Restrain AI?

One of the main challenges in creating a greater-than-human Artificial Intelligence is ensuring that it's not evil. When we "turn it on", we don't want it to wipe us out or enslave us. Ideally, we want it to be nice.

The problem is how we can guarantee this.

Trap it

Some have suggested limiting the Artificial Intelligence by "trapping it" in a virtual world, where it could do no damage outside the confines of that environment. While this might be a safe solution, it could limit the AI to functioning only within the limits and reality of the virtual world. We might be able to program a perfectly realistic and isolated virtual world, but would we? Is there a parallel project to create such a "virtual prison" alongside AI research? And what if AI were to evolve or emerge from existing systems (such as the internet, or a selection of systems within it) before we could develop such a prison?

Then of course there is the possibility of it escaping. Certainly if it exceeded our own intelligence it might be able to figure out a way out of its "box".

It's worth noting at this point Nick Bostrom's speculation that *we* may be living in such an environment. This raises the question: what if the only way to develop greater-than-human intelligence was to create an entire society of AIs, which develop their intelligence only as part of this society? This would significantly increase the processing power required for such a virtual prison.

Still, as we will see, trapping an AI is perhaps the best solution for restraining it for our protection.

Give it Empathy

Many argue that the answer is simple: Just ensure that the AI has empathy. Not only is this idea fundamentally flawed in many ways, I don't believe that it is anywhere near simple.

The idea is that an AI aware of its own mortality could understand the pain it might cause to others, and so be more caring and considerate of our needs. So just like humans, it would be caring because it could understand how people felt... You see the first problem there?

Humans are products of their environments, shaped by their experiences. We develop empathy, but empathy is complex and can be overridden by other factors. We are complicated creatures, influenced by emotions, experiences, body chemistry, environment and countless other things. One would assume that for an AI to be "intelligent", it would be just as complex.

Even if an AI had an unbreakable measure of empathy for us, this would not guarantee our safety. An AI could decide that it is in our best interests to suffer an extraordinary amount of pain, for example as a learning experience. What if it decided to be extremely "cruel to be kind"?

It's unlikely empathy would be enough to protect us, because empathy still depends on the AI making the right decisions. Humans make bad decisions all the time. Often we even make good decisions that have bad consequences.

Suppose we had to run over a deer to save a bus full of children. Most people would choose to save the children. To an AI with a busload of other AIs, we could be the deer. It might be upset about hitting us, but it would have been for a "greater good".

This brings us to the next possibility.

Give it Ethics

The problem with ethics is that there are no universally right or wrong answers. We all develop our own very personalised sense of ethics, which can easily be incompatible with someone else's. An AI's own ethics could certainly become incompatible with our interests. One likely scenario would be where it saw itself as more creative, with the ability to create more value than humans, and therefore worth more than us.

Then we need to consider what kind of ethics an AI could be created with. Would it be that decided by its creator? If one was to "program in" a certain set of ethics, would an AI keep these, or evolve away from them, developing its own ethics based on its own experiences and interactions? This demonstrates the main problem with trying to program limitations into an AI. If it could break its own programming, how could we guarantee our safety? If it could not, could it really be classed as "intelligent"?

This makes one wonder if we have been "programmed" with any limitations to protect our "creator", should one exist...

Artificially Real

It seems that much of the focus in developing AI is introspective, focusing on the inner workings of thought and intelligence. However, the effects of environment, experiences, social interaction, the passage of time, emotion, physical feelings and personal form are all fundamental factors in our own development. It's very possible that these factors are in fact essential for the development of intelligence in the first place. If this is the case, any degree of limitation would be undermined by external influences. How can we balance restraint while still achieving 'real' intelligence?

One thing is for certain - we need to fully understand child development and the influence of external factors if we are to understand intelligence enough to re-create it. Only then can we know if any kind of limitation is possible.

Friday, 9 July 2010

No Going Back

"I've lost everything, my business, my property and to top it all off my lass of six years has gone off with someone else."

(News report: Raoul Thomas Moat shoots policeman after gunning ex-lover's boyfriend)

The concept of perpetual association, the "permanent record", causes despair in people's lives every day, although we don't hear about it unless they decide to make sure we hear about it.

How can we blame people for going psycho when a criminal record stands in the way of their entire future, giving them nothing left to live for?

It's time to acknowledge and address the implications of Actuarial Escape Velocity with respect to crime and punishment. For, with infinite lifespans, ruining people's lives will not only have much greater significance; it will also not be in the interests of society. Who wants their infinite lifespan cut short by a crazy gunman?

There seems to be this incredibly misguided notion that all criminals are evil, that they were born evil, and that they will always be evil. Quite apart from the fact that most crimes are not dangerous or violent, and are criminalised only as an attempt to prevent further such "crimes", how can we say that people won't change for the better? Furthermore, how can we say that people who have never committed a crime never will?

Strangely enough, our current method of locking people up with criminals to stop them being criminals isn't really working. Even taking the possibility of indefinite lifespans out of the equation, this insanity needs to be addressed. However, whatever the punishment, preventative measures, or rehabilitation methods, the question remains: how do we deal with the past when the future is infinite?

The problem is complex. Can someone who has committed cold blooded murder ever be trusted again? What if hundreds or even thousands of years have passed? Who knows what this trauma can do to a person, even in the extreme long term. Does it even matter, if they have been rehabilitated?

Then we throw another issue into the mix. Identity is something that even now is losing its meaning. When we can change our faces and official records, it's one thing. When we can change our entire bodies, it will be something else. When we can change our actual minds, our thoughts, memories, personalities, and emotions, then things will get considerably more complicated.

When life extension becomes a reality, we will have many questions to ask. One question that will become increasingly important is: "How important is the past?" We'll all have one, but can enough time turn us into someone else?

Wednesday, 23 June 2010

Why Are Dreams So Strange?

The mind is a curious thing. We believe the perceptions of our waking state to be the ultimate reality, the be-all and end-all of our existence. The way things work in our day-to-day world, our interactions, our actions, their implications, and our understandings, appear to be the true representations of reality.

So when we sleep, why do we not question the random strangeness and non-realities that pervade our slumber? What causes dreams to be so bizarre compared to our waking reality?

Dreams make no sense

It's not just a sensory thing. In dreams, even concepts are twisted and stretched and mixed together. I was once in a half-asleep doze and had someone write down what I was saying, and it was complete nonsense: quite hilarious, with absolutely no reference to my experiences or thoughts of the day (or reality in general). Dreams seem to be a regurgitation of our minds, absent the framework that normally holds everything together. This made me wonder: did humans dream before we became "intelligent"?

Research suggests that dreams draw on implicit memories and the neocortex rather than declarative memories from the hippocampus. In other words, dreams are formed from abstract concepts rather than definitive memories of situations. This would explain why these concepts often don't fit the reality we are used to, yet still have some grounding in it.

Consciousness within the dream world

With lucid dreaming, we can train our minds to recognise the strange differences between being awake and asleep, and then engage our consciousness while remaining in the strangeness of the dream. This shows that consciousness is not necessarily a factor in the "falseness" of dreams; many people can even control the reality of their dreaming state.

If consciousness can control our reality in our dreams, then how are our dreams any different from our awakened state? How do we know which reality (if any) is the right one?

It seems it is our perceptions, rather than our environments, that define our reality.

Thursday, 25 March 2010

Systems of Complexity



If we were to replace our bodies, one atom at a time, would we be the same person? One would think so. Over about ten years, nearly every cell in our body will have been replaced at least once, with bone marrow among the slowest to renew. Most of the body renews within about seven years.

Our bodies are an ecosystem not unlike any other. Take the sea – remove and replace it one atom at a time and no fish will notice. Replacing larger pieces will cause problems for its inhabitants, but it will soon renew itself. Replace a large proportion, and this will likely have huge implications for the entire ocean. As it is with humans, replacing one small section at a time would be easily accounted for and would not have any dramatic effect on the system as a whole.

This is a dramatic realisation – for what are we if not our bodies?

We are not single entities. We are systems, and we are made from smaller systems, which in turn are made from smaller systems. Cells take in matter from our food and convert these molecules to be part of us, replacing extinguished cells. We are not the matter that constitutes our body – we are its collection of systems.

The brain, the place that for some reason is believed to house the “mind”, is almost certainly more than just a material structure. As yet we have failed to deconstruct it to any significant level, but we do know that its functionality relies to some extent on electrical configurations. However, the brain is not some simple electrical circuit which can be reverse engineered by simply following current paths and measuring voltages.

Brain operations are less logical, hiding their true functionality in the encoding of patterns. It's these patterns that are a more accurate reflection of who we are.

In fractals, an equation determines a configuration that is then iterated. This kind of pattern, known as self-similarity, is an underlying mechanism of nature. Using fractal equations, we can now estimate how many leaves a tree will grow and how much carbon dioxide it will absorb. It is the "DNA" of reality.
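To make the iteration idea concrete, here is a minimal sketch (my own illustration, not from the tree-modelling research mentioned above) using the classic Koch curve: each iteration replaces every line segment with four segments at one-third the length, and that single rule, repeated, generates the whole structure.

```python
import math

def koch_length(iterations, base_length=1.0):
    """Total length of the Koch curve after n iterations.

    Each iteration swaps every segment for 4 segments, each 1/3 as long,
    so the total length grows by a factor of 4/3 per iteration.
    """
    return base_length * (4.0 / 3.0) ** iterations

def similarity_dimension(copies, scale):
    """Self-similarity dimension: log(copies) / log(1 / scale)."""
    return math.log(copies) / math.log(1.0 / scale)

# 4 copies at 1/3 scale gives a dimension of about 1.26:
# more than a line (1), less than a plane (2).
print(koch_length(3))                  # (4/3)^3 = 64/27
print(similarity_dimension(4, 1 / 3))  # ~1.2619
```

The same principle of a simple rule iterated at every scale is what lets fractal models predict the branching (and hence leaf count) of a tree from a compact description.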

We are all connected

We should also remember that while we are made up of systems, we ourselves are composite parts of a larger system, the ecosystem of the universe. While we may not feel that we’re “connected” with the Earth or the Sun or the Andromeda Galaxy because we see no physical connection, we are connected in a scientific and logical way.

All atoms are surrounded by orbiting electrons, which are negatively charged, meaning that the electron clouds of any two atoms repel each other; in other words, you never actually touch anything. So your body is not even connected to itself, and yet it is, albeit by electromagnetic forces. Therefore, we are all just as connected to the entire planet, and each other, as we are to our own arms.
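A rough back-of-the-envelope sketch (my own illustration, not from the original post) shows why this repulsion prevents contact: by Coulomb's law, the force between two like charges grows with the inverse square of their separation, so it climbs steeply as electrons approach each other.

```python
COULOMB_K = 8.9875e9         # Coulomb constant, N*m^2/C^2
ELECTRON_CHARGE = 1.602e-19  # elementary charge, C

def coulomb_force(q1, q2, r):
    """Magnitude of the electrostatic force between two point charges, in newtons."""
    return COULOMB_K * abs(q1 * q2) / r ** 2

# Two electrons roughly one atomic radius apart (~1e-10 m),
# then ten times closer (~1e-11 m):
f_far = coulomb_force(ELECTRON_CHARGE, ELECTRON_CHARGE, 1e-10)
f_near = coulomb_force(ELECTRON_CHARGE, ELECTRON_CHARGE, 1e-11)
print(f_near / f_far)  # 100.0: halving the gap quadruples the force, and so on
```

Shrinking the distance by a factor of ten multiplies the repulsive force a hundredfold, which is why outer electrons always stop short of "touching".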

Through the vacuum of space, the electromagnetic forces continue, but weakly, while gravity takes over to keep us connected to the rest of the universe. And every day we learn more about how the universe is constructed, discovering phenomena such as dark matter that continue to reinforce our connected nature.

As well as our scientific connection, from a logical perspective, we are also as much a part of the universe as it is of us. Our actions affect the universe around us, and we enjoy the benefits or suffer the consequences of these actions accordingly. We rely on our surroundings to survive. The only thing holding these implications away from us is time. While we may not see the implications of our actions personally, they echo into the universe, which we are part of. Karma, in essence, is real.

Individuality

So we could end it here on “we are all one”, but if that were the case, why do we all have minds that “feel” like they are separate? Is it an evolutionary accident or is there some divine purpose to our individual consciousnesses?

Perhaps individuality is a deliberate outcome of evolution, a mechanism to bring about the most efficient thought system possible? There is no doubt that humans have the ability to take over from evolution now, increasing the “power” of our consciousness, our life spans, and the efficiency of our resource usage to drive our own destinies.

Following this train of thought, we could provoke more questions than answers. Is consciousness determined by individuality? Could there be alien species that evolved without any concept of individuality? This would depend on what offered the best evolutionary advantage.

The big question is: could this individuality be an illusion, created in our own minds? We are, after all, not one entity but a collection of systems, and a system within a larger system.

This raises the question: is a brain a prerequisite for consciousness, or could consciousness evolve from any sufficiently complex collection of systems, for example artificial software, a complex cell, or even a star? We are, after all, just different versions of the same kind of fractal patterns that make up all of nature.

What if we are just the dreams of stars?

Further reading

http://library.thinkquest.org/26242/full/ap/ap15.html
http://en.wikipedia.org/wiki/Fractal
http://commons.wikimedia.org/wiki/File:Messier51_sRGB.jpg