Could Artificial Intelligence Development Hit a Dead End?
Kurzweil and his followers seem unshakable in their belief that, at some point, Advanced Artificial General Intelligence (call it machine sentience or human-built consciousness if you prefer) will happen. Much of this conviction rests on the view that consciousness is an engineering problem, and that, however complex, it will eventually be solved.
In this post I don't want to debate whether consciousness can ever be understood; that is a topic for another time. What we do need to be aware of is the possibility that our efforts to create Artificial Intelligence will stall.
Whatever happened to...Unified Field Theory?
It sometimes seems that the more we learn about a subject, the more cans of worms we open, and the harder the subject becomes. Factors can present themselves that we never expected to be relevant to our understanding.
Despite nearly a century of research and theorizing, UFT remains an open problem. Other scientific theories have resisted complete understanding for so long that people are losing faith in them and no longer pursuing them at all.
Whatever happened to...The Space Race?
Some problems are simply so expensive that they sit beyond our reach. While no cost barrier is likely to hold forever, expense could still have a serious, if temporary, effect on Artificial Intelligence development. Exponentially increasing computing power and other technology should stop this being a problem for too long, but who knows what financial, computational, and human-resource demands we will face as AI development continues.
Whatever happened to...Nuclear Power?
Some ideas simply lose social credibility and are no longer pursued. If we create an AI that is limited in some way, yet displays a level of danger we could not cope with were those limitations removed, development would most likely have to stop, whether through government intervention or simple social pressure.
*
I think it's unlikely that progress on anything can be stopped indefinitely, because that would require definite failure by every civilisation that ever attempts it. Anyone familiar with the Fermi Paradox and the "all species are destined to wipe themselves out" resolution will recognise the shape of this argument: a 100% failure rate is statistically implausible in the long run when it depends on a certain action never being performed by anyone.
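To make that intuition concrete, here is a minimal back-of-the-envelope sketch, assuming (my assumption, not a formal claim) that each civilisation's attempt is independent with the same failure probability $p$:

$$P(\text{all } N \text{ civilisations fail}) = p^N \to 0 \quad \text{as } N \to \infty, \text{ for any } p < 1.$$

Even if each attempt had only a 1% chance of success ($p = 0.99$), the odds that the first thousand civilisations all fail are $0.99^{1000} \approx 0.004\%$. Permanent, universal failure requires $p = 1$ exactly.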
However, it is quite likely that our progress will be stymied at some point. Even given the accelerating nature of technology, this could cause a long period of stagnation.
We should try to stay positive, of course, but it would be naive to ignore the chance that, for some time at least, we might fail.
*
I'm currently attending the Singularity Summit AU in Melbourne. There were a couple of talks on Tuesday night, and a whole weekend of fun starts on Friday night. :) So expect a few posts over the coming days inspired by my conversations with other future-minded thinkers!
image by rachywhoo