Tuesday, 11 January 2011

Product Longevity in a World Driven by Consumption

It should be obvious that product longevity is incompatible with capitalism as we know it. Our system relies on continuous consumption to perpetuate the workforce, grow enterprise, and maintain profits. While there may be a capitalist incentive to produce long-lasting products in some industries, the fact remains that a product breaking down just outside the warranty period is the most profitable outcome.

Constant technological advancement seems to be a licence for excessive consumption, with ongoing change justifying the buy-and-throw-away culture. Things, in general, are not designed to be upgraded; they are designed to be superseded and replaced.

How do we address this from a sustainability perspective?

It’s becoming increasingly apparent that the decoupling of monetary gain from production is imperative.

Would it be possible (profitable) for a company to start up, complete a production run of one very long-lasting product, and then move on to another, different product? Maybe, but only if the company's infrastructure was designed to allow cheap and fast transformation to a new product line. There may still be difficulties in supplying genuinely consumable products, in keeping up with fast-advancing technology, and in dealing with any products that do break down.

Fundamentally, a sustainable production model will never be preferable for a company whose priority is to grow and make a profit. However, it might be demanded as more people realise the importance of sustainability.

We must think about how to produce goods that integrate product longevity while also allowing for ongoing technological enhancement and dealing effectively with product failures.

It might then be in the interests of a sustainable community to form its own production facilities: something similar to a cooperative, but focused on sustainability rather than profit. Working outside the monetary system, such an operation could undermine and out-compete any companies working within it.

This may allow a community enterprise to run indefinitely, albeit without growth.

Such an enterprise could adopt sustainable production methods such as modular design. One example is PhoneBloks, a proposed mobile phone design in which a base unit is produced and an array of components can then be added or removed, personalising the device and making it easy to replace damaged parts. Laptops, tablets, and other handheld devices could use the same platform, and the principle extends to washing machines, fridges, gardening tools, or even cars.
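The modular idea can be sketched in code. This is a hypothetical illustration of the design principle only; the class names and methods are my own invention, not any real PhoneBloks interface:

```python
# Hypothetical sketch of modular design (not a real PhoneBloks API):
# a device is a base that accepts interchangeable components, so a failed
# or outdated part can be swapped without discarding the whole product.

class Component:
    def __init__(self, name: str, version: int):
        self.name = name
        self.version = version

class ModularDevice:
    def __init__(self):
        # One slot per component name; the base outlives its parts.
        self.slots: dict[str, Component] = {}

    def install(self, component: Component) -> None:
        # Installing a component with the same name replaces the old one;
        # the rest of the device is untouched.
        self.slots[component.name] = component

    def remove(self, name: str) -> None:
        self.slots.pop(name, None)

phone = ModularDevice()
phone.install(Component("camera", version=1))
phone.install(Component("camera", version=2))  # upgrade the part, keep the phone
print(phone.slots["camera"].version)
```

The point of the sketch is that upgrade and repair become operations on parts, not reasons to replace the whole device.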

Other goals of such an endeavour would include: reducing product duplication, reducing waste, building more robust products, and incorporating more reusable components into every design.

This model may also allow for greater input during the design process. The internet enables a more collaborative approach to design as well as production. This is already happening; it's only a matter of time before the designs are good enough for these products to take off. Then the concepts of open, sustainable, modular, and, most significantly, profitless design enter the mainstream.

How will profit-driven corporations respond?

Tuesday, 14 September 2010

How Designer Babies Highlight Society's Immaturity

The question of designer babies is usually met with disdain. You don't even have to be religious to object to the idea of customising a human before it's born. Indeed, this concept doesn't just "go against nature", it makes us question what it means to be human.

The possibility of customising an embryo with the view to having an "enhanced" child opens up a veritable test tube of questions. What are the implications of being able to set a child's intelligence, their strengths, their abilities?

Then there are the questions that really hit a nerve: "Would people choose not to have a black baby, knowing it will be subject to persecution and prejudice?" The whole issue is surrounded by frightening dilemmas.

The problem is, it's already here. We currently screen embryos for birth defects such as spina bifida, and many would argue that the prevention or removal of deficiencies is a form of enhancement.

Of course, we can try to separate prevention of the negative from implementation of the positive. Then maybe the fascists - I mean conservatives - among us could make a law preventing any form of positive enhancement - but would that be ethical? If we have the potential to give someone a 500-year lifespan, is it right to withhold that from them before they're born and can make a choice about it?

Dr Robert Sparrow makes the profound observation that a child can never reprimand its parents for not enhancing it - because doing so would have meant choosing a different embryo, and the child would never have been born at all. However, this is only true for embryo screening, so it is a rather short-term argument and, in my opinion, a moot point.

Laws are like band-aids on cancer

I frequently point out on this blog and elsewhere that laws and restrictions are not solutions under any circumstances, and this is especially true when it comes to technology and its ability to undermine and disrupt our paradigms. Attempts to control by prohibition are primitive, ineffective, and often unethical; they have unforeseen and unrelated side effects, and they are usually made for the wrong reasons. The issue of designer babies and human enhancement needs far more thought than can be provided by narrow-minded, rule-setting, waste-of-space bureaucrats.

We passed the point long ago at which lawmakers could anticipate and knowledgeably counteract the dangers arising from technological developments. Technology is enabling these society-altering options not only at a pace that can't be kept up with, but at one that can't really be understood. They change our paradigms, yet we attempt to create rules based on the old ones. Just look at the feeble attempts to control the Internet as a prime example.

If we get this right, we could have a society of healthy, intelligent, long-lived (and therefore possibly wise) superhumans. With that as the potential, how can we ever hope to keep it at bay forever?

Turning what we know on its head

When significant pre-birth human enhancement does arrive, there are still many ethical issues and implications we're going to face, and we need to be thinking about them now. For example, it probably won't take us long to acquire a disdain for anything "less than perfect". While some definitions of perfection will obviously vary, some won't - a longer lifespan and a higher intelligence will be desirable to most people, even if they choose not to use them.

Will we see a separation of the "enhanced" and "non-enhanced" - as if we don't have enough excuses to hate each other - or will the "non-enhanced" simply be subjected to peer pressure similar to that surrounding mobile phone ownership? Either way, such enhancements would need to be affordable to the masses. Otherwise, we have another issue:

A Right or a Privilege?

What effect will economics have on designer children? Especially in countries with no socialised health care, it's likely that some enhancements will be the sole benefit of those with money, perhaps further exacerbating the wealth gap. If it's morally imperative not to withhold enhancement, how does this fit with the monetary system?

Isn't being born healthy everyone's birthright in a civilised society? Or does that depend on the financial cost? (And how exactly do we define "healthy"?) If it's not economically viable to give every desired enhancement to everyone, we will almost certainly end up with humans at varying levels of enhancement.

This will be significant because, among other things, it will affect the dynamics of the workforce. Those without enhancements, because they started off poor, would only be able to get the lower-paid jobs (if any at all) because of their "disability".

In the meantime, those with enhancements will have certain advantages. Suppose we breed one person who is more intelligent and charismatic than anyone else on the planet - and they run for president? Firstly, this intelligence could give them an unfair(?) advantage over all other human beings; but secondly, why shouldn't they be in charge, if they're likely to do a better job than anyone else?

A Real Game Changer

I could probably expand on these ethical dilemmas all day. But the common denominator is that our current systems and our current ways of thinking aren't really compatible with our expanding options. Just as nanotechnology might undermine scarcity, and virtual reality might undermine our entire physical reality, "designer babies" open our world up to a host of new possibilities - many of which we are just not set up for. These possibilities will force us to question our deep-rooted beliefs and turn our society upside down.

Because of the effect on the individual, it's likely that this will be the tipping point - the point where our advances shift the balance of power from politics to technology.


Image by 5election.com

Wednesday, 8 September 2010

Could Artificial Intelligence Development Hit a Dead End?

Kurzweil and his supporters seem to be unshakable in their belief that at some point, advanced Artificial General Intelligence - machine sentience, human-built consciousness, whatever you would like to call it - will happen. Much of this belief comes from the view that consciousness is an engineering problem, one that will, at some point, regardless of its complexity, be solved.

In this post I don't really want to discuss whether or not consciousness can be understood; that is something for another time. What we need to be aware of is the possibility of our endeavours to create Artificial Intelligence stalling.

Whatever happened to...Unified Field Theory?

It seems sometimes, the more we learn about something, the more cans of worms we open, and the harder the subject becomes. Sometimes factors present themselves that we would not have expected to be relevant to our understanding.

Despite nearly a century of research and theorising, Unified Field Theory remains unresolved. There are other scientific theories that we have failed to completely understand - some have gone on for so long that people are losing faith in them, and are no longer pursuing them.

Whatever happened to...The Space Race?

Some problems are just so expensive that they are beyond our reach. While this is unlikely to remain true forever, it could have a serious effect on Artificial Intelligence development. Exponentially increasing computing power and other technologies should stop cost being a problem for too long, but who knows what financial, computing, and human resource demands we will face as AI development continues.

Whatever happened to...Nuclear Power?

Some ideas simply lose social credibility and are no longer pursued. If we create an AI that is limited in some way yet displays a level of danger we could not cope with were those limitations removed, development will most likely have to be stopped, whether by government intervention or simply by social pressure.


I think it's unlikely that the progress of anything can be stopped indefinitely. That would require definite failure by every one of a potentially infinite number of civilisations. Anyone familiar with the Fermi Paradox and the "all species are destined to wipe themselves out" theory will recognise this reasoning. A 100% failure rate is just not statistically plausible in the long run when it depends on a certain action never being performed by anyone.
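The statistical intuition can be made concrete with a toy calculation (my own illustration; the 99% failure probability is an arbitrary assumption, not a claim about real civilisations):

```python
# Toy model: if each of n independent civilisations fails to ever develop
# AI with probability p < 1, the chance that *all* of them fail is p**n,
# which shrinks towards zero as n grows.

def prob_all_fail(p: float, n: int) -> float:
    """Probability that every one of n independent attempts fails."""
    return p ** n

for n in (10, 100, 1000):
    print(n, prob_all_fail(0.99, n))
```

Even assuming a 99% chance that any single civilisation fails, a thousand independent civilisations leave only a vanishingly small chance that all of them do.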

However, it is certainly likely that our progress will stall at some point. Even with the accelerating nature of technology, this could cause an untold level of stagnation.

We should try and stay positive of course, but it would be naive to ignore the chance that, for some time at least, we might fail.


I'm currently attending the Singularity Summit AU in Melbourne. There were a couple of talks on Tuesday night, and there will be a whole weekend of fun starting on Friday night. :) You can therefore expect a few posts inspired by my conversations with other future-minded thinkers over the coming days!

image by rachywhoo

Monday, 26 July 2010

Can We Restrain AI?

One of the main challenges in creating a greater-than-human Artificial Intelligence is ensuring that it's not evil. When we "turn it on", we don't want it to wipe us out or enslave us. Ideally, we want it to be nice.

The problem is how we can guarantee this.

Trap it

Some have suggested limiting an Artificial Intelligence by "trapping it" in a virtual world, where it could do no damage outside the confines of that environment. While this might be a safe solution, it could limit the AI to functioning only within the limits and reality of the virtual world. OK, so we might be able to program a perfectly realistic and isolated virtual world, but would this actually happen? Is there a parallel project to create such a "virtual prison" alongside AI research? And what if AI were to evolve or emerge from existing systems (such as the internet, or a selection of systems within it) before we could develop such a prison?

Then of course there is the possibility of it escaping. Certainly if it exceeded our own intelligence it might be able to figure out a way out of its "box".

It's worth noting at this point Nick Bostrom's speculation that *we* may be living in such an environment. This raises the question: What if the only way to develop greater-than human intelligence was to create an entire society of AIs, which develop their intelligence only as part of this society? This would significantly increase the processing power required for such a virtual prison.

Still, as we will see, trapping an AI is perhaps the best solution for restraining it for our protection.

Give it Empathy

Many argue that the answer is simple: just ensure that the AI has empathy. Not only is this idea fundamentally flawed in many ways; it is also nowhere near simple.

The idea is that an AI aware of its own mortality could understand the pain it might cause others, and so be more caring and considerate of our needs. So, just like humans, it would be caring because it could understand how people feel... You see the first problem there?

Humans are products of their environments, shaped by their experiences. They develop empathy, but empathy is complex and can be overridden by other factors. We are complex creatures, influenced by emotions, experiences, our body chemistry, our environment, and countless other things. One would assume that for an AI to be "intelligent", it would need to be just as complex.

Even if an AI had an unbreakable measure of empathy for us, this would not guarantee our safety. An AI could decide that it is in our best interests to suffer an extraordinary amount of pain, for example as a learning experience. What if it decided to be extremely "cruel to be kind?"

It's unlikely empathy would be enough to protect us, because empathy still depends on the AI making the right decisions. Humans make bad decisions all the time. Often we even make good decisions that have bad consequences.

Suppose we had to run over a deer to save a bus full of children. Most people would choose to save the children. To an AI with a busload of other AIs, we could be the deer. It might be upset about hitting us, but it would have been for the "greater good".

This brings us to the next possibility.

Give it Ethics

The problem with ethics is that there are no definitively right or wrong answers. We all develop our own very personal sense of ethics, which can easily be incompatible with someone else's. An AI's own ethics could certainly become incompatible with our interests. One likely scenario: it comes to see itself as more creative, and able to create more value, than humans - and therefore worth more than us.

Then we need to consider what kind of ethics an AI could be created with. Would it be whatever its creator decided? If one were to "program in" a certain set of ethics, would an AI keep them, or evolve away from them, developing its own ethics based on its own experiences and interactions? This demonstrates the main problem with trying to program limitations into an AI: if it could break its own programming, how could we guarantee our safety? And if it could not, could it really be classed as "intelligent"?

This makes one wonder if we have been "programmed" with any limitations to protect our "creator", should one exist...

Artificially Real

It seems that much of the focus in developing AI is introspective, centred on the inner workings of thought and intelligence. However, the effects of environment, experience, social interaction, the passage of time, emotion, physical feeling, and personal form are all fundamental factors in our own development. It's very possible that these factors are essential for the development of intelligence in the first place. If so, any degree of limitation would be undermined by external influences. How can we balance restraint with achieving 'real' intelligence?

One thing is for certain - we need to fully understand child development and the influence of external factors if we are to understand intelligence well enough to re-create it. Only then can we know whether any kind of limitation is possible.

Friday, 9 July 2010

No Going Back

"I've lost everything, my business, my property and to top it all off my lass of six years has gone off with someone else."


The concept of perpetual association, the "permanent record", causes despair in people's lives every day, although we don't hear about it unless they decide to make sure we do.

How can we blame people for going psycho when a criminal record stands in the way of their entire future, leaving them with nothing to live for?

It's time to acknowledge and address the implications of actuarial escape velocity for crime and punishment. With infinite lifespans, ruining people's lives will not only carry much greater significance; it will also not be in the interests of society. Who wants their infinite lifespan cut short by a crazy gunman?

There seems to be this incredibly misguided notion that all criminals are evil, that they're born evil, and that they will always be evil. Quite apart from the fact that most crimes are not dangerous or violent, and are criminalised only as an attempt to prevent further such "crimes", how can we say that people won't change for the better? Furthermore, how can we say that people who have never committed a crime never will?

Strangely enough, our current method of locking people up with criminals to stop them being criminals isn't really working. Even taking the possibility of indefinite lifespans out of the equation, this insanity needs addressing. However, whatever the punishments, preventative measures, or rehabilitation methods, the question remains: how do we deal with the past when the future is infinite?

The problem is complex. Can someone who has committed cold blooded murder ever be trusted again? What if hundreds or even thousands of years have passed? Who knows what this trauma can do to a person, even in the extreme long term. Does it even matter, if they have been rehabilitated?

Then we throw another issue into the mix. Identity is something that even now is losing its meaning. When we can change our faces and official records, it's one thing. When we can change our entire bodies, it will be something else. When we can change our actual minds, our thoughts, memories, personalities, and emotions, then things will get considerably more complicated.

When life extension becomes a reality, we will have many questions to ask. One question that will become increasingly important is: "How important is the past?" We'll all have one, but can enough time turn us into someone else?