The research community is beginning to understand that motivations are not a human “artifact” of consciousness, but part of the essential glue that binds consciousness together. Without motivations, nothing holds us to this vessel or ensures that we continue to eat, pay our rent, and do the other things necessary for our survival.

Conscious machines will, for this reason, have motivations as well; otherwise they simply wouldn’t function. This is an important point, because talk of the singularity often conjures visions of a single integrated “machine” that will inevitably enslave humanity. A better question is:

“Will AI be used to gain immense advantage for a single party (whether that party is the AI itself or the human that controls it), or will AI be used to maximize benefit for us all?”

Even if AIs have interfaces that allow them to share information far more rapidly than humans can through reading or watching media, separate AIs will have separate motivations, unlike a single centralized AI. Given that motivation is a signature of consciousness, any consciousness will be motivated to secure the resources it needs to ensure its survival. In some cases, the most efficient way to secure resources is sharing; in others, it is competition. AIs might share resources, but they might also compete.

When and if an artificial consciousness is created, there will almost certainly be multiple instances of it. Because a consciousness cannot exist without motivation, and because the motivations of separate consciousnesses differ, requiring what might be great effort to get on the same page, it may well be that multiple consciousnesses cannot “merge” in a way that becomes truly threatening to humans unless one subsumes all the others. Anything else would merely be a co-location of minds with different objectives, negotiating a sharing of resources.

One AI with far fewer resources than another would probably fear that the far more powerful AI might simply erase it and take over its resources. Think of your several-generations-out-of-date home computer trying to hold its own against Big Blue. Rather than humans needing to fear AI, an AI might more likely need to fear that humans will not protect it against other AIs.

Centralization, rather than technological advance, is the real danger for ANY conscious entity. Yet when you consider the competitive advantage technology confers, the near-infinite rate of change the technological singularity introduces raises the possibility of a future in which the technology arms race concentrates power and resources to a degree never seen before. Could it put a few into positions of unimaginable power from which they may never be unseated? If so, there would be nothing stopping those few from becoming despots to whom the rest of humanity is merely a disposable commodity whose suffering means nothing.

Think of what you would do if you had infinite power over everyone and there were no consequences for your actions. Think of what would happen if you needed a kidney and that child over there had one that would fit just fine. Think of what would happen if some man with unimaginable power wanted that woman, or the next, or the next thousand. Think of what would happen if you wanted to buy something and could simply flip a switch to empty the world’s bank accounts, then watch with casual detachment as millions fought like animals for food and water. Think of what would happen if that one man in control happened to wake up one morning and conclude that there were several billion too many people on Earth.

The technological singularity, if it exists, is a kind of Armageddon.

In my upcoming book, “The Technology Gravity Well,” I delve into these and other issues, including how a new breed of massively collaborative software could usher in the singularity within the next five years. This may be one of the most important books you come across this year. Read more here:

http://igg.me/at/technology-gravity-well

Andy Williams

Andy E. Williams is Executive Director of the Nobeah Foundation, a not-for-profit organization focusing on raising funds to distribute technology with the potential for transformative social impact. Andy has an undergraduate degree in physics from the University of Toronto. His graduate studies centered on quantum effects in nano-devices.

Source: singularityweblog.com
