Tim O’Reilly, in his description of Web 2.0, said:
“The central principle behind the success of the giants born in the Web 1.0 era who have survived to lead the Web 2.0 era appears to be this, that they have embraced the power of the web to harness collective intelligence.”
This insight ties the success of Web 2.0 companies to a new development in the software design world: systems whose value increases as their number of users increases. The network effects realized by such systems account for much of their value, but what matters more here is that the user becomes part of the application. In O’Reilly’s examples, Flickr can’t work without users tagging pictures, Google can’t work without people creating web pages, etc… These participatory applications will have a large impact on the way we interface with systems and may represent the first real breakthrough in adding intelligence to them in a proper way.
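One common way to model this kind of network effect is Metcalfe’s law, which values a network in proportion to the number of possible connections between its users. The short sketch below is my own illustration of that idea (it is not a formula from O’Reilly’s essay): the number of potential user-to-user connections grows much faster than the user count itself.

```python
# Illustrative sketch: network value under Metcalfe's law, i.e. proportional to
# the number of possible user-to-user connections, n * (n - 1) / 2.
# This is one common model of network effects, not O'Reilly's own formulation.

def potential_connections(users: int) -> int:
    """Number of distinct pairs that can interact in a network of `users` people."""
    return users * (users - 1) // 2

for users in (10, 100, 1_000, 10_000):
    print(f"{users:>6} users -> {potential_connections(users):>12,} potential connections")
```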
As we all know, computers are not particularly smart. However, they are very good at processing large amounts of information. What has been missing, up to this point, is a sense of what information to feed them and how to point them in the right direction. With the concepts surrounding Web 2.0, those cues are no longer computer cues but human ones. No one programmed del.icio.us to figure out how to create a taxonomy of pages, yet every user of the system has helped build some level of taxonomy into it.
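To make the del.icio.us point concrete, here is a minimal sketch of how individual tagging decisions aggregate into a shared taxonomy. The bookmark data and function names are hypothetical, and this is an illustration of the idea rather than how del.icio.us was actually implemented: nobody defines categories up front, yet counting tags across users produces them.

```python
# Minimal sketch of an emergent folksonomy: each user tags URLs independently,
# and the aggregate tag counts become a de facto taxonomy.
# Illustrative only -- not del.icio.us's actual code or data model.
from collections import Counter, defaultdict

# Hypothetical bookmarks: (user, url, tags)
bookmarks = [
    ("alice", "http://example.com/ajax-intro", {"ajax", "javascript", "web"}),
    ("bob",   "http://example.com/ajax-intro", {"ajax", "tutorial"}),
    ("carol", "http://example.com/ajax-intro", {"javascript", "web2.0"}),
    ("dave",  "http://example.com/css-guide",  {"css", "web", "design"}),
]

tags_per_url = defaultdict(Counter)
for user, url, tags in bookmarks:
    tags_per_url[url].update(tags)

# The most common tags for a page are, in effect, its crowd-assigned category.
for url, counts in tags_per_url.items():
    top = ", ".join(tag for tag, _ in counts.most_common(3))
    print(f"{url} -> {top}")
```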
Amazon’s Mechanical Turk system took the next logical step, which was to provide a software platform to automate such interactions. It is similar in nature to the way computing work was distributed in efforts like SETI@home, but with the human element added. As the boundaries between human interaction and software systems get softer, it is becoming increasingly difficult to tell how smart (or dumb) a system really is. People are part of the application, but are applications part of the people too? When I google something I don’t know, do I enhance myself by discovering the new information and then storing it in my brain? Where is the line?
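To make that analogy concrete, the toy sketch below shows the general shape of such a platform: a dispatcher computes the work units that software can handle and routes the rest to a queue of human workers, SETI@home-style, so the caller never needs to know which kind of “processor” answered. All the names and task types here are hypothetical; this is not Amazon’s Mechanical Turk API.

```python
# Toy illustration of a human-in-the-loop work dispatcher (not the Mechanical Turk API):
# work units that software can handle are computed directly; the rest are queued
# for human workers, and the caller cannot tell which kind of worker answered.
from dataclasses import dataclass
from queue import Queue

@dataclass
class WorkUnit:
    task: str                 # e.g. "word_count" or "transcribe"
    payload: str
    result: str | None = None

human_queue: Queue = Queue()  # work waiting for a person to pick it up

def dispatch(unit: WorkUnit) -> WorkUnit:
    if unit.task == "word_count":              # machines are good at this
        unit.result = str(len(unit.payload.split()))
    else:                                      # e.g. judging or transcribing content
        human_queue.put(unit)                  # a person will handle it later
    return unit

# Usage: the caller just submits work; whether a CPU or a person completes it
# is an implementation detail -- which is exactly the blurring described above.
dispatch(WorkUnit("word_count", "the quick brown fox"))
dispatch(WorkUnit("transcribe", "audio_clip_042.mp3"))
```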
Virtual Worlds
Virtual worlds like Second Life or World of Warcraft represent another logical step in this evolution: creating virtual economies out of thin air. As I write this, tens or even hundreds of thousands of people are working on creating financial value within virtual communities. They may only be selling things within games but, as those games have grown more involved, people have become willing to pay real money for virtual goods. Seeing such developments, some companies have set up worlds where actual trades happen and are integrated with the rest of the financial world.
A couple of weeks ago, one of those games, Project Entropia, announced it would issue an ATM card that lets players take their virtual currency into the real world.
Edward Castronova, the leading researcher on that subject, considers this “a blurring of the distinction between the game economy and the real one.”
As computing power continues to increase, this blurring is going to become more and more scary. Already, videogame platforms like the Xbox 360 or the PS3 are presenting us with games that look close to reality. When that level of realism reaches online community worlds, the lines will become so hazy that it will be difficult to tell which world is real and which is virtual.
This is the fifth article in a six-part series.