Friday, February 10, 2012

Web 3.0? No, it's Web 2.0 S


Tim O’Reilly (2005) presented a thorough explanation of the benefits of Web 2.0 in his report, What is Web 2.0, and opened the door for predictions about Web 3.0. If Web 1.0 focused on posting content and Web 2.0 on collaborating to build content, what will Web 3.0 be?
According to experts, Web 3.0 will feature information compiled and sorted to personalize content and to develop databases that would not have been possible in a Web 1.0 world.

O’Reilly predicted that “the race is on to own certain classes of core data: location, identity…the winner will be the company that first reaches critical mass via user aggregation, and turns that aggregated data into a system service” (p. 3). Facebook, Google+ and now Rogers are all vying to become the one-stop shop for users’ information. Rogers Communications’ new service, One Number, offers customers the ability to combine all their contact information and inboxes into one profile accessible using the customer’s cell phone number. A commercial explaining the service is available at the bottom of this page.

Caterina Fake, co-founder of Flickr and Hunch.com, suggested that personalized recommendations will be made using all of our search data and profiles. Tailored search results, similar to Amazon.com’s recommendations based on your previous purchases, will be used to customize the web for each user. This customization is already happening, according to Eli Pariser’s TED Talk, Beware online “filter bubbles”. Pariser says that Google uses 57 signals to customize your search results.

I can see signs of the next step of the Internet in content generated solely from already-published content. For example, when I searched for the term “best horror movies”, Google had pulled content from various sites and compiled its own list of suggestions at the top of the results. The Shining, Alien, Halloween and The Exorcist all made Google’s compiled list.

Is this Web 3.0? Maybe, or it could be Web 2.0 S.

References

Fake, C. (Interview subject). (2009). Spark [Podcast], Episode 96. Toronto, Ontario: CBC Radio.

O'Reilly, T. (2005). What is Web 2.0. Retrieved February 10, 2012, from http://oreilly.com/pub/a/web2/archive/what-is-web-20.html?page=3

Pariser, E. (2011). Beware online “filter bubbles” [Online video]. TED.com. Retrieved February 10, 2012, from http://blog.ted.com/2011/05/02/beware-online-filter-bubbles-eli-pariser-on-ted-com/

Rogers. (2012). One number [Commercial]. Retrieved February 10, 2012, from https://www.rogersonenumber.ca/welcome/?locale=en_CA#

4 comments:

  1. Here's some interesting support for your Web 3.0 post - take the quiz and read the pages after:

    http://computer.howstuffworks.com/web-30-quiz.htm

  2. Hillary, thanks for pointing us toward Eli Pariser's excellent TED talk. One of the discussions that often comes up around Web 2.0 is the democratization of journalism - the ability of everyone to share their stories and knowledge. One of the drawbacks of this democratization, though, is the loss of the filtering role that professional journalists have traditionally performed. With traditional media, reported news could generally be relied upon for accuracy, even when the filtering meant that we didn't get to hear all points of view or all the stories. We relied on journalists to decide which stories were important and to write them well for us.
    I was really struck by Pariser's theory that, with Web 2.0, there's a "passing of the torch" from the human editor to the algorithmic editor. The filter provided by the human editor is being replaced by a filter based on algorithms. I think Pariser's point that algorithms aren't subject to journalistic ethics is a strong one. With algorithms as the gatekeepers of information, we may end up with a smaller and smaller funnel of information when we search. We end up not being aware of what we might be missing.
    A similar concern has been raised by Siva Vaidhyanathan in his book, The Googlization of Everything. He argues that the convenience of Google's highly effective search algorithms is so seductive that we don't question whether, when we search, we're getting the whole picture. As he points out, Google's algorithmic approach is geared more for consumption than it is for knowledge gathering - it's great for shopping, but not so great for research.
    Siva Vaidhyanathan can be heard in discussion at:
    http://www.cbc.ca/spark/2011/05/full-interview-siva-vaidhyanathan-on-the-googlization-of-everything/

  3. Thanks LJ, I earned an 8 out of 10!

    Great points, Komori, and an excellent interview with Siva Vaidhyanathan. During the interview, Siva explains some of Google's biases, but I would argue that our traditional institutions, such as libraries, are also biased. Canadian library collections are almost entirely English-language books, even when a better book on the same subject may exist in another language; for the sake of accessibility, that bias leads the library to select the English version. Is that the best choice? I think the argument can be made for both sides.

    Replies
    1. Good point about traditional institutions also being biased, Hillary. I suppose that, no matter where we seek information, we need to be aware of editorial, curatorial, or filtering biases - which applies as much to libraries and newspapers as it does to Google.
