4 posts under the tag '웹 2.0' (Web 2.0)

  1. 2007.04.28 Applying as a 태터데스크 (TatterDesk) beta tester (9)
  2. 2007.04.15 Notes from the 야그 (Yag) 3.0 public launch event.. (2)
  3. 2007.04.05 What Is Web 2.0
  4. 2006.11.26 How close is my blog to Web 2.0... (2)
2007.04.28 00:27
<<The answer>> 태터데스크 (TatterDesk) is the simplest and prettiest way to decorate my blog's front page.


It looks like Tattertools (태터툴즈) is about to launch a service that lets users decorate the front page of their blog..

They're recruiting beta testers, so here I am, applying..

Related notice :: http://notice.tistory.com/761

Well, honestly.. the Nintendo DS Lite <-- that prize is secretly pretty tempting too..
Not sure how diligently I'll actually do any testing, but.. heh heh
If I can get in, that's a win, right??

This platform really does seem to be aiming for Web 2.0..
Did they use AJAX so that users can arrange things however they want?
Well.. sort of like how Google and Yahoo let each user decorate their own start page..
Naver is already running a service that lets you decorate your own blog, too..
Well.. I won't know what other features there are until I try it..
but Tattertools offering this kind of service is genuinely welcome news..

By the way, I wish they'd fix the trackback problem I wrote about last time..
Where on earth am I supposed to report it~
Tattertools really needs to realize it should set up a channel for hearing user suggestions!!
Anyway~ please fix it~ please~~ ㅜ.ㅜ

Post about the trackback problem :: http://luckydevil.tistory.com/231

Posted by 열라착한앙마

Leave a comment

  1. 2007.05.01 11:15

    You really think you'll get picked? Heh.

  2. 2007.05.01 14:11

    Heh, neither of us made it.
    That's what happens when it's only you guys applying.. hahaha

  3. 호밀 2007.05.08 08:51

    I failed too -ㅅ-;;;;;;;;;


2007.04.15 19:36

Friday, April 13, 2007, 7 p.m. -- a day that might just go down as historic.. haha
I got to attend the public launch event for Yag (야그)..


"We may well have been sitting there as the protagonists of a historic day" --
starting from that meaningful remark by 이현봉, here is a very brief write-up..

For the record, I didn't bring my camera, so there are no photos.. ㅡㅡ;;
(Photography is supposedly my hobby, but the camera was far away.. at home, and
I only go home about one day a week ㅡㅡ;; sigh, what a lazy photographer;;;)


First, 이현봉 spoke about the background that led to the development of Yag..

Why three-plus years of research, development, and effort were needed..
Well.. that is far too much to summarize, but..

In short, the aim is to make users visible on the web and thereby improve convenience..
When users are visible, behavior online starts to resemble behavior offline much more closely..
A simple example: if people have gathered around something, curiosity makes me wander over to take a look too..
With Yag, that kind of effect can be shown online as well.. (though of course a lot of refinement is still needed..)
You can see people other than yourself, know what they are looking at, and reach them easily..

Also, because Google and the other giant companies cannot do this (not for lack of technology..),
마이엔진 (the company that built Yag) is the one building it.. that was the gist.
To elaborate a bit..
Google search, arguably the best search technology today, cannot actually find what is an issue right now..
That's because Google ranks pages highly when users have used and linked to them a lot over time,
and a brand-new issue cannot show up that way..
In other words, it may be easy to find accurate, well-established material, but it is hard to search for today's hot topic...
Companies like theirs can do these things.. that was roughly the argument..
If that doesn't make sense, just skip it.. heh;; (blame my poor writing ㅜ.ㅜ)


Next up was 김중태, whom I had only known through his writing..

He explained Yag's functional characteristics and the effects they could have..
(There was originally a slide deck, but due to time constraints..)

The content is hard to summarize, so let me just lay out my impressions..

First, on where the web is heading..
From today's semantic web it will evolve into an "easy web", a "bright web", with the concepts of the offline world added..
That is, the web will evolve so that you can behave online the same way you behave offline, with no separate learning required.. (the comparison shown was panic.com vs. Yes24, 알라딘 (Aladin), etc.)

1. Internet + hypertext  ==>  Web
2. Web + GUI  ==>  broader adoption of the Web
3. Web + GUI + infrastructure  ==>  Web 2.0, the Semantic Web..
4. Web + GUI + infrastructure + offline  ==>  the easy web, the bright web


The success of Google and YouTube is built on
applying the idea of distribution.
Seen from another angle, this means applying users' offline behavior patterns to the online world..
Google distributed its advertising (Google AdSense), and YouTube let people embed its videos elsewhere, distributing its video content. In other words, you no longer have to visit their pages directly -- and viewed through an offline lens, that is only natural.
For example, offline it is taken for granted that you take an album you bought
home with you and pull it out to listen whenever you feel like it..
So why, online, do you have to go to the vendor's site to listen to the music you bought..?
Take it with you when you need it, keep it wherever is convenient, and pull it out whenever you want to listen -- that idea is exactly YouTube's idea.

Also, like panic.com, the purchase flow should resemble the offline experience so that users need no learning at all -- so why are today's pages so maddeningly inconvenient..

What do I mean?
Checkout pages are so cluttered that you can't tell how to complete a purchase, the buy button is hard to find,
and it's hard to see which items you have already put in your cart
(you have to move to the cart page to check; you can't see it while you keep shopping;;).
In the end, buying a product requires a whole learning process of getting used to that particular page,
and that is exactly what undermines user convenience.

So, if only for users' convenience, the web will evolve into the easy web.
In other words, it should make offline behavior possible online as well...
As one step in that process, making the user's presence visible online carries a great deal of meaning... it is the basic groundwork for creating an environment that resembles offline.
Because users are visible, you can tell what they are doing and where they are, and this makes interaction between users possible.
Roughly, a few examples.. (all of them examples given at the event, of course haha;;)
In a shopping mall, the administrator can see what currently connected users are looking at and which page they are on,
and can talk with customers through the messenger built into a program like Yag...
So you can even negotiate with a customer, and through conversation figure out what the problem is and what they need.. (you can see how similar this is to how things work offline, right?)
Also, as mentioned earlier, if people are talking about a similar topic, you can tell that it is becoming a social issue (he stressed that this can complement what Google cannot do.. well.. it may be a bit of a stretch right now, but with some refinement and wide enough adoption it's a plausible story..), and you can ask them about it.. much like squeezing into a group of people gathered offline to ask questions, get information, and chat..

Beyond this, I can agree that many things could change if you think it through, and to sum it up in one line:
Yag is, in the end, a way of expressing the user's visibility on the web.. (its effects were briefly covered above..)

There still seems to be plenty left to polish..
The most important problem is getting it distributed to people.. if many people don't use it, it is a failure!!
So distribution has to be made easy, one way or another..
Also, under the current approach, sites have to give up part of their web page for Yag..
and whether users will actually hand over that space is another open question.

Next, a point raised by 댕이, who came with me:
if another program like Yag appears, there could be trouble,
because a lack of compatibility between them would itself become a problem..
If so, shouldn't they acknowledge such programs up front and propose a development standard so that they can interoperate..


Anyway, I got quite a lot out of attending this event..
The biggest realization is the concept of "offline"!!
As someone currently studying the web, I learned that I still have an awful lot left to learn..
I'm a little sorry I couldn't join the after-party and share knowledge a bit more closely(?).. haha
(Hmm.. the timing was a bit;; next time it would be nice if the after-party could start around 8.. well.. that's just an excuse ㅡㅡ;;)
I had another appointment anyway haha

For a detailed description of the program, please see the write-up at 김중태문화원,
or the Yag homepage...^^


Follow Your Heart, Stay Foolish!!

Posted by 열라착한앙마

Leave a comment

  1. 2007.04.16 10:47

    Searching for current issues -- now that's something I've heard plenty about... haha
    Maybe because I missed the seminar, but I don't really see the connection between the easy web and Yag.. haha

    • 열라착한앙마 2007.04.16 14:16

      Ahem..;;
      With my limited writing skills,
      it's hard to compress those two hours into a post haha
      Well.. if I get the chance later I'll either write it up a bit more
      or just tell you about it in person^^
      That's how I am~ haha

2007.04.05 15:13

Design Patterns and Business Models for the Next Generation of Software

by Tim O'Reilly
09/30/2005


The bursting of the dot-com bubble in the fall of 2001 marked a turning point for the web. Many people concluded that the web was overhyped, when in fact bubbles and consequent shakeouts appear to be a common feature of all technological revolutions. Shakeouts typically mark the point at which an ascendant technology is ready to take its place at center stage. The pretenders are given the bum's rush, the real success stories show their strength, and there begins to be an understanding of what separates one from the other.

The concept of "Web 2.0" began with a conference brainstorming session between O'Reilly and MediaLive International. Dale Dougherty, web pioneer and O'Reilly VP, noted that far from having "crashed", the web was more important than ever, with exciting new applications and sites popping up with surprising regularity. What's more, the companies that had survived the collapse seemed to have some things in common. Could it be that the dot-com collapse marked some kind of turning point for the web, such that a call to action such as "Web 2.0" might make sense? We agreed that it did, and so the Web 2.0 Conference was born.

In the year and a half since, the term "Web 2.0" has clearly taken hold, with more than 9.5 million citations in Google. But there's still a huge amount of disagreement about just what Web 2.0 means, with some people decrying it as a meaningless marketing buzzword, and others accepting it as the new conventional wisdom.

This article is an attempt to clarify just what we mean by Web 2.0.

In our initial brainstorming, we formulated our sense of Web 2.0 by example:

Web 1.0                            Web 2.0
DoubleClick                        Google AdSense
Ofoto                              Flickr
Akamai                             BitTorrent
mp3.com                            Napster
Britannica Online                  Wikipedia
personal websites                  blogging
evite                              upcoming.org and EVDB
domain name speculation            search engine optimization
page views                         cost per click
screen scraping                    web services
publishing                         participation
content management systems         wikis
directories (taxonomy)             tagging ("folksonomy")
stickiness                         syndication

The list went on and on. But what was it that made us identify one application or approach as "Web 1.0" and another as "Web 2.0"? (The question is particularly urgent because the Web 2.0 meme has become so widespread that companies are now pasting it on as a marketing buzzword, with no real understanding of just what it means. The question is particularly difficult because many of those buzzword-addicted startups are definitely not Web 2.0, while some of the applications we identified as Web 2.0, like Napster and BitTorrent, are not even properly web applications!) We began trying to tease out the principles that are demonstrated in one way or another by the success stories of web 1.0 and by the most interesting of the new applications.

1. The Web As Platform

Like many important concepts, Web 2.0 doesn't have a hard boundary, but rather, a gravitational core. You can visualize Web 2.0 as a set of principles and practices that tie together a veritable solar system of sites that demonstrate some or all of those principles, at a varying distance from that core.

[Figure 1: Web 2.0 Meme Map]

Figure 1 shows a "meme map" of Web 2.0 that was developed at a brainstorming session during FOO Camp, a conference at O'Reilly Media. It's very much a work in progress, but shows the many ideas that radiate out from the Web 2.0 core.

For example, at the first Web 2.0 conference, in October 2004, John Battelle and I listed a preliminary set of principles in our opening talk. The first of those principles was "The web as platform." Yet that was also a rallying cry of Web 1.0 darling Netscape, which went down in flames after a heated battle with Microsoft. What's more, two of our initial Web 1.0 exemplars, DoubleClick and Akamai, were both pioneers in treating the web as a platform. People don't often think of it as "web services", but in fact, ad serving was the first widely deployed web service, and the first widely deployed "mashup" (to use another term that has gained currency of late). Every banner ad is served as a seamless cooperation between two websites, delivering an integrated page to a reader on yet another computer. Akamai also treats the network as the platform, and at a deeper level of the stack, building a transparent caching and content delivery network that eases bandwidth congestion.

Nonetheless, these pioneers provided useful contrasts because later entrants have taken their solution to the same problem even further, understanding something deeper about the nature of the new platform. Both DoubleClick and Akamai were Web 2.0 pioneers, yet we can also see how it's possible to realize more of the possibilities by embracing additional Web 2.0 design patterns.

Let's drill down for a moment into each of these three cases, teasing out some of the essential elements of difference.

Netscape vs. Google

If Netscape was the standard bearer for Web 1.0, Google is most certainly the standard bearer for Web 2.0, if only because their respective IPOs were defining events for each era. So let's start with a comparison of these two companies and their positioning.

Netscape framed "the web as platform" in terms of the old software paradigm: their flagship product was the web browser, a desktop application, and their strategy was to use their dominance in the browser market to establish a market for high-priced server products. Control over standards for displaying content and applications in the browser would, in theory, give Netscape the kind of market power enjoyed by Microsoft in the PC market. Much like the "horseless carriage" framed the automobile as an extension of the familiar, Netscape promoted a "webtop" to replace the desktop, and planned to populate that webtop with information updates and applets pushed to the webtop by information providers who would purchase Netscape servers.

In the end, both web browsers and web servers turned out to be commodities, and value moved "up the stack" to services delivered over the web platform.

Google, by contrast, began its life as a native web application, never sold or packaged, but delivered as a service, with customers paying, directly or indirectly, for the use of that service. None of the trappings of the old software industry are present. No scheduled software releases, just continuous improvement. No licensing or sale, just usage. No porting to different platforms so that customers can run the software on their own equipment, just a massively scalable collection of commodity PCs running open source operating systems plus homegrown applications and utilities that no one outside the company ever gets to see.

At bottom, Google requires a competency that Netscape never needed: database management. Google isn't just a collection of software tools, it's a specialized database. Without the data, the tools are useless; without the software, the data is unmanageable. Software licensing and control over APIs--the lever of power in the previous era--is irrelevant because the software never need be distributed but only performed, and also because without the ability to collect and manage the data, the software is of little use. In fact, the value of the software is proportional to the scale and dynamism of the data it helps to manage.

Google's service is not a server--though it is delivered by a massive collection of internet servers--nor a browser--though it is experienced by the user within the browser. Nor does its flagship search service even host the content that it enables users to find. Much like a phone call, which happens not just on the phones at either end of the call, but on the network in between, Google happens in the space between browser and search engine and destination content server, as an enabler or middleman between the user and his or her online experience.

While both Netscape and Google could be described as software companies, it's clear that Netscape belonged to the same software world as Lotus, Microsoft, Oracle, SAP, and other companies that got their start in the 1980's software revolution, while Google's fellows are other internet applications like eBay, Amazon, Napster, and yes, DoubleClick and Akamai.



DoubleClick vs. Overture and AdSense

Like Google, DoubleClick is a true child of the internet era. It harnesses software as a service, has a core competency in data management, and, as noted above, was a pioneer in web services long before web services even had a name. However, DoubleClick was ultimately limited by its business model. It bought into the '90s notion that the web was about publishing, not participation; that advertisers, not consumers, ought to call the shots; that size mattered, and that the internet was increasingly being dominated by the top websites as measured by MediaMetrix and other web ad scoring companies.

As a result, DoubleClick proudly cites on its website "over 2000 successful implementations" of its software. Yahoo! Search Marketing (formerly Overture) and Google AdSense, by contrast, already serve hundreds of thousands of advertisers apiece.

Overture and Google's success came from an understanding of what Chris Anderson refers to as "the long tail," the collective power of the small sites that make up the bulk of the web's content. DoubleClick's offerings require a formal sales contract, limiting their market to the few thousand largest websites. Overture and Google figured out how to enable ad placement on virtually any web page. What's more, they eschewed publisher/ad-agency friendly advertising formats such as banner ads and popups in favor of minimally intrusive, context-sensitive, consumer-friendly text advertising.

The Web 2.0 lesson: leverage customer self-service and algorithmic data management to reach out to the entire web, to the edges and not just the center, to the long tail and not just the head.

A Platform Beats an Application Every Time

In each of its past confrontations with rivals, Microsoft has successfully played the platform card, trumping even the most dominant applications. Windows allowed Microsoft to displace Lotus 1-2-3 with Excel, WordPerfect with Word, and Netscape Navigator with Internet Explorer.

This time, though, the clash isn't between a platform and an application, but between two platforms, each with a radically different business model: On the one side, a single software provider, whose massive installed base and tightly integrated operating system and APIs give control over the programming paradigm; on the other, a system without an owner, tied together by a set of protocols, open standards and agreements for cooperation.

Windows represents the pinnacle of proprietary control via software APIs. Netscape tried to wrest control from Microsoft using the same techniques that Microsoft itself had used against other rivals, and failed. But Apache, which held to the open standards of the web, has prospered. The battle is no longer unequal, a platform versus a single application, but platform versus platform, with the question being which platform, and more profoundly, which architecture, and which business model, is better suited to the opportunity ahead.

Windows was a brilliant solution to the problems of the early PC era. It leveled the playing field for application developers, solving a host of problems that had previously bedeviled the industry. But a single monolithic approach, controlled by a single vendor, is no longer a solution, it's a problem. Communications-oriented systems, as the internet-as-platform most certainly is, require interoperability. Unless a vendor can control both ends of every interaction, the possibilities of user lock-in via software APIs are limited.

Any Web 2.0 vendor that seeks to lock in its application gains by controlling the platform will, by definition, no longer be playing to the strengths of the platform.

This is not to say that there are not opportunities for lock-in and competitive advantage, but we believe they are not to be found via control over software APIs and protocols. There is a new game afoot. The companies that succeed in the Web 2.0 era will be those that understand the rules of that game, rather than trying to go back to the rules of the PC software era.

Not surprisingly, other web 2.0 success stories demonstrate this same behavior. eBay enables occasional transactions of only a few dollars between single individuals, acting as an automated intermediary. Napster (though shut down for legal reasons) built its network not by building a centralized song database, but by architecting a system in such a way that every downloader also became a server, and thus grew the network.

Akamai vs. BitTorrent

Like DoubleClick, Akamai is optimized to do business with the head, not the tail, with the center, not the edges. While it serves the benefit of the individuals at the edge of the web by smoothing their access to the high-demand sites at the center, it collects its revenue from those central sites.

BitTorrent, like other pioneers in the P2P movement, takes a radical approach to internet decentralization. Every client is also a server; files are broken up into fragments that can be served from multiple locations, transparently harnessing the network of downloaders to provide both bandwidth and data to other users. The more popular the file, in fact, the faster it can be served, as there are more users providing bandwidth and fragments of the complete file.

BitTorrent thus demonstrates a key Web 2.0 principle: the service automatically gets better the more people use it. While Akamai must add servers to improve service, every BitTorrent consumer brings his own resources to the party. There's an implicit "architecture of participation", a built-in ethic of cooperation, in which the service acts primarily as an intelligent broker, connecting the edges to each other and harnessing the power of the users themselves.
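
As a concrete illustration of the piece-based serving described above, here is a minimal Python sketch of the idea (not the actual BitTorrent wire protocol or .torrent format): a file is cut into fixed-size pieces, each piece gets a hash, any peer holding a piece can serve it, and the downloader verifies each piece independently. The piece size and the use of SHA-1 are illustrative assumptions.

    import hashlib

    PIECE_SIZE = 256 * 1024  # 256 KiB pieces; real torrents pick various sizes

    def split_into_pieces(data: bytes, piece_size: int = PIECE_SIZE):
        """Cut a blob into fixed-size pieces and hash each one.

        Any peer holding piece i can serve it on its own, and the downloader
        can verify the piece against the expected hash before accepting it.
        """
        pieces = []
        for offset in range(0, len(data), piece_size):
            chunk = data[offset:offset + piece_size]
            pieces.append((hashlib.sha1(chunk).hexdigest(), chunk))
        return pieces

    def verify_piece(index: int, chunk: bytes, expected_hashes: list) -> bool:
        """Check a piece received from an arbitrary peer against the known hash."""
        return hashlib.sha1(chunk).hexdigest() == expected_hashes[index]

    if __name__ == "__main__":
        blob = b"some large file contents " * 100_000
        pieces = split_into_pieces(blob)
        hashes = [h for h, _ in pieces]
        # A piece fetched from any peer is acceptable as long as its hash matches.
        print(len(pieces), verify_piece(0, pieces[0][1], hashes))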

2. Harnessing Collective Intelligence

The central principle behind the success of the giants born in the Web 1.0 era who have survived to lead the Web 2.0 era appears to be this, that they have embraced the power of the web to harness collective intelligence:

  • Hyperlinking is the foundation of the web. As users add new content, and new sites, it is bound in to the structure of the web by other users discovering the content and linking to it. Much as synapses form in the brain, with associations becoming stronger through repetition or intensity, the web of connections grows organically as an output of the collective activity of all web users.
  • Yahoo!, the first great internet success story, was born as a catalog, or directory of links, an aggregation of the best work of thousands, then millions of web users. While Yahoo! has since moved into the business of creating many types of content, its role as a portal to the collective work of the net's users remains the core of its value.
  • Google's breakthrough in search, which quickly made it the undisputed search market leader, was PageRank, a method of using the link structure of the web rather than just the characteristics of documents to provide better search results. (A toy power-iteration sketch of the idea appears just after this list.)
  • eBay's product is the collective activity of all its users; like the web itself, eBay grows organically in response to user activity, and the company's role is as an enabler of a context in which that user activity can happen. What's more, eBay's competitive advantage comes almost entirely from the critical mass of buyers and sellers, which makes any new entrant offering similar services significantly less attractive.
  • Amazon sells the same products as competitors such as Barnesandnoble.com, and they receive the same product descriptions, cover images, and editorial content from their vendors. But Amazon has made a science of user engagement. They have an order of magnitude more user reviews, invitations to participate in varied ways on virtually every page--and even more importantly, they use user activity to produce better search results. While a Barnesandnoble.com search is likely to lead with the company's own products, or sponsored results, Amazon always leads with "most popular", a real-time computation based not only on sales but other factors that Amazon insiders call the "flow" around products. With an order of magnitude more user participation, it's no surprise that Amazon's sales also outpace competitors.
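
Since PageRank is only mentioned in passing above, here is a toy power-iteration sketch of the core idea -- ranking driven by the web's link structure rather than by document content. The graph, damping factor, and iteration count are made up for illustration; this is not Google's production algorithm.

    def pagerank(links, damping=0.85, iterations=50):
        """Toy power-iteration PageRank over a dict: page -> list of outbound links."""
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, outbound in links.items():
                if not outbound:  # dangling page: spread its rank evenly
                    share = damping * rank[page] / len(pages)
                    for p in pages:
                        new_rank[p] += share
                else:
                    share = damping * rank[page] / len(outbound)
                    for target in outbound:
                        new_rank[target] += share
            rank = new_rank
        return rank

    if __name__ == "__main__":
        toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
        for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
            print(page, round(score, 3))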

Now, innovative companies that pick up on this insight and perhaps extend it even further, are making their mark on the web:

  • Wikipedia, an online encyclopedia based on the unlikely notion that an entry can be added by any web user, and edited by any other, is a radical experiment in trust, applying Eric Raymond's dictum (originally coined in the context of open source software) that "with enough eyeballs, all bugs are shallow," to content creation. Wikipedia is already in the top 100 websites, and many think it will be in the top ten before long. This is a profound change in the dynamics of content creation!
  • Sites like del.icio.us and Flickr, two companies that have received a great deal of attention of late, have pioneered a concept that some people call "folksonomy" (in contrast to taxonomy), a style of collaborative categorization of sites using freely chosen keywords, often referred to as tags. Tagging allows for the kind of multiple, overlapping associations that the brain itself uses, rather than rigid categories. In the canonical example, a Flickr photo of a puppy might be tagged both "puppy" and "cute"--allowing for retrieval along natural axes generated by user activity. (A toy tag index in this spirit is sketched just after this list.)
  • Collaborative spam filtering products like Cloudmark aggregate the individual decisions of email users about what is and is not spam, outperforming systems that rely on analysis of the messages themselves.
  • It is a truism that the greatest internet success stories don't advertise their products. Their adoption is driven by "viral marketing"--that is, recommendations propagating directly from one user to another. You can almost make the case that if a site or product relies on advertising to get the word out, it isn't Web 2.0.
  • Even much of the infrastructure of the web--including the Linux, Apache, MySQL, and Perl, PHP, or Python code involved in most web servers--relies on the peer-production methods of open source, in themselves an instance of collective, net-enabled intelligence. There are more than 100,000 open source software projects listed on SourceForge.net. Anyone can add a project, anyone can download and use the code, and new projects migrate from the edges to the center as a result of users putting them to work, an organic software adoption process relying almost entirely on viral marketing.
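
To make the tagging idea concrete, here is a tiny hypothetical tag index in the spirit of the puppy example: items carry freely chosen, overlapping tags, and retrieval works along any of those axes at once. The class and item names are invented for illustration.

    from collections import defaultdict

    class TagIndex:
        """A toy folksonomy: any item may carry any number of freely chosen tags."""

        def __init__(self):
            self._by_tag = defaultdict(set)

        def tag(self, item, *tags):
            for t in tags:
                self._by_tag[t.lower()].add(item)

        def find(self, *tags):
            """Items carrying all of the given tags (overlapping categories, no hierarchy)."""
            sets = [self._by_tag[t.lower()] for t in tags]
            return set.intersection(*sets) if sets else set()

    if __name__ == "__main__":
        photos = TagIndex()
        photos.tag("photo-123", "puppy", "cute")   # the canonical Flickr example
        photos.tag("photo-456", "puppy", "beach")
        print(photos.find("puppy"))                # both photos
        print(photos.find("puppy", "cute"))        # only photo-123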

The lesson: Network effects from user contributions are the key to market dominance in the Web 2.0 era.


Blogging and the Wisdom of Crowds

One of the most highly touted features of the Web 2.0 era is the rise of blogging. Personal home pages have been around since the early days of the web, and the personal diary and daily opinion column around much longer than that, so just what is the fuss all about?

At its most basic, a blog is just a personal home page in diary format. But as Rich Skrenta notes, the chronological organization of a blog "seems like a trivial difference, but it drives an entirely different delivery, advertising and value chain."

One of the things that has made a difference is a technology called RSS. RSS is the most significant advance in the fundamental architecture of the web since early hackers realized that CGI could be used to create database-backed websites. RSS allows someone to link not just to a page, but to subscribe to it, with notification every time that page changes. Skrenta calls this "the incremental web." Others call it the "live web".

Now, of course, "dynamic websites" (i.e., database-backed sites with dynamically generated content) replaced static web pages well over ten years ago. What's dynamic about the live web are not just the pages, but the links. A link to a weblog is expected to point to a perennially changing page, with "permalinks" for any individual entry, and notification for each change. An RSS feed is thus a much stronger link than, say a bookmark or a link to a single page.
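
As a sketch of what "subscribing" to a page means mechanically, the snippet below parses a minimal RSS 2.0 feed and treats any item with an unseen guid as a notification-worthy change. The feed content is made up; a real aggregator would fetch the XML over HTTP on a schedule.

    import xml.etree.ElementTree as ET

    SAMPLE_FEED = """<?xml version="1.0"?>
    <rss version="2.0">
      <channel>
        <title>Example Weblog</title>
        <item><title>First post</title><link>http://example.com/1</link><guid>1</guid></item>
        <item><title>Second post</title><link>http://example.com/2</link><guid>2</guid></item>
      </channel>
    </rss>"""

    def parse_items(feed_xml: str):
        """Return (guid, title, link) for each <item> in an RSS 2.0 feed."""
        root = ET.fromstring(feed_xml)
        return [(i.findtext("guid"), i.findtext("title"), i.findtext("link"))
                for i in root.iter("item")]

    def new_entries(feed_xml: str, seen_guids: set):
        """The 'notification' step: anything whose guid we have not seen is new."""
        return [item for item in parse_items(feed_xml) if item[0] not in seen_guids]

    if __name__ == "__main__":
        seen = {"1"}                      # guids noticed on the previous poll
        for guid, title, link in new_entries(SAMPLE_FEED, seen):
            print("new entry:", title, link)
            seen.add(guid)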

The Architecture of Participation

Some systems are designed to encourage participation. In his paper, The Cornucopia of the Commons, Dan Bricklin noted that there are three ways to build a large database. The first, demonstrated by Yahoo!, is to pay people to do it. The second, inspired by lessons from the open source community, is to get volunteers to perform the same task. The Open Directory Project, an open source Yahoo competitor, is the result. But Napster demonstrated a third way. Because Napster set its defaults to automatically serve any music that was downloaded, every user automatically helped to build the value of the shared database. This same approach has been followed by all other P2P file sharing services.

One of the key lessons of the Web 2.0 era is this: Users add value. But only a small percentage of users will go to the trouble of adding value to your application via explicit means. Therefore, Web 2.0 companies set inclusive defaults for aggregating user data and building value as a side-effect of ordinary use of the application. As noted above, they build systems that get better the more people use them.

Mitch Kapor once noted that "architecture is politics." Participation is intrinsic to Napster, part of its fundamental architecture.

This architectural insight may also be more central to the success of open source software than the more frequently cited appeal to volunteerism. The architecture of the internet, and the World Wide Web, as well as of open source software projects like Linux, Apache, and Perl, is such that users pursuing their own "selfish" interests build collective value as an automatic byproduct. Each of these projects has a small core, well-defined extension mechanisms, and an approach that lets any well-behaved component be added by anyone, growing the outer layers of what Larry Wall, the creator of Perl, refers to as "the onion." In other words, these technologies demonstrate network effects, simply through the way that they have been designed.

These projects can be seen to have a natural architecture of participation. But as Amazon demonstrates, by consistent effort (as well as economic incentives such as the Associates program), it is possible to overlay such an architecture on a system that would not normally seem to possess it.

RSS also means that the web browser is not the only means of viewing a web page. While some RSS aggregators, such as Bloglines, are web-based, others are desktop clients, and still others allow users of portable devices to subscribe to constantly updated content.

RSS is now being used to push not just notices of new blog entries, but also all kinds of data updates, including stock quotes, weather data, and photo availability. This use is actually a return to one of its roots: RSS was born in 1997 out of the confluence of Dave Winer's "Really Simple Syndication" technology, used to push out blog updates, and Netscape's "Rich Site Summary", which allowed users to create custom Netscape home pages with regularly updated data flows. Netscape lost interest, and the technology was carried forward by blogging pioneer Userland, Winer's company. In the current crop of applications, we see, though, the heritage of both parents.

But RSS is only part of what makes a weblog different from an ordinary web page. Tom Coates remarks on the significance of the permalink:

It may seem like a trivial piece of functionality now, but it was effectively the device that turned weblogs from an ease-of-publishing phenomenon into a conversational mess of overlapping communities. For the first time it became relatively easy to gesture directly at a highly specific post on someone else's site and talk about it. Discussion emerged. Chat emerged. And - as a result - friendships emerged or became more entrenched. The permalink was the first - and most successful - attempt to build bridges between weblogs.

In many ways, the combination of RSS and permalinks adds many of the features of NNTP, the Network News Protocol of the Usenet, onto HTTP, the web protocol. The "blogosphere" can be thought of as a new, peer-to-peer equivalent to Usenet and bulletin-boards, the conversational watering holes of the early internet. Not only can people subscribe to each others' sites, and easily link to individual comments on a page, but also, via a mechanism known as trackbacks, they can see when anyone else links to their pages, and can respond, either with reciprocal links, or by adding comments.
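
For readers unfamiliar with the mechanism, a trackback is essentially a small HTTP POST from the linking blog to a "trackback URL" advertised by the linked entry. The sketch below shows the general shape of such a ping as commonly implemented (form-encoded url/title/excerpt/blog_name fields); the endpoint and exact field set are assumptions here, and individual blog platforms differ in the details.

    from urllib.parse import urlencode
    from urllib.request import Request, urlopen

    def send_trackback(trackback_url: str, post_url: str, title: str,
                       excerpt: str, blog_name: str) -> str:
        """Send a trackback-style ping: a form-encoded POST to the target's trackback URL."""
        body = urlencode({
            "url": post_url,        # the entry that links to the target
            "title": title,
            "excerpt": excerpt,
            "blog_name": blog_name,
        }).encode("utf-8")
        req = Request(trackback_url, data=body,
                      headers={"Content-Type": "application/x-www-form-urlencoded"})
        with urlopen(req, timeout=10) as resp:  # servers typically answer with a tiny XML status
            return resp.read().decode("utf-8", errors="replace")

    # Hypothetical usage -- the trackback endpoint below is made up:
    # send_trackback("http://example.com/post/42/trackback",
    #                "http://myblog.example/entry/7",
    #                "Re: your post", "I wrote a reply...", "My Blog")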

Interestingly, two-way links were the goal of early hypertext systems like Xanadu. Hypertext purists have celebrated trackbacks as a step towards two way links. But note that trackbacks are not properly two-way--rather, they are really (potentially) symmetrical one-way links that create the effect of two way links. The difference may seem subtle, but in practice it is enormous. Social networking systems like Friendster, Orkut, and LinkedIn, which require acknowledgment by the recipient in order to establish a connection, lack the same scalability as the web. As noted by Caterina Fake, co-founder of the Flickr photo sharing service, attention is only coincidentally reciprocal. (Flickr thus allows users to set watch lists--any user can subscribe to any other user's photostream via RSS. The object of attention is notified, but does not have to approve the connection.)

If an essential part of Web 2.0 is harnessing collective intelligence, turning the web into a kind of global brain, the blogosphere is the equivalent of constant mental chatter in the forebrain, the voice we hear in all of our heads. It may not reflect the deep structure of the brain, which is often unconscious, but is instead the equivalent of conscious thought. And as a reflection of conscious thought and attention, the blogosphere has begun to have a powerful effect.

First, because search engines use link structure to help predict useful pages, bloggers, as the most prolific and timely linkers, have a disproportionate role in shaping search engine results. Second, because the blogging community is so highly self-referential, bloggers paying attention to other bloggers magnifies their visibility and power. The "echo chamber" that critics decry is also an amplifier.

If it were merely an amplifier, blogging would be uninteresting. But like Wikipedia, blogging harnesses collective intelligence as a kind of filter. What James Surowiecki calls "the wisdom of crowds" comes into play, and much as PageRank produces better results than analysis of any individual document, the collective attention of the blogosphere selects for value.

While mainstream media may see individual blogs as competitors, what is really unnerving is that the competition is with the blogosphere as a whole. This is not just a competition between sites, but a competition between business models. The world of Web 2.0 is also the world of what Dan Gillmor calls "we, the media," a world in which "the former audience", not a few people in a back room, decides what's important.

3. Data is the Next Intel Inside

Every significant internet application to date has been backed by a specialized database: Google's web crawl, Yahoo!'s directory (and web crawl), Amazon's database of products, eBay's database of products and sellers, MapQuest's map databases, Napster's distributed song database. As Hal Varian remarked in a personal conversation last year, "SQL is the new HTML." Database management is a core competency of Web 2.0 companies, so much so that we have sometimes referred to these applications as "infoware" rather than merely software.

This fact leads to a key question: Who owns the data?

In the internet era, one can already see a number of cases where control over the database has led to market control and outsized financial returns. The monopoly on domain name registry initially granted by government fiat to Network Solutions (later purchased by Verisign) was one of the first great moneymakers of the internet. While we've argued that business advantage via controlling software APIs is much more difficult in the age of the internet, control of key data sources is not, especially if those data sources are expensive to create or amenable to increasing returns via network effects.

Look at the copyright notices at the base of every map served by MapQuest, maps.yahoo.com, maps.msn.com, or maps.google.com, and you'll see the line "Maps copyright NavTeq, TeleAtlas," or with the new satellite imagery services, "Images copyright Digital Globe." These companies made substantial investments in their databases (NavTeq alone reportedly invested $750 million to build their database of street addresses and directions. Digital Globe spent $500 million to launch their own satellite to improve on government-supplied imagery.) NavTeq has gone so far as to imitate Intel's familiar Intel Inside logo: Cars with navigation systems bear the imprint, "NavTeq Onboard." Data is indeed the Intel Inside of these applications, a sole source component in systems whose software infrastructure is largely open source or otherwise commodified.

The now hotly contested web mapping arena demonstrates how a failure to understand the importance of owning an application's core data will eventually undercut its competitive position. MapQuest pioneered the web mapping category in 1995, yet when Yahoo!, and then Microsoft, and most recently Google, decided to enter the market, they were easily able to offer a competing application simply by licensing the same data.

Contrast, however, the position of Amazon.com. Like competitors such as Barnesandnoble.com, its original database came from ISBN registry provider R.R. Bowker. But unlike MapQuest, Amazon relentlessly enhanced the data, adding publisher-supplied data such as cover images, table of contents, index, and sample material. Even more importantly, they harnessed their users to annotate the data, such that after ten years, Amazon, not Bowker, is the primary source for bibliographic data on books, a reference source for scholars and librarians as well as consumers. Amazon also introduced their own proprietary identifier, the ASIN, which corresponds to the ISBN where one is present, and creates an equivalent namespace for products without one. Effectively, Amazon "embraced and extended" their data suppliers.

Imagine if MapQuest had done the same thing, harnessing their users to annotate maps and directions, adding layers of value. It would have been much more difficult for competitors to enter the market just by licensing the base data.

The recent introduction of Google Maps provides a living laboratory for the competition between application vendors and their data suppliers. Google's lightweight programming model has led to the creation of numerous value-added services in the form of mashups that link Google Maps with other internet-accessible data sources. Paul Rademacher's housingmaps.com, which combines Google Maps with Craigslist apartment rental and home purchase data to create an interactive housing search tool, is the pre-eminent example of such a mashup.

At present, these mashups are mostly innovative experiments, done by hackers. But entrepreneurial activity follows close behind. And already, one can see that for at least one class of developer, Google has taken the role of data source away from Navteq and inserted themselves as a favored intermediary. We expect to see battles between data suppliers and application vendors in the next few years, as both realize just how important certain classes of data will become as building blocks for Web 2.0 applications.

The race is on to own certain classes of core data: location, identity, calendaring of public events, product identifiers and namespaces. In many cases, where there is significant cost to create the data, there may be an opportunity for an Intel Inside style play, with a single source for the data. In others, the winner will be the company that first reaches critical mass via user aggregation, and turns that aggregated data into a system service.

For example, in the area of identity, PayPal, Amazon's 1-click, and the millions of users of communications systems, may all be legitimate contenders to build a network-wide identity database. (In this regard, Google's recent attempt to use cell phone numbers as an identifier for Gmail accounts may be a step towards embracing and extending the phone system.) Meanwhile, startups like Sxip are exploring the potential of federated identity, in quest of a kind of "distributed 1-click" that will provide a seamless Web 2.0 identity subsystem. In the area of calendaring, EVDB is an attempt to build the world's largest shared calendar via a wiki-style architecture of participation. While the jury's still out on the success of any particular startup or approach, it's clear that standards and solutions in these areas, effectively turning certain classes of data into reliable subsystems of the "internet operating system", will enable the next generation of applications.

A further point must be noted with regard to data, and that is user concerns about privacy and their rights to their own data. In many of the early web applications, copyright is only loosely enforced. For example, Amazon lays claim to any reviews submitted to the site, but in the absence of enforcement, people may repost the same review elsewhere. However, as companies begin to realize that control over data may be their chief source of competitive advantage, we may see heightened attempts at control.

Much as the rise of proprietary software led to the Free Software movement, we expect the rise of proprietary databases to result in a Free Data movement within the next decade. One can see early signs of this countervailing trend in open data projects such as Wikipedia, the Creative Commons, and in software projects like Greasemonkey, which allow users to take control of how data is displayed on their computer.



4. End of the Software Release Cycle

As noted above in the discussion of Google vs. Netscape, one of the defining characteristics of internet era software is that it is delivered as a service, not as a product. This fact leads to a number of fundamental changes in the business model of such a company:

  1. Operations must become a core competency. Google's or Yahoo!'s expertise in product development must be matched by an expertise in daily operations. So fundamental is the shift from software as artifact to software as service that the software will cease to perform unless it is maintained on a daily basis. Google must continuously crawl the web and update its indices, continuously filter out link spam and other attempts to influence its results, continuously and dynamically respond to hundreds of millions of asynchronous user queries, simultaneously matching them with context-appropriate advertisements.

    It's no accident that Google's system administration, networking, and load balancing techniques are perhaps even more closely guarded secrets than their search algorithms. Google's success at automating these processes is a key part of their cost advantage over competitors.

    It's also no accident that scripting languages such as Perl, Python, PHP, and now Ruby, play such a large role at web 2.0 companies. Perl was famously described by Hassan Schroeder, Sun's first webmaster, as "the duct tape of the internet." Dynamic languages (often called scripting languages and looked down on by the software engineers of the era of software artifacts) are the tool of choice for system and network administrators, as well as application developers building dynamic systems that require constant change.

  2. Users must be treated as co-developers, in a reflection of open source development practices (even if the software in question is unlikely to be released under an open source license.) The open source dictum, "release early and release often" in fact has morphed into an even more radical position, "the perpetual beta," in which the product is developed in the open, with new features slipstreamed in on a monthly, weekly, or even daily basis. It's no accident that services such as Gmail, Google Maps, Flickr, del.icio.us, and the like may be expected to bear a "Beta" logo for years at a time.

    Real time monitoring of user behavior to see just which new features are used, and how they are used, thus becomes another required core competency. A web developer at a major online service remarked: "We put up two or three new features on some part of the site every day, and if users don't adopt them, we take them down. If they like them, we roll them out to the entire site." (A toy sketch of this feature-flag-and-usage-counting pattern appears just after this list.)

    Cal Henderson, the lead developer of Flickr, recently revealed that they deploy new builds up to every half hour. This is clearly a radically different development model! While not all web applications are developed in as extreme a style as Flickr, almost all web applications have a development cycle that is radically unlike anything from the PC or client-server era. It is for this reason that a recent ZDnet editorial concluded that Microsoft won't be able to beat Google: "Microsoft's business model depends on everyone upgrading their computing environment every two to three years. Google's depends on everyone exploring what's new in their computing environment every day."
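
A toy sketch of the pattern just described, assuming nothing about any particular company's tooling: expose a feature to a deterministic fraction of users, count how often it is actually used, and let those numbers drive the keep-or-kill decision. The flag name, rollout percentage, and "used" heuristic are all invented.

    import hashlib
    from collections import Counter

    class FeatureFlag:
        """Expose a feature to a deterministic fraction of users and count usage."""

        def __init__(self, name: str, rollout_percent: int):
            self.name = name
            self.rollout_percent = rollout_percent
            self.usage = Counter()  # crude instrumentation: shown vs. used

        def enabled_for(self, user_id: str) -> bool:
            # Hash the user id so the same user always sees the same variant.
            digest = hashlib.md5(f"{self.name}:{user_id}".encode()).hexdigest()
            return int(digest, 16) % 100 < self.rollout_percent

        def record(self, event: str):
            self.usage[event] += 1

    if __name__ == "__main__":
        flag = FeatureFlag("new-search-box", rollout_percent=10)  # show to ~10% of users
        for uid in (f"user{i}" for i in range(1000)):
            if flag.enabled_for(uid):
                flag.record("shown")
                if uid.endswith("7"):   # stand-in for "the user actually tried the feature"
                    flag.record("used")
        print(flag.usage)               # adoption numbers drive the roll-out/take-down decision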

While Microsoft has demonstrated enormous ability to learn from and ultimately best its competition, there's no question that this time, the competition will require Microsoft (and by extension, every other existing software company) to become a deeply different kind of company. Native Web 2.0 companies enjoy a natural advantage, as they don't have old patterns (and corresponding business models and revenue sources) to shed.

A Web 2.0 Investment Thesis

Venture capitalist Paul Kedrosky writes: "The key is to find the actionable investments where you disagree with the consensus". It's interesting to see how each Web 2.0 facet involves disagreeing with the consensus: everyone was emphasizing keeping data private, Flickr/Napster/et al. make it public. It's not just disagreeing to be disagreeable (pet food! online!), it's disagreeing where you can build something out of the differences. Flickr builds communities, Napster built breadth of collection.

Another way to look at it is that the successful companies all give up something expensive but considered critical to get something valuable for free that was once expensive. For example, Wikipedia gives up central editorial control in return for speed and breadth. Napster gave up on the idea of "the catalog" (all the songs the vendor was selling) and got breadth. Amazon gave up on the idea of having a physical storefront but got to serve the entire world. Google gave up on the big customers (initially) and got the 80% whose needs weren't being met. There's something very aikido (using your opponent's force against them) in saying "you know, you're right--absolutely anyone in the whole world CAN update this article. And guess what, that's bad news for you."

--Nat Torkington

5. Lightweight Programming Models

Once the idea of web services became au courant, large companies jumped into the fray with a complex web services stack designed to create highly reliable programming environments for distributed applications.

But much as the web succeeded precisely because it overthrew much of hypertext theory, substituting a simple pragmatism for ideal design, RSS has become perhaps the single most widely deployed web service because of its simplicity, while the complex corporate web services stacks have yet to achieve wide deployment.

Similarly, Amazon.com's web services are provided in two forms: one adhering to the formalisms of the SOAP (Simple Object Access Protocol) web services stack, the other simply providing XML data over HTTP, in a lightweight approach sometimes referred to as REST (Representational State Transfer). While high value B2B connections (like those between Amazon and retail partners like ToysRUs) use the SOAP stack, Amazon reports that 95% of the usage is of the lightweight REST service.
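
To make the contrast concrete, here is a minimal sketch of the lightweight style this paragraph describes: a plain HTTP GET whose parameters ride in the URL and whose reply is XML parsed in a few lines, with no SOAP toolkit involved. The endpoint and response fields are hypothetical, not Amazon's actual API.

    import xml.etree.ElementTree as ET
    from urllib.parse import urlencode
    from urllib.request import urlopen

    def rest_lookup(base_url: str, **params) -> ET.Element:
        """REST-style call: encode parameters in the URL, GET it, parse the XML reply."""
        url = f"{base_url}?{urlencode(params)}"
        with urlopen(url, timeout=10) as resp:
            return ET.fromstring(resp.read())

    # Hypothetical usage against a made-up endpoint and schema:
    # root = rest_lookup("http://example.com/catalog/lookup", item_id="0596007647")
    # print(root.findtext("Title"), root.findtext("Price"))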

This same quest for simplicity can be seen in other "organic" web services. Google's recent release of Google Maps is a case in point. Google Maps' simple AJAX (Javascript and XML) interface was quickly decrypted by hackers, who then proceeded to remix the data into new services.

Mapping-related web services had been available for some time from GIS vendors such as ESRI as well as from MapQuest and Microsoft MapPoint. But Google Maps set the world on fire because of its simplicity. While experimenting with any of the formal vendor-supported web services required a formal contract between the parties, the way Google Maps was implemented left the data for the taking, and hackers soon found ways to creatively re-use that data.

There are several significant lessons here:

  1. Support lightweight programming models that allow for loosely coupled systems. The complexity of the corporate-sponsored web services stack is designed to enable tight coupling. While this is necessary in many cases, many of the most interesting applications can indeed remain loosely coupled, and even fragile. The Web 2.0 mindset is very different from the traditional IT mindset!
  2. Think syndication, not coordination. Simple web services, like RSS and REST-based web services, are about syndicating data outwards, not controlling what happens when it gets to the other end of the connection. This idea is fundamental to the internet itself, a reflection of what is known as the end-to-end principle.
  3. Design for "hackability" and remixability. Systems like the original web, RSS, and AJAX all have this in common: the barriers to re-use are extremely low. Much of the useful software is actually open source, but even when it isn't, there is little in the way of intellectual property protection. The web browser's "View Source" option made it possible for any user to copy any other user's web page; RSS was designed to empower the user to view the content he or she wants, when it's wanted, not at the behest of the information provider; the most successful web services are those that have been easiest to take in new directions unimagined by their creators. The phrase "some rights reserved," which was popularized by the Creative Commons to contrast with the more typical "all rights reserved," is a useful guidepost.

Innovation in Assembly

Lightweight business models are a natural concomitant of lightweight programming and lightweight connections. The Web 2.0 mindset is good at re-use. A new service like housingmaps.com was built simply by snapping together two existing services. Housingmaps.com doesn't have a business model (yet)--but for many small-scale services, Google AdSense (or perhaps Amazon associates fees, or both) provides the snap-in equivalent of a revenue model.

These examples provide an insight into another key web 2.0 principle, which we call "innovation in assembly." When commodity components are abundant, you can create value simply by assembling them in novel or effective ways. Much as the PC revolution provided many opportunities for innovation in assembly of commodity hardware, with companies like Dell making a science out of such assembly, thereby defeating companies whose business model required innovation in product development, we believe that Web 2.0 will provide opportunities for companies to beat the competition by getting better at harnessing and integrating services provided by others.

6. Software Above the Level of a Single Device

One other feature of Web 2.0 that deserves mention is the fact that it's no longer limited to the PC platform. In his parting advice to Microsoft, long time Microsoft developer Dave Stutz pointed out that "Useful software written above the level of the single device will command high margins for a long time to come."

Of course, any web application can be seen as software above the level of a single device. After all, even the simplest web application involves at least two computers: the one hosting the web server and the one hosting the browser. And as we've discussed, the development of the web as platform extends this idea to synthetic applications composed of services provided by multiple computers.

But as with many areas of Web 2.0, where the "2.0-ness" is not something new, but rather a fuller realization of the true potential of the web platform, this phrase gives us a key insight into how to design applications and services for the new platform.

To date, iTunes is the best exemplar of this principle. This application seamlessly reaches from the handheld device to a massive web back-end, with the PC acting as a local cache and control station. There have been many previous attempts to bring web content to portable devices, but the iPod/iTunes combination is one of the first such applications designed from the ground up to span multiple devices. TiVo is another good example.

iTunes and TiVo also demonstrate many of the other core principles of Web 2.0. They are not web applications per se, but they leverage the power of the web platform, making it a seamless, almost invisible part of their infrastructure. Data management is most clearly the heart of their offering. They are services, not packaged applications (although in the case of iTunes, it can be used as a packaged application, managing only the user's local data.) What's more, both TiVo and iTunes show some budding use of collective intelligence, although in each case, their experiments are at war with the IP lobby's. There's only a limited architecture of participation in iTunes, though the recent addition of podcasting changes that equation substantially.

This is one of the areas of Web 2.0 where we expect to see some of the greatest change, as more and more devices are connected to the new platform. What applications become possible when our phones and our cars are not consuming data but reporting it? Real time traffic monitoring, flash mobs, and citizen journalism are only a few of the early warning signs of the capabilities of the new platform.



7. Rich User Experiences

As early as Pei Wei's Viola browser in 1992, the web was being used to deliver "applets" and other kinds of active content within the web browser. Java's introduction in 1995 was framed around the delivery of such applets. JavaScript and then DHTML were introduced as lightweight ways to provide client side programmability and richer user experiences. Several years ago, Macromedia coined the term "Rich Internet Applications" (which has also been picked up by open source Flash competitor Laszlo Systems) to highlight the capabilities of Flash to deliver not just multimedia content but also GUI-style application experiences.

However, the potential of the web to deliver full scale applications didn't hit the mainstream till Google introduced Gmail, quickly followed by Google Maps, web based applications with rich user interfaces and PC-equivalent interactivity. The collection of technologies used by Google was christened AJAX, in a seminal essay by Jesse James Garrett of web design firm Adaptive Path. He wrote:

"Ajax isn't a technology. It's really several technologies, each flourishing in its own right, coming together in powerful new ways. Ajax incorporates:

Web 2.0 Design Patterns

In his book, A Pattern Language, Christopher Alexander prescribes a format for the concise description of the solution to architectural problems. He writes: "Each pattern describes a problem that occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice."

  1. The Long Tail
    Small sites make up the bulk of the internet's content; narrow niches make up the bulk of the internet's possible applications. Therefore: Leverage customer self-service and algorithmic data management to reach out to the entire web, to the edges and not just the center, to the long tail and not just the head.
  2. Data is the Next Intel Inside
    Applications are increasingly data-driven. Therefore: For competitive advantage, seek to own a unique, hard-to-recreate source of data.
  3. Users Add Value
    The key to competitive advantage in internet applications is the extent to which users add their own data to that which you provide. Therefore: Don't restrict your "architecture of participation" to software development. Involve your users both implicitly and explicitly in adding value to your application.
  4. Network Effects by Default
    Only a small percentage of users will go to the trouble of adding value to your application. Therefore: Set inclusive defaults for aggregating user data as a side-effect of their use of the application.
  5. Some Rights Reserved. Intellectual property protection limits re-use and prevents experimentation. Therefore: When benefits come from collective adoption, not private restriction, make sure that barriers to adoption are low. Follow existing standards, and use licenses with as few restrictions as possible. Design for "hackability" and "remixability."
  6. The Perpetual Beta
    When devices and programs are connected to the internet, applications are no longer software artifacts, they are ongoing services. Therefore: Don't package up new features into monolithic releases, but instead add them on a regular basis as part of the normal user experience. Engage your users as real-time testers, and instrument the service so that you know how people use the new features.
  7. Cooperate, Don't Control
    Web 2.0 applications are built of a network of cooperating data services. Therefore: Offer web services interfaces and content syndication, and re-use the data services of others. Support lightweight programming models that allow for loosely-coupled systems.
  8. Software Above the Level of a Single Device
    The PC is no longer the only access device for internet applications, and applications that are limited to a single device are less valuable than those that are connected. Therefore: Design your application from the get-go to integrate services across handheld devices, PCs, and internet servers.

AJAX is also a key component of Web 2.0 applications such as Flickr, now part of Yahoo!, 37signals' applications basecamp and backpack, as well as other Google applications such as Gmail and Orkut. We're entering an unprecedented period of user interface innovation, as web developers are finally able to build web applications as rich as local PC-based applications.

Interestingly, many of the capabilities now being explored have been around for many years. In the late '90s, both Microsoft and Netscape had a vision of the kind of capabilities that are now finally being realized, but their battle over the standards to be used made cross-browser applications difficult. It was only when Microsoft definitively won the browser wars, and there was a single de-facto browser standard to write to, that this kind of application became possible. And while Firefox has reintroduced competition to the browser market, at least so far we haven't seen the destructive competition over web standards that held back progress in the '90s.

We expect to see many new web applications over the next few years, both truly novel applications, and rich web reimplementations of PC applications. Every platform change to date has also created opportunities for a leadership change in the dominant applications of the previous platform.

Gmail has already provided some interesting innovations in email, combining the strengths of the web (accessible from anywhere, deep database competencies, searchability) with user interfaces that approach PC interfaces in usability. Meanwhile, other mail clients on the PC platform are nibbling away at the problem from the other end, adding IM and presence capabilities. How far are we from an integrated communications client combining the best of email, IM, and the cell phone, using VoIP to add voice capabilities to the rich capabilities of web applications? The race is on.

It's easy to see how Web 2.0 will also remake the address book. A Web 2.0-style address book would treat the local address book on the PC or phone merely as a cache of the contacts you've explicitly asked the system to remember. Meanwhile, a web-based synchronization agent, Gmail-style, would remember every message sent or received, every email address and every phone number used, and build social networking heuristics to decide which ones to offer up as alternatives when an answer wasn't found in the local cache. Lacking an answer there, the system would query the broader social network.

A Web 2.0 word processor would support wiki-style collaborative editing, not just standalone documents. But it would also support the rich formatting we've come to expect in PC-based word processors. Writely is a good example of such an application, although it hasn't yet gained wide traction.

Nor will the Web 2.0 revolution be limited to PC applications. Salesforce.com demonstrates how the web can be used to deliver software as a service, in enterprise scale applications such as CRM.

The competitive opportunity for new entrants is to fully embrace the potential of Web 2.0. Companies that succeed will create applications that learn from their users, using an architecture of participation to build a commanding advantage not just in the software interface, but in the richness of the shared data.

Core Competencies of Web 2.0 Companies

In exploring the seven principles above, we've highlighted some of the principal features of Web 2.0. Each of the examples we've explored demonstrates one or more of those key principles, but may miss others. Let's close, therefore, by summarizing what we believe to be the core competencies of Web 2.0 companies:

  • Services, not packaged software, with cost-effective scalability
  • Control over unique, hard-to-recreate data sources that get richer as more people use them
  • Trusting users as co-developers
  • Harnessing collective intelligence
  • Leveraging the long tail through customer self-service
  • Software above the level of a single device
  • Lightweight user interfaces, development models, AND business models

The next time a company claims that it's "Web 2.0," test their features against the list above. The more points they score, the more they are worthy of the name. Remember, though, that excellence in one area may be more telling than some small steps in all seven.

Tim O'Reilly
O’Reilly Media, Inc., tim@oreilly.com
President and CEO



[ Source: http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html?page=1 ]

Posted by 열라착한앙마


2006.11.26 23:00

welcome to the...

Web 2.0

validator (beta version ~2.7183)

Enter a URL to find out if the site is really Web 2.0:

The score for http://luckydevil.tistory.com is 8 out of 51

  • Is in public beta?  No
  • Uses python?  No
  • Uses inline AJAX ?  No
  • Uses the prefix "meta" or "micro"?  No
  • Is Shadows-aware ?  No
  • Uses Google Maps API?  No
  • Uses Cascading Style Sheets?  Yes!
  • Attempts to be XHTML Strict ?  No
  • Uses tags ?  Yes!
  • Mentions startup ?  No
  • Refers to mash-ups ?  No
  • Mentions Less is More ?  No
  • Has a Blogline blogroll ?  No
  • Appears to be non-empty ?  No
  • Appears to be web 3.0 ?  Yes!
  • Has favicon ?  Yes!
  • Refers to the Web 2.0 Validator's ruleset ?  No
  • Mentions an "architecture of participation"?  No
  • Appears to use AJAX ?  No
  • Mentions Dave Legg ?  No
  • Appears to be built using Ruby on Rails ?  No
  • Makes reference to Technorati ?  No
  • Refers to VCs ?  No
  • Has that goofy 'My Blog is Worth' link ?  No
  • Refers to Flickr ?  No
  • Mentions Ruby?  No
  • Mentions Nitro ?  No
  • Possibly contains bytes ?  Yes!
  • Links Slashdot and Digg ?  No
  • Mentions Cool Words ?  No
  • Mentions The Long Tail ?  No
  • Mentions Ruby ?  No
  • Creative Commons license ?  No
  • Has prototype.js ?  No
  • Refers to podcasting ?  Yes!
  • Appears to use MonoRail ?  No
  • Mentions RDF and the Semantic Web?  No
  • Actually mentions Web 2.0 ?  No
  • Refers to Rocketboom ?  No
  • Use Catalyst ?  No
  • Uses Semantic Markup?  Yes!
  • Refers to del.icio.us ?  No
  • Refers to web2.0validator ?  No
  • Uses microformats ?  Yes!
  • References isometric.sixsided.org?  No
  • Validates as XHTML 1.1 ?  No
  • Appears to over-punctuate ?  No
  • References Firefox?  No
  • Mentions 30 Second Rule and Web 2.0 ?  No
  • Appears to have Adsense ?  No
  • Uses the "blink" tag?  No



    --------------------------------------------------------

    Well, this is really a check of Tistory as a whole rather than of my blog specifically...

    Anyway, the result: only 8 "Yes!" out of 51 items!!

    Of course, the criteria for what counts as Web 2.0 aren't exactly well defined..


    Still, why are there so many terms in that list I don't even know..


    If you want to run the check yourself, head over to~~

    http://web2.0validator.com/
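
As an aside, here is a toy sketch of what a checker like this presumably does under the hood: fetch the page's HTML and score it against a set of pattern rules. The handful of rules below is a made-up subset for illustration; the real site's 51-item ruleset is its own.

    import re
    from urllib.request import urlopen

    # A made-up subset of "Web 2.0-ness" rules: name -> regex applied to the raw HTML.
    RULES = {
        "Uses Cascading Style Sheets": r"<link[^>]+stylesheet|<style",
        "Has favicon": r"rel=[\"']?(shortcut )?icon",
        "Appears to use AJAX": r"XMLHttpRequest",
        "Has prototype.js": r"prototype\.js",
        "Refers to Flickr": r"flickr",
        "Actually mentions Web 2.0": r"web\s*2\.0",
    }

    def score(url: str):
        html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
        results = {name: bool(re.search(pattern, html, re.IGNORECASE))
                   for name, pattern in RULES.items()}
        return sum(results.values()), results

    if __name__ == "__main__":
        total, detail = score("http://luckydevil.tistory.com")  # this URL may no longer resolve
        print(f"score: {total} out of {len(RULES)}")
        for name, hit in detail.items():
            print(f"  {name}: {'Yes!' if hit else 'No'}")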

Posted by 열라착한앙마

Leave a comment

  1. 푸른밤의꿈 2006.11.27 12:39

    Mine would come out exactly the same, right??? haha