
Meta-markets and consumers' networks


Consumers' interaction and networking

Curien N., Fauchart E., Lainé J., Laffond G., Lesourne J., Moreau F. [2001], "Forums de consommation sur Internet : un modèle évolutionniste", Revue Economique, special issue "Economie de l'Internet", October 2001.

The purpose of the model is to analyze the genesis and evolution of consumption-oriented chat rooms on the Web. We study an endogenous consumption dynamics in which individuals make their buying decisions on the basis of threefold information: (i) their own past consumption experience (private information); (ii) an index of best sales (public information); (iii) advice available in evolving Internet chat rooms (collective information). We discuss the properties of such a system with respect to self-organization: under which conditions do chat rooms allow individual experiences to converge and aggregate into genuine collective knowledge, leading to the emergence of a self-organized pattern of consumption?
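A minimal sketch of how such a threefold-information dynamic can be simulated follows. The weights on the three information sources, the fixed chat-room assignment, and the update rule are illustrative assumptions made for the sketch, not the specification used by Curien et al. [2001].

```python
import random

K, N, T = 5, 200, 300                      # products, agents, periods
quality = [random.random() for _ in range(K)]          # latent product quality

experience = [[0.5] * K for _ in range(N)]             # private information
rooms = [[i for i in range(N) if i % K == r] for r in range(K)]  # chat rooms
sales = [1] * K                                        # public best-sales index

def choose(agent):
    """Score each product by a weighted mix of private, public and
    collective information, then pick the best (with some exploration)."""
    if random.random() < 0.05:                         # occasional experimentation
        return random.randrange(K)
    total = sum(sales)
    members = rooms[agent % K]
    scores = []
    for k in range(K):
        private = experience[agent][k]
        public = sales[k] / total
        collective = sum(experience[j][k] for j in members) / len(members)
        scores.append(0.5 * private + 0.2 * public + 0.3 * collective)
    return scores.index(max(scores))

for _ in range(T):
    for i in range(N):
        k = choose(i)
        payoff = quality[k] + random.gauss(0, 0.1)     # noisy consumption experience
        experience[i][k] = 0.9 * experience[i][k] + 0.1 * payoff
        sales[k] += 1

total = sum(sales)
print("final market shares:", [round(s / total, 2) for s in sales])
```

Running the sketch makes the self-organization question concrete: with these weights the population tends to lock onto high-quality products, while increasing the weight on the public best-sales index makes herding on an inferior best-seller more likely.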

 

Cowan R., Cowan W., Swann P. [1997], "A Model of Demand with Interactions between Consumers", International Journal of Industrial Organization, 15(6): 711-732.

Moukas A., Guttman R., Zacharia G., Maes P. [1999], "Agent-mediated Electronic Commerce: An MIT Media Laboratory Perspective", MIT Working Paper (http://ecommerce.mit.edu/).

Urban G.L., Sultan F., Qualls W. [1999], "Design and Evaluation of Trust Based Advisor on the Internet", MIT Working Paper, http://ecommerce.mit.edu/forum/papers/ERF141.pdf

Singh M.P., Yu B., Venkatraman M. [2001], "Community-Based Service Location: How Virtual Communities and Software Agents Can Be Used to Provide and Locate Trustworthy Services on the Internet", Communications of the ACM, 44(4), April 2001.

  • The social network stabilizes at an improved quality. The agents find neighbors with whom they can stay as long as their principals’ interests don’t change.
  • Giving and taking referrals has a significant payoff. When referrals are given, the quality of the system is higher than otherwise.
  • Giving consideration to others’ sociability improves the quality of the social network, but an overemphasis on sociability (at the cost of expertise) can hurt.
  • A new principal added to a stabilized social network will drift toward neighbors from which it obtains improved quality. (A toy re-implementation of this referral dynamic is sketched below.)
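The referral dynamic summarized in these findings can be illustrated with a toy simulation: agents query their most-trusted neighbor, follow at most one referral, and rewire toward providers that deliver good service. The rewiring rule, noise level, and neighborhood size below are assumptions made for the sketch, not the algorithm of Singh et al.

```python
import random

N, DEGREE, T = 30, 3, 200
expertise = [random.random() for _ in range(N)]      # true quality of each agent
neighbors = {i: random.sample([j for j in range(N) if j != i], DEGREE)
             for i in range(N)}
perceived = {(i, j): 0.5 for i in range(N) for j in range(N) if i != j}

def step(i):
    """Agent i queries its most-trusted neighbor, follows at most one
    referral, updates its perception, and rewires toward good providers."""
    j = max(neighbors[i], key=lambda n: perceived[(i, n)])
    ref = max(neighbors[j], key=lambda n: perceived[(j, n)])     # j's referral
    provider = ref if ref != i else j
    signal = expertise[provider] + random.gauss(0, 0.05)         # noisy outcome
    perceived[(i, provider)] = 0.8 * perceived[(i, provider)] + 0.2 * signal
    if provider not in neighbors[i]:                 # adopt provider, drop worst
        worst = min(neighbors[i], key=lambda n: perceived[(i, n)])
        neighbors[i].remove(worst)
        neighbors[i].append(provider)

for _ in range(T):
    for i in range(N):
        step(i)

avg = sum(expertise[j] for i in range(N) for j in neighbors[i]) / (DEGREE * N)
print("mean expertise of chosen neighbors:", round(avg, 3))
print("population mean expertise:", round(sum(expertise) / N, 3))
```

The first printed figure typically exceeds the second, which is the stabilization-at-improved-quality effect the findings describe.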

 

Ellickson B., Grodal B., Scotchmer S., Zame W. [1999], "Clubs and the Market", Econometrica, 67 (1999), pp. 1185-1218.

http://socrates.berkeley.edu/~scotch/

This paper defines a general equilibrium model with exchange and club formation. Agents trade multiple private goods widely in the market, can belong to several clubs, and care about the characteristics of the other members of their clubs. The space of agents is a continuum, but clubs are finite. It is shown that (i) competitive equilibria exist, and (ii) the core coincides with the set of equilibrium states. The central subtlety is in modeling club memberships and expressing the notion that membership choices are consistent across the population. JEL Classifications: D2, D5, H4. Keywords: Clubs, Continuum Models, Public Goods, Core, Club Equilibrium.

 

Yong Seog Kim, W. Nick Street, Gary J. Russell, Filippo Menczer (Management Sciences and Marketing Departments, University of Iowa)

Kim Y.S., Street N., Russell G., Menczer F. [2000], "Customer Targeting: A Neural Network Approach Guided by Genetic Algorithms", submitted to Management Science.

One of the key problems in database marketing is the identification and profiling of households who are most likely to be interested in a particular product or service. Principal component analysis (PCA) of customer background information followed by logistic regression analysis of response behavior is commonly used by database marketers. In this paper, we propose a new approach that uses artificial neural networks (ANNs) guided by genetic algorithms (GAs) to target households. We show that the resulting selection rule is more accurate and more parsimonious than the PCA/logit rule when the manager has a clear decision criterion. Under vague decision criteria, the new procedure loses its advantage in interpretability, but is still more accurate than PCA/logit in targeting households.
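The wrapper idea the paper builds on - an evolutionary search over feature subsets, with a trainable model evaluated on each candidate subset - can be sketched as follows. For brevity the inner model here is a single logistic unit rather than the paper's neural network, and the synthetic data, parsimony penalty, and GA parameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "household" data: 12 background variables, 4 of which matter.
X = rng.normal(size=(600, 12))
true_w = np.zeros(12)
true_w[[1, 4, 7, 9]] = [1.5, -2.0, 1.0, 2.5]
y = (X @ true_w + rng.normal(0, 0.5, 600) > 0).astype(float)

def fitness(mask):
    """Train a small logistic model on the selected features; score it by
    accuracy minus a parsimony penalty on the number of features used."""
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return 0.0
    Xs, w = X[:, cols], np.zeros(cols.size)
    for _ in range(200):                        # plain gradient ascent
        p = 1 / (1 + np.exp(-Xs @ w))
        w += 0.1 * Xs.T @ (y - p) / len(y)
    acc = ((Xs @ w > 0) == y).mean()
    return acc - 0.01 * cols.size

pop = rng.integers(0, 2, size=(30, 12))         # population of feature masks
for gen in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]     # keep the 10 fittest masks
    kids = []
    for _ in range(20):                         # crossover + mutation
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, 12)
        child = np.concatenate([a[:cut], b[cut:]])
        child ^= (rng.random(12) < 0.05)        # bit-flip mutation
        kids.append(child)
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```

The penalty on mask size in the fitness function is what pushes the search toward the parsimonious selection rules the abstract emphasizes.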

 

Krishna Vijay, John Morgan [2000], "A Model of Expertise", Working Paper, http://www.princeton.edu/~rjmorgan/working.htm

We study a model in which perfectly informed experts offer advice to a decision maker whose actions affect the welfare of all. Experts are biased and thus may wish to pull the decision maker in different directions and to different degrees. When the decision maker consults only a single expert, the expert withholds substantial information from the decision maker. We ask whether this situation is improved by having the decision maker sequentially consult two experts. We first show that there is no perfect Bayesian equilibrium in which full revelation occurs. When both experts are biased in the same direction, it is never beneficial to consult both. In contrast, when experts are biased in opposite directions, it is always beneficial to consult both. Indeed, in this case full revelation may be induced in an extended debate by introducing the possibility of rebuttal.

 

Blinder Alan S., John Morgan [2000], "Are Two Heads Better Than One?: An Experimental Analysis of Group vs. Individual Decisionmaking", NBER Working Paper 7909, http://www.nber.org/papers/w7909

Two laboratory experiments (urn problem, monetary policy) were run to test the commonly-believed hypothesis that groups make decisions more slowly than individuals do. Surprisingly, this turns out not to be true. Furthermore, there is no significant difference in the decision lag when group decisions are made by majority rule versus when they are made under a unanimity requirement. In addition, group decisions are on average superior to individual decisions.


Communities of interest and cyber-conversations

Barry Wellman

Wellman B., Gulia M. [1999], "Net Surfers Don't Ride Alone: Virtual Community as Community", in Networks in the Global Village, ed. Barry Wellman, Boulder, CO: Westview Press, 1999.

Wellman B. [1999], Networks in the Global Village, Boulder, CO: Westview Press, 1999.

Wellman B. [1999], "The Network Community: An Introduction", in Networks in the Global Village, ed. Barry Wellman, Boulder, CO: Westview Press, 1999.

Hillery G. [1955], "Definitions of Community: Areas of Agreement", Rural Sociology 20, p.111-122

Community networks:
  • Community ties are narrow, specialized relationships, not broadly supportive ties.
  • People are not wrapped up in traditional densely knit, tightly bounded communities but are maneuvering in sparsely knit, loosely bounded, frequently changing networks.
  • Communities have moved out of neighborhoods to become dispersed networks that continue to be supportive and sociable.
  • Private intimacy has replaced public sociability. (North Americans go out to be private - in streets where no one greets one another - but they stay inside to be public - to meet their friends and relatives.)
  • Communities have become domesticated and feminized.

 

Barry Wellman and Stephanie Potter

Wellman B., Potter S. [1999], "The Elements of Personal Communities", in Networks in the Global Village, ed. Barry Wellman, Boulder, CO: Westview Press, 1999.

Canada: an in-person survey conducted in 1968 with a random sample of 845 adult residents of East York, covering relationships with a total of 3,930 network members.

 

Wellman B., Boase J., Chen W. [2002], "The Networked Nature of Community Online and Offline", IT & Society 1 (1), Summer, 2002: 151-165.

Abstract : Communities started changing from groups to networks well before the advent of the Internet. Initially, people believed that industrialization and bureaucratization would dissolve community groups and leave only isolated, alienated individuals. Then scholars discovered that communities continued, but more as sparsely-knit, spatially-dispersed social networks rather than as densely-knit, village-like local groups. A similar debate has developed about the impact of the Internet on community. Some fear that it will isolate people from face-to-face interactions. Others extol the Internet's ability to support far-flung communities of shared interest.

Evidence to address this debate about the impact of the Internet on community is thundering in. Three studies done at the NetLab are concomitant with general findings, both in North America and worldwide, that rather than weakening community, the Internet adds to existing face-to-face and telephone contact. Rather than increasing or destroying community, the Internet can best be seen as transforming community such that it becomes integrated into the rhythms of daily life, with life online integrated with offline activities.


 

Eric von Hippel's research

In past research, my students and I have explored the impact of profit expectations on the "functional" sources of innovation - user, manufacturer, supplier and other.

In current research, we are exploring how "sticky" information affects the sources of innovation. This work is built upon the finding that much information about user needs and about supplier solutions is very difficult to transfer from the place where it has been generated - is "sticky." For example, it is very difficult for a user to accurately and completely convey information about novel product or service needs to a supplier.

The impact of information stickiness, we find, is that product and service development tends to be driven to the site of the sticky information. (For example, when users really need a product and cannot say what they want (in part because they are actually interactively developing the need and the solution at the same time) they will tend to find it easier to develop the product themselves than to accurately and fully tell the supplier what they want.) 

http://web.mit.edu/evhippel/www/Publications.htm

Lakhani K., Hippel E. von [2000], "How Open Source software works: “Free” user-to-user assistance", MIT Sloan School of Management, Working Paper #4117

Open source software products represent the leading edge of innovation development and diffusion systems conducted for and by users themselves – no manufacturer required. Research into this phenomenon has so far focused on how the major tasks of software development are organized and motivated. But a complete user system requires the execution of "mundane but necessary" tasks as well. In this paper, we explore how the mundane but necessary task of field support for open source Apache server software is organized, and how and why users are motivated to participate in providing it. We find that the present system works well and that information providers are largely rewarded by benefits directly received from a related task. We also find, however, that the present help system is by and for only a few – and that changes would be needed if and as volume increases. General lessons for user-based innovation systems include the clear willingness of users to openly reveal their proprietary information. This bodes well for the efficiency of user-only innovation systems, and is rational behavior if the information has low competitive value and/or if information providers think that other users know the same thing they do, and would reveal the information if they did not.

Executive Summary

  • (1) About 25% of questions posted on CIWS-U do not get any answers, either by public posting or private email. The proportion of unanswered questions seems fairly constant over time. On surface inspection, we cannot see any difference in clarity or quality between the questions being answered and those not being answered.
  • (2) Questions that did get answered were generally answered quite quickly (about 50% by the next day).
  • (3) Regarding the value of answers received: the good news is (see the data below) that most people feel they are getting helped. The bad news is that 39% of "other seekers" (generally people with less experience) say the answers they received "did not solve their problem."
  • (4) Most questions on CIWS-U are answered by a few "frequent information providers." The top 100 providers are responsible for about 50% of the answers posted between 1995-9. Mark Slemko was the top hero, with about 2,500 answers posted in the period 1995-9.
  • (5) In the CIWS-U "system", questions and answers are matched up by information PROVIDERS. They do this by finding questions they want to answer while reading or scanning CIWS-U. Their reading cost is high (many hours per year), but should not be charged to question-answering: information providers say they read CIWS-U primarily to learn rather than to find questions to answer.
  • Issue this raises for Apache: will information providers continue to answer if the amount they learn by trolling CIWS-U drops over time? A comment by Eric Raymond (The Cathedral and the Bazaar) on his experience with help from users of his open source program, fetchmail, is interesting in this regard: "Actually, the list [of fetchmail beta-testers] is beginning to lose members from its high of close to 300 for an interesting reason. Several people have asked me to unsubscribe them because fetchmail is working so well for them that they no longer need to see the list traffic! Perhaps this is part of the normal life-cycle of a mature, bazaar-style project." (Raymond 1999, pp. 46-7)
  • (6) After information providers have found a question they want to answer, the cost of actually answering is quite low. Info providers report that it takes them only 1-5 minutes to answer a question, because they typically only post what they already know (in other words, they do not newly solve problems or look things up to help information seekers). (Frequent providers are the more expert ones and generally create an answer in a minute or less.)
  • (7) A final puzzle: the number of questions posted on CIWS-U per month has not increased nearly as rapidly as the number of sites using Apache. (The average number of questions per month has only gone from about 200 four years ago to about 400 now.) Has Apache gotten easier to use over time? If not, where are most people getting the help they need?

 

Hippel E. von [1998], "Economics of Product Development by Users: Impact of 'Sticky' Local Information", Management Science, 44(5), May, pp. 629-644.

Those who solve more of a given type of problem tend to get better at it - which suggests that problems of any given type should be brought to specialists for a solution. However, in this paper we argue that agency-related costs and information transfer costs will tend to drive the locus of problem-solving in the opposite direction - away from problem-solving by specialist suppliers, and towards those who directly benefit from a solution, and who have difficult-to-transfer local information about a particular application being solved, such as the direct users of a product or service. We examine the actual location of design activities in two fields in which custom products are produced by "mass-customization" methods: application-specific integrated circuits (ASICs) and computer telephony integration systems (CTI). In both, we find that users rather than suppliers are the actual designers of the application-specific portion of the product types examined. We offer anecdotal evidence that the pattern of user-based customization we have documented in these two fields is in fact quite general, and we discuss implications for research and practice.

 


 

Tyre M.J., Hippel E. von [1997], "The Situated Nature of Adaptive Learning in Organizations", Organization Science, 8(1), January-February, pp. 71-83.

This paper describes the nature of adaptive learning processes in organizations. We examine the process of problem solving around new manufacturing equipment following field tests and early factory use. We find that adaptation is a situated process, in that different organizational settings (1) contain different kinds of clues about the underlying issues, (2) expose learners to different ideas, and (3) offer different resources for generating and analyzing information. Consequently, actors frequently must move in an iterative fashion between different organizational settings before they can identify the causal underpinnings of a problem and select suitable solutions. This finding adds an additional dimension to the literature on adaptive learning processes, which currently focuses almost exclusively on social knowledge exchange (via discussion, argument, and collaborative "sensemaking"). We propose that theories of adaptive learning should also take into account how actors (both collectively and individually) use their surroundings to understand events, and how these surroundings affect the social interactions that unfold.


Marc Smith's research

http://research.microsoft.com/~masmith/

Marc Smith is a research sociologist in the Collaborative and Multimedia Systems Group. His work focuses on the research and design of social cyberspaces, in particular the emergence of social organizations like communities in online environments and the resources groups need in order to cooperate productively.

Fiore, Andrew, Scott Lee Tiernan, and Marc Smith [2001], "Observed Behavior and Perceived Value of Authors in Usenet Newsgroups: Bridging the Gap", Working paper, 2001.

Fiore, Andrew and Marc Smith [2001], "Tree Map Visualizations of Newsgroups", Working paper, 2001.

Smith, Marc and Andrew Fiore [2001], "Visualization components for persistent conversations", in ACM SIG CHI 2001.

Smith, Marc [2000], "Some social implications of ubiquitous wireless networks", ACM Mobile Computing and Communications Review, April 2000, Vol.4 No. 2

Smith, Marc, JJ Cadiz, Byron Burkhalter [2000], "Conversation Trees and Threaded Chats", CSCW 2000.

Smith, Marc and Peter Kollock [1999], Communities in Cyberspace: Perspectives on New Forms of Social Organization, London, Routledge Press, 1999.

Smith, Marc [1999], "Invisible Crowds in Cyberspace: Measuring and Mapping the Social Structure of USENET", in Communities in Cyberspace, edited by Marc Smith and Peter Kollock. London: Routledge Press, 1999.

Smith, Marc [1992], "Voices from the WELL: The Logic of the Virtual Commons", Unpublished manuscript, 1992.

Kollock, Peter, and Marc Smith [1999], "Introduction: Communities in Cyberspace", pp. 3-25 in Communities in Cyberspace, edited by Marc Smith and Peter Kollock. London: Routledge Press, 1999.

Kollock, Peter and Marc Smith [1996], "Managing the Virtual Commons: Cooperation and Conflict in Computer Communities", in Computer-Mediated Communication, edited by S. Herring. Amsterdam: John Benjamins, 1996.

Smith, Marc, Shelly Farnham, Steven Drucker [2000], "The Social Life of Small Graphical Chats", in ACM SIG CHI 2000.

Vronay, Dave, Marc Smith, Steven Drucker [1999], "Chat as a Streaming Media Type", in ACM UIST 1999.

 

Netscan website: http://netscan.research.microsoft.com/ (newsgroup tracker)


Cohen W.W., Fan W. [2000], "Web-Collaborative Filtering: Recommending Music by Crawling The Web", 9th World Wide Web Conference, Amsterdam, May 15 - 19, 2000.

Abstract : We show that it is possible to collect data that is useful for collaborative filtering (CF) using an autonomous Web spider. In CF, entities are recommended to a new user based on the stated preferences of other, similar users. We describe a CF spider that collects from the Web lists of semantically related entities. These lists can then be used by existing CF algorithms by encoding them as "pseudo-users". Importantly, the spider can collect useful data without pre-programmed knowledge about the format of particular pages or particular sites. Instead, the CF spider uses commercial Web-search engines to find pages likely to contain lists in the domain of interest, and then applies previously-proposed heuristics [Cohen, 1999] to extract lists from these pages. We show that data collected by this spider is nearly as effective for CF as data collected from real users, and more effective than data collected by two plausible hand-programmed spiders. In some cases, autonomously spidered data can also be combined with actual user data to improve performance.
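The pseudo-user encoding is straightforward to illustrate: each spidered list of related entities is treated as a synthetic user who "likes" exactly the items on that list, after which any standard CF algorithm applies. The toy lists and the cosine-similarity recommender below are illustrative assumptions, not the paper's exact data or algorithm.

```python
import math
from collections import defaultdict

# Pseudo-users: each spidered list of related artists is treated as one
# "user" who likes exactly the entities on that list (illustrative data).
pseudo_users = [
    ["miles davis", "john coltrane", "bill evans"],
    ["john coltrane", "charles mingus", "thelonious monk"],
    ["nirvana", "pearl jam", "soundgarden"],
    ["pearl jam", "soundgarden", "alice in chains"],
]

likes = defaultdict(set)                 # entity -> set of pseudo-user ids
for uid, lst in enumerate(pseudo_users):
    for item in lst:
        likes[item].add(uid)

def similarity(a, b):
    """Cosine similarity between two entities' pseudo-user sets."""
    inter = len(likes[a] & likes[b])
    return inter / math.sqrt(len(likes[a]) * len(likes[b])) if inter else 0.0

def recommend(profile, k=3):
    """Rank unseen entities by summed similarity to the user's profile."""
    scores = {c: sum(similarity(c, p) for p in profile)
              for c in likes if c not in profile}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(["john coltrane"]))      # jazz neighbors, not grunge
```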

 

Staab S., Angele J., Decker S., Erdmann M., Hotho A., Maedche A., Schnurr H.P., Studer R., Sure Y. [2000], "Semantic Community Web Portals", 9th International World Wide Web Conference, Amsterdam, May 15 - 19, 2000.

http://www.aifb.uni-karlsruhe.de/WBS/ ; http://www.ontoprise.de

Abstract : Community web portals serve the information needs of particular communities on the web. We here discuss how a comprehensive and flexible strategy for building and maintaining a high-value community web portal has been conceived and implemented. The strategy includes collaborative information provisioning by the community members. It is based on an ontology as a semantic backbone for accessing information on the portal, for contributing information, as well as for developing and maintaining the portal. We have also implemented a set of ontology-based tools that have facilitated the construction of our showcase - the community web portal of the knowledge acquisition community. Keywords: Web Portal; Collaboration; Ontology; Web Site Management; Information Integration.
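The role of the ontology as a semantic backbone can be sketched in miniature: documents are annotated with concepts, and a query for a concept retrieves documents annotated with it or with any of its subconcepts. The toy ontology and the plain-dictionary representation below are assumptions made for the sketch; the actual system is built on a full ontology language with an inference engine.

```python
# A minimal sketch of ontology-backed access to portal content (illustrative;
# the paper's system uses a proper ontology language and inference engine).
ontology = {                      # concept -> list of parent concepts
    "Person": [], "Researcher": ["Person"], "Student": ["Person"],
    "Publication": [], "Project": [],
}

documents = [
    {"title": "KA community page", "about": ["Researcher", "Project"]},
    {"title": "PhD thesis list", "about": ["Student", "Publication"]},
]

def subsumed_by(concept, query):
    """True if `concept` equals `query` or inherits from it."""
    if concept == query:
        return True
    return any(subsumed_by(parent, query) for parent in ontology[concept])

def find(query):
    """Semantic retrieval: match documents annotated with any subconcept."""
    return [d["title"] for d in documents
            if any(subsumed_by(c, query) for c in d["about"])]

print(find("Person"))   # both documents: Researcher and Student are Persons
```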

 

Velkovska Julia [2002], "L'intimité anonyme dans les conversations électroniques sur les webchats", Sociologie du Travail, Volume 44, Issue 2, April-June 2002.

The anonymous intimacy of electronic conversations in Webchats

Centre d'étude des mouvements sociaux, EHESS, and France Télécom Recherche et Développement, Laboratoire UCE, 38-40, rue du Général Leclerc, 92794 Issy-les-Moulineaux, France


Abstract : Using ethnomethodology and phenomenological sociology, this paper analyses identities and relations arising through real time written interactions in webchats. How do users constitute identities based only on electronic writing? How can relations form, develop and last in this context? What is specific about such electronic relations? Two kinds of data are analysed: interviews with active webchat users and discussions recorded in the chats. First of all, the paper establishes the link between the space-time framework of the communication device and the form of interactions taking place. Then, the typifications used by participants to construct their identities and relations are examined. In conclusion, the relations in the chats are characterized by a tension between intimacy and anonymity.

 

Bitouzet C., Soudoplatoff S. [2000], "Les communautés d'intérêt à l'heure d'internet ou les barbares contre les rentiers", Revue Française de Marketing, N° 177/178, pp. 119-137.


Games and virtual communities

"Internet, jeu et socialisation" workshop, 5 and 6 December, at GET

Olivier Galibert (GRESEC : Groupe de Recherche sur les enjeux de la communication, Université de Bretagne Sud)

Galibert O. [2002], "Vers une rationalisation marchande des 'communautés virtuelles'", presentation at the "Internet, jeu et socialisation" workshop (5-6 December, GET).

 

The Hacker Ethic and the Spirit of the Information Age (Random House, 2001) by Pekka Himanen with Linus Torvalds and Manuel Castells

Himanen P. [2001], The Hacker Ethic and the Spirit of the Information Age, Random House, 2001.

Nearly a century ago, Max Weber's The Protestant Ethic and the Spirit of Capitalism articulated the animating spirit of the industrial age, the Protestant ethic. Now, Pekka Himanen - together with Linus Torvalds and Manuel Castells - articulates how hackers (in the original meaning of the word, hackers are enthusiastic computer programmers who share their work with others; they are not computer criminals) represent a new, opposing ethos for the information age. Underlying hackers' technical creations - such as the Internet and the personal computer, which have become symbols of our time - are the hacker values that produced them and that challenge us all. These values promote passionate and freely rhythmed work; the belief that individuals can create great things by joining forces in imaginative ways; and the need to maintain our existing ethical ideals, such as privacy and equality, in our new, increasingly technologized society. The Hacker Ethic takes us on a journey through fundamental questions about life in the information age - a trip of constant surprises, after which our time and our lives can be seen from unexpected perspectives.

Himanen P. [2001], "A Brief History of Computer Hackerism", Working Paper. http://www.hackerethic.org/writings/hackerhistory.shtml

 

Cerf Vinton [1994], "Guidelines for Conduct on and Use of Internet", Internet Society http://www.isoc.org/internet/conduct/cerf-Aug-draft.shtml

Licklider J.C.R. [1960], "Man-Computer Symbiosis", IRE (IEEE) Transactions on Human Factors in Electronics, volume HFE-1, pages 4–11, March 1960.

http://memex.org/licklider.pdf

Summary : Man-computer symbiosis is an expected development in cooperative interaction between men and electronic computers. It will involve very close coupling between the human and the electronic members of the partnership. The main aims are 1) to let computers facilitate formulative thinking as they now facilitate the solution of formulated problems, and 2) to enable men and computers to cooperate in making decisions and controlling complex situations without inflexible dependence on predetermined programs. In the anticipated symbiotic partnership, men will set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking. Preliminary analyses indicate that the symbiotic partnership will perform intellectual operations much more effectively than man alone can perform them. Prerequisites for the achievement of the effective, cooperative association include developments in computer time sharing, in memory components, in memory organization, in programming languages, and in input and output equipment.

Licklider J.C.R., Taylor R. [1968], "The Computer as a Communications Device", Science and Technology, April 1968.

But let us be optimistic. What will on-line interactive communities be like? In most fields they will consist of geographically separated members, sometimes grouped in small clusters and sometimes working individually. They will be communities not of common location, but of common interest. In each field, the overall community of interest will be large enough to support a comprehensive system of field-oriented programs and data. In each geographical sector, the total number of users - summed over all the fields of interest - will be large enough to support extensive general-purpose information processing and storage facilities. All of these will be interconnected by telecommunications channels. The whole will constitute a labile network of networks - ever-changing in both content and configuration. What will go on inside? Eventually, every informational transaction of sufficient consequence to warrant the cost. Each secretary's typewriter, each data-gathering instrument, conceivably each dictation microphone, will feed into the network. You will not send a letter or a telegram; you will simply identify the people whose files should be linked to yours and the parts to which they should be linked - and perhaps specify a coefficient of urgency. You will seldom make a telephone call; you will ask the network to link your consoles together. You will seldom make a purely business trip, because linking consoles will be so much more efficient. When you do visit another person with the object of intellectual communication, you and he will sit at a two-place console and interact as much through it as face to face. If our extrapolation from Doug Engelbart's meeting proves correct, you will spend much more time in computer-facilitated teleconferences and much less en route to meetings.

 

Hagel J., Armstrong A.G. [1997], Net Gain: Expanding Markets Through Virtual Communities, Harvard Business School Press.

Building relationships with customers has been a buzz phrase in many business circles for years. Now John Hagel and Arthur Armstrong declare that's not enough. They make a strong case that business success in the very near future will depend on using the Internet to build not just relationships, but communities. The payoff, they maintain, will be phenomenal customer loyalty and high profits. But, they warn, this race will definitely go to the swift. Here's a cyberspace book that could make your business future. Not everyone agrees with Hagel and Armstrong, but with stakes so high they deserve a serious reading.

 

Michel Marcoccia

Marcoccia M. [2001], "La communauté virtuelle : une communauté en paroles", in Actes du 3e Colloque International sur les Usages et Services de Télécommunications: e-usages, 12-14 June 2001, Paris.

Marcoccia M. [2003], "On-line Polylogues: conversation structure and participation framework in Internet Newsgroups", Journal of Pragmatics (special issue on Polylogues, ed. C. Kerbrat-Orecchioni)

 

Howard Rheingold

http://www.rheingold.com/

Rheingold H. [1993], The Virtual Community - Homesteading on the Electronic Frontier, Addison Wesley, New York.

Virtual communities are social aggregations that emerge from the Net when enough people carry on those public discussions long enough, with sufficient human feeling, to form webs of personal relationships in cyberspace.

Cyberspace, originally a term from William Gibson's science-fiction novel Neuromancer, is the name some people use for the conceptual space where words, human relationships, data, wealth, and power are manifested by people using CMC technology.

In chapter 5: Multi-User Dungeons and Alternate Identities

Narrative is the stuff MUDworlds are made of. Everyone and everything and every place has a story. Every object in a MUD, from your character's identity to the chair your character is sitting in, has a written description that is revealed when you choose to look at the object. The story is known in MUDspeke as "the description." If you have the authorization to do so, you could create a small brown mouse or purple mountain range or whatever else words can describe. Although the MUD worlds are fantasies, with no more tangible reality than the settings and characters in a novel or a soap opera, the people I've met in real life who live in MUDlands testify passionately that the feelings they have about their characters and worlds are real to them, and often quite intense.

In a conversation with the author in 1992, Richard Bartle said:

Losing your persona in a game is absolutely terrible. It's the worst thing that can happen to you and people really get put up about it. They usually say they're gutted. "Gutted" is the word players use because it's about the only one that describes about how awful it is. It's not as if "Oh dear, I've lost my persona" in the same way you may say "I've lost my shoe." It's not even "Oh dear, I've lost my persona" in the same way as "I've lost my pet hamster." It's more as "Oh dear, I've just died. That's me they've just killed!" It's not "Oh, I've lost all that work and all that time and effort." It's "I've just died, this is terrible! Oh my God, I'm dead! Empty!"

 

Rheingold H. [2002], Smart Mobs: The Next Social Revolution, New York. http://www.smartmobs.com/index.html

Smart mobs emerge when communication and computing technologies amplify human talents for cooperation. The impacts of smart mob technology already appear to be both beneficial and destructive, used by some of its earliest adopters to support democracy and by others to coordinate terrorist attacks. The technologies that are beginning to make smart mobs possible are mobile communication devices and pervasive computing - inexpensive microprocessors embedded in everyday objects and environments. Already, governments have fallen, youth subcultures have blossomed from Asia to Scandinavia, new industries have been born and older industries have launched furious counterattacks.

Street demonstrators in the 1999 anti-WTO protests used dynamically updated websites, cell-phones, and "swarming" tactics in the "battle of Seattle." A million Filipinos toppled President Estrada through public demonstrations organized through salvos of text messages.

The pieces of the puzzle are all around us now, but haven't joined together yet. The radio chips designed to replace barcodes on manufactured objects are part of it. Wireless Internet nodes in cafes, hotels, and neighborhoods are part of it. Millions of people who lend their computers to the search for extraterrestrial intelligence are part of it. The way buyers and sellers rate each other on the Internet auction site eBay is part of it. Research by biologists, sociologists, and economists into the nature of cooperation offers explanatory frameworks. At least one key global business question is part of it - why is the Japanese company DoCoMo profiting from enhanced wireless Internet services while US and European mobile telephony operators struggle to avoid failure?

The people who make up smart mobs cooperate in ways never before possible because they carry devices that possess both communication and computing capabilities. Their mobile devices connect them with other information devices in the environment as well as with other people's telephones. Dirt-cheap microprocessors embedded in everything from box tops to shoes are beginning to permeate furniture, buildings, neighborhoods, products with invisible intercommunicating smartifacts. When they connect the tangible objects and places of our daily lives with the Internet, handheld communication media mutate into wearable remote control devices for the physical world.

Media cartels and government agencies are seeking to reimpose the regime of the broadcast era, in which the customers of technology will be deprived of the power to create and left only with the power to consume. That power struggle is what the battles over file-sharing, copy-protection, and regulation of the radio spectrum are about. Are the populations of tomorrow going to be users, like the PC owners and website creators who turned technology to widespread innovation? Or will they be consumers, constrained from innovation and locked into the technology and business models of the most powerful entrenched interests?

 

William Gibson: Neuromancer (1984) http://www.wsu.edu:8080/~brians/science_fiction/neuromancer.html#20

Neuromancer is historically significant. Most critics agree that it was not only the first cyberpunk novel, it was and remains the best. Gibson's rich stew of allusion to contemporary technology set a new standard for SF prose. If his plots and characters are shallow and trite, that mattered little, for it is not the tale but the manner of its telling that stands out. His terminology continues to pop up here and there. Whereas an earlier generation borrowed names from its favorite author, J. R. R. Tolkien, like "Shadowfax" (a new-age music group), "Gandalf" (a brand of computer data switch), and "Moria" (an early fantasy computer game), there has been a proliferation of references to Neuromancer: there was a computer virus called "Screaming Fist", the Internet is commonly referred to as "Cyberspace" or - occasionally - "the Matrix", and several World Wide Web sites are named "Wintermute". (The rock group named "The Meat Puppets" existed before Gibson borrowed the term.) Gibson produced his vision in a time when many people were becoming haunted by the idea of urban decay, crime rampant, corruption everywhere. Just as readers of the 50s looked obsessively for signs that Orwell's Nineteen Eighty-Four was coming true, some readers keep an eye out for the emergence of cyberpunk's nightmare world in contemporary reality. The fiction may not be widely read, but through movies and comics it has created one of the defining mythologies of our time.

Jaulin R., Weil F. [2002], "Évolution des jeux et des joueurs, panorama d'une pratique", presentation at the "Internet, jeu et socialisation" workshop (5-6 December, GET).

MMORPG: Massively Multiplayer Online Role-Playing Game

MUD: Multi-User Dungeon

  • Everquest: 82% of players belong to a guild
  • average weekly play time: 20 hours
  • hardcore gamers (2% of players): 10 hours per day
  • casual players (13% of players): less than ten hours per week
Nick Yee: www.nickyee.com


Internet and Meta-Markets

 

Michael R. Baye, Indiana University

John Morgan, Princeton University

Baye M.R., Morgan J. [2001], "Information Gatekeepers on the Internet and the Competitiveness of Homogeneous Product Markets", American Economic Review, Vol. 91, No. 3, June 2001.

We examine the equilibrium interaction between a market for price information (controlled by a gatekeeper) and the homogeneous product market it serves. The gatekeeper charges fees to firms that advertise prices on its Internet site and to consumers who access the list of advertised prices. Gatekeeper profits are maximized in an equilibrium where (a) the product market exhibits price dispersion; (b) access fees are sufficiently low that all consumers subscribe; (c) advertising fees exceed socially optimal levels, thus inducing partial firm participation; and (d) advertised prices are below unadvertised prices. Introducing the market for information has ambiguous social welfare effects. (JEL D4, D8, M3, L13)

Baye M.R., Morgan J. [2001], "Information Gatekeepers and Price Discrimination on the Internet", working paper, http://php.indiana.edu/~mbaye/wpapers.htm.

In a recent paper, Baye and Morgan (2001) show that, when a monopoly gatekeeper controls a price listing service and homogeneous product firms cannot price discriminate, prices listed at the gatekeeper's site are dispersed but lower than at brick-and-mortar establishments with probability one. We show that a similar result holds when firms can price discriminate between consumers purchasing through the gatekeeper's site and those who do not. (JEL Numbers: D4, D8, M3, L13. Keywords: Internet, Price Dispersion, Price Discrimination)

 

Baye Michael R. and John Morgan [2001], "Price Dispersion in the Lab and on the Internet: Theory and Evidence", Working Paper, http://php.indiana.edu/~mbaye/wpapers.htm

Price dispersion is ubiquitous in settings that closely approximate textbook Bertrand competition. We show (Propositions 1 and 2) that only a little bounded rationality among sellers is needed to rationalize such dispersion. A variety of statistical tests, based on data sets from two independent laboratory experiments and structural estimates of the parameters of our models, suggest that bounded rationality based theories of price dispersion organize the data remarkably well. Evidence is also presented which suggests that the models are consistent with data from a leading Internet price comparison site. JEL Numbers: D43, C72 Keywords: Bertrand, Quantal Response Equilibrium, Epsilon Equilibrium.

 

Morgan John, Martin Sefton [2001], "A Model of Sales: Comment", Working Paper, http://www.princeton.edu/~rjmorgan/working.htm

We show that in the model of Varian (1980) an increase in the number of uninformed consumers always raises the expected price paid by informed consumers. This contradicts the claims made in Varian (1980, 1981).

Baye Michael R. and John Morgan [2001], "Revisiting Bertrand's Competition: Paradox Lost or Paradox Found?", Working Paper, http://www.princeton.edu/~rjmorgan/working.htm

In Bertrand competition there exist mixed-strategy Nash equilibria with positive profits. This could explain why prices generally exceed marginal cost and tend to fluctuate randomly.

 

Baye Michael R., John Morgan, Patrick Scholten [2001], "Price Dispersion in the Small and in the Large: Evidence from an Internet Price Comparison Site", Working Paper, http://php.indiana.edu/~mbaye/wpapers.htm

This paper examines 4 million price observations over an eight-month period for 1,000 of the best-selling consumer electronics products found on the price comparison site Shopper.com. We find that observed levels of price dispersion vary systematically with the number of firms listing price quotes for a given product. For example, for products where only two firms list prices, the gap between their prices averages 22 percent. In contrast, for products where 17 firms list prices (the average in our sample), the gap is only about 3.5 percent. Further, we find little support for the notion that prices on the Internet are converging to the "law of one price." The average range in prices was about 40 percent, and the average gap between the two lowest prices listed for a given product remained stable at around 5 percent. We show that the combination of stable and ubiquitous price dispersion, coupled with dispersion that differs in the small and in the large, is consistent with a number of theoretical models of equilibrium price dispersion. JEL Numbers: D4, D8, M3, L13. Keywords: Bertrand Competition, Internet, Law of One Price, Price Dispersion.

Eric K. Clemons and Lorin M. Hitt: Department of Operations and Information Management, The Wharton School, University of Pennsylvania

Il-Horn Hann: Graduate School of Industrial Administration, Carnegie Mellon University

Clemons Eric K., Il-Horn Hann, Lorin M. Hitt [2001], "Price Dispersion and Differentiation in On-Line Travel: An Empirical Investigation", Management Science (forthcoming)

Previous research has examined whether price dispersion exists in theoretically highly efficient Internet markets. However, much of the previous work has been focused on industries with low cost and undifferentiated products. In this paper, we examine the presence of price dispersion and product differentiation using data on the airline ticket offerings of online travel agents (OTAs). We find that different OTAs offer tickets with substantially different prices and characteristics when given the same customer request. Some of this variation appears to be due to product differentiation -- different OTAs specialize by systematically offering different tradeoffs between ticket price and ticket quality (minimizing the number of connections, matching requested departure and return time). However, even after accounting for differences in ticket quality, ticket prices vary by as much as 18% across OTAs. In addition, OTAs return tickets that are strictly inferior to the ticket offered by another OTA for the same request between 2.2% and 28% of the time. Overall, this suggests the presence of both price dispersion and product differentiation in the online travel market.

 

Morgan John, Henrik Orzen and Martin Sefton [2001], "An Experimental Study of Price Dispersion", working paper, http://www.princeton.edu/~rjmorgan/working.htm

Price comparison sites have become an increasingly popular way to shop online. Yet, even though consumers have complete access to the list of prices for apparently identical products offered on these sites, persistent price dispersion has been widely observed. One important theoretical explanation for this phenomenon comes from clearinghouse models of price dispersion. These models predict that price dispersion arises because of consumer heterogeneities - some consumers are "informed" and simply buy from the firm offering the lowest price, while the remaining consumers are "captive" and shop based on considerations other than price. Using a simple clearinghouse model, we derive testable comparative static implications of changes in market structure on equilibrium pricing. We show that an increase in the fraction of informed consumers leads to more competitive pricing for all consumers. Further, we show that when more firms enter the market, prices to informed consumers become more competitive, but prices to captive customers become less competitive. We then assess these implications in a laboratory experiment. Despite some discrepancies between predicted and observed pricing behavior, we find strong support for the comparative static predictions. Keywords: Clearinghouse, Internet, Experiments, Price Dispersion. JEL Classification Numbers: C72, C92.
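A sketch of how a clearinghouse setting sustains dispersion: firms repeatedly revise prices by better responses, captive consumers split evenly across firms, and informed consumers buy from the cheapest firm. Prices never settle (they cycle), which is what keeps them dispersed, and raising the informed share should push both recorded price series down. All parameter values and the revision rule are illustrative assumptions, not the authors' experimental design.

```python
import random

def simulate(n_firms=4, informed=0.5, rounds=20000):
    """Better-response pricing in a Varian-style clearinghouse (a sketch).
    Captive consumers split evenly across firms; informed consumers buy
    from the cheapest firm. Marginal cost is 0, reservation price is 1."""
    prices = [random.uniform(0, 1) for _ in range(n_firms)]

    def profit(i, p):
        rivals_min = min(prices[j] for j in range(n_firms) if j != i)
        captive = (1 - informed) / n_firms           # loyal demand share
        return p * (captive + (informed if p < rivals_min else 0.0))

    mean_p, min_p = [], []
    for t in range(rounds):
        i = random.randrange(n_firms)                # one firm revises
        cand = min(1.0, max(0.01, prices[i] + random.gauss(0, 0.05)))
        if profit(i, cand) >= profit(i, prices[i]):
            prices[i] = cand
        if t > rounds // 2:                          # record the steady cycle
            mean_p.append(sum(prices) / n_firms)     # captives' expected price
            min_p.append(min(prices))                # informed consumers' price
    return sum(mean_p) / len(mean_p), sum(min_p) / len(min_p)

for lam in (0.2, 0.5, 0.8):
    avg, low = simulate(informed=lam)
    print(f"informed share {lam}: captive pay ~{avg:.2f}, informed pay ~{low:.2f}")
```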


Peer-to-peer development

Resnick Paul, Richard Zeckhauser [2001], "Trust Among Strangers in Internet Transactions: Empirical Analysis of eBay’s Reputation System", Working Paper

Reputations that are transmitted from person to person can deter moral hazard and discourage entry by bad types in markets where players repeat transactions but rarely with the same player. On the Internet, information about past transactions may be both limited and potentially unreliable, but it can be distributed far more systematically than the informal gossip among friends that characterizes conventional marketplaces. One of the earliest and best known Internet reputation systems is run by eBay, which gathers comments from buyers and sellers about each other after each transaction. Examination of a large data set from 1999 reveals several interesting features of this system, which facilitates many millions of sales each month.

  • First, despite incentives to free ride, feedback was provided more than half the time.
  • Second, well beyond reasonable expectation, it was almost always positive.
  • Third, reputation profiles were predictive of future performance. However, the net feedback scores that eBay displays encourage Pollyanna (disproportionately positive) assessments of reputations, and are far from the best predictor available.
  • Fourth, although sellers with better reputations were more likely to sell their items, they enjoyed no boost in price, at least for the two sets of items that we examined.
  • Fifth, there was a high correlation between buyer and seller feedback, suggesting that the players reciprocate and retaliate.

Giorgos Zacharia, Alexandros Moukas, Pattie Maes (MIT Media Laboratory)

Zacharia G., Moukas A., Maes P. [1999], "Collaborative Reputation Mechanisms in Electronic Marketplaces", Proceedings of the 32nd Hawaii International Conference on System Sciences.

The members of electronic communities are often unrelated to each other; they may never have met and have no information on each other's reputation. This kind of information is vital in electronic commerce interactions, where the potential counterpart's reputation can be a significant factor in the negotiation strategy. This paper proposes two complementary reputation mechanisms that rely on collaborative rating and personalized evaluation of the various ratings assigned to each user. While these reputation mechanisms are developed in the context of electronic commerce, we believe that they may have applicability in other types of electronic communities such as chat rooms and newsgroups.
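The core idea of a collaborative, reputation-weighted rating update can be sketched as follows: after each transaction, the rated user's score moves toward the rating, with highly reputed raters moving it more. The constants and the damping rule below are illustrative assumptions; the paper's actual mechanisms (Sporas and Histos) use more elaborate update formulas and, in the personalized case, the structure of the rating graph.

```python
# A minimal collaborative-reputation sketch (illustrative, not the paper's
# exact formulas): each rating pulls the ratee's score toward the rating,
# weighted by the rater's own reputation.
MAX_REP = 1.0
LEARN = 0.3

reputation = {"alice": 0.5, "bob": 0.5, "carol": 0.9, "mallory": 0.1}

def rate(rater, ratee, rating):
    """rating in [0, 1]; trusted raters move the score more."""
    weight = LEARN * reputation[rater] / MAX_REP
    # Damping: high reputations change more slowly, so a single rating
    # from a distrusted user cannot destroy an established reputation.
    damping = 1 - reputation[ratee] / (2 * MAX_REP)
    reputation[ratee] += weight * damping * (rating - reputation[ratee])

rate("carol", "bob", 1.0)      # a trusted rater praises bob
rate("mallory", "alice", 0.0)  # a distrusted rater attacks alice
print({k: round(v, 2) for k, v in reputation.items()})
```

Running the sketch shows the asymmetry: carol's praise moves bob noticeably, while mallory's attack barely dents alice.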

 

Dellarocas Chrysanthos [2001], "Building Trust On-Line: The Design of Reliable Reputation Reporting Mechanisms for Online Trading Communities", MIT Sloan School of Management Working Paper No. 4180-01, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=289967

Several properties of online interaction are challenging the accumulated wisdom of trading communities on how to produce and manage trust. Online reputation reporting systems have emerged as a promising trust management mechanism in such settings. The objective of this paper is to contribute to the construction of online reputation reporting systems that are robust in the presence of unfair and deceitful raters. The paper sets the stage by providing a critical overview of the current state of the art in this area. Following that, it identifies a number of important ways in which the reliability of the current generation of reputation reporting systems can be severely compromised by unfair buyers and sellers. The central contribution of the paper is a number of novel "immunization mechanisms" for effectively countering the undesirable effects of such fraudulent behavior. The paper describes the mechanisms, proves their properties and explains how various parameters of the marketplace microstructure, most notably the anonymity and authentication regimes, can influence their effectiveness. Finally, it concludes by discussing the implications of the findings for the managers and users of current and future electronic marketplaces and identifies some important open issues for future research.

 

Breese, J.S., Heckerman, D., and Kadie, C. [1998], "Empirical Analysis of Predictive Algorithms for Collaborative Filtering", in Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence (UAI-98), pp. 43-52, San Francisco, July 24-26, 1998.

 

Paul Resnick The University of Michigan School of Information: presnick@umich.edu

Lorrie Faith Cranor AT&T Labs-Research: lorrie@research.att.com

Cranor, L.F. and Resnick, P. [2000], "Protocols for Automated Negotiations with Buyer Anonymity and Seller Reputations", Netnomics 2(1):1-23. http://lorrie.cranor.org/

ABSTRACT : An automated negotiation protocol defines the rules of a bargaining game between automated agents – for example a negotiation between a Web site and a Web browser.  The most general protocol would permit an unlimited number of rounds of offers and counter-offers, with offers being either commitments or cheap talk. A more restricted protocol, with sellers making take-it-or-leave-it initial offers, would be easier to implement and might focus players on simpler bargaining strategies, easing analysis of opponents’ strategies and perhaps reducing the occurrence of irrational strategy choices due to miscalculations. We consider possible downsides to the adoption of a more restricted protocol: might buyers or sellers, or both, get worse outcomes with a more restricted protocol than with a more general bargaining protocol?

In many Internet commerce applications buyers can easily achieve anonymity, limiting what a seller can learn about any buyer individually.  However, because sellers need to keep a fixed web address, buyers can probe them repeatedly or pool their information about sellers with the information obtained by other buyers; hence, sellers’ strategies become public knowledge. Under assumptions of buyer anonymity, publicly known seller strategies, and no negotiation transaction costs for buyers, we find that a restricted protocol will yield the same equilibrium outcomes as a more complicated bargaining protocol. As we relax those assumptions, however, we find that sellers, and in some cases buyers as well, may benefit from a more general bargaining protocol.

The paper is motivated by the problem of designing a protocol for a Web browser to negotiate with a Web server about what information will be revealed to the Web site and how that information will be used. Such a protocol will be part of the Platform for Privacy Preferences Project (P3P), currently being developed by the World Wide Web Consortium.
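The contrast between the restricted and the general protocol can be made concrete with a toy sketch: a take-it-or-leave-it posted offer versus alternating concessions over a bounded number of rounds. All class names, concession steps, and valuations below are illustrative assumptions; P3P does not define such an API.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    price: float
    binding: bool = True      # False would model non-committal "cheap talk"

def restricted_protocol(seller_offer, buyer_value):
    """Seller posts one take-it-or-leave-it offer; buyer accepts or walks."""
    return seller_offer.price if buyer_value >= seller_offer.price else None

def general_protocol(seller_floor, buyer_value, rounds=5):
    """Alternating concessions: the seller's ask comes down, the buyer's
    bid goes up, until the proposals cross or the round budget runs out."""
    ask, bid = 1.0, 0.0
    for _ in range(rounds):
        ask = max(seller_floor, ask - 0.15)      # seller concedes
        if bid >= ask:
            return (ask + bid) / 2
        bid = min(buyer_value, bid + 0.15)       # buyer concedes
        if bid >= ask:
            return (ask + bid) / 2
    return None                                  # no agreement reached

print(restricted_protocol(Offer(0.6), buyer_value=0.8))    # trade at 0.6
print(general_protocol(seller_floor=0.4, buyer_value=0.8)) # trade near 0.43
```

The sketch illustrates the paper's trade-off: the restricted protocol is trivial to implement and analyze, but the general protocol can reach prices the posted offer never explores.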

 

Friedman, E. and P. Resnick [2001], "The Social Cost of Cheap Pseudonyms", Journal of Economics and Management Strategy, 10(2): 173-199.

Abstract : On the Internet it is easy for someone to obtain a new identity. This introduces opportunities to misbehave without paying reputational consequences. A large degree of cooperation can still emerge, through a convention in which newcomers "pay their dues" by accepting poor treatment from players who have established positive reputations. One might hope for an open society where newcomers are treated well, but there is an inherent social cost in making the spread of reputations optional. We prove that no equilibrium can sustain significantly more cooperation than the dues-paying equilibrium in a repeated random matching game in which players have finite lives and the ability to change their identities, there is a small but nonvanishing probability of mistakes, and there is a large number of players. Although one could remove this inefficiency by disallowing anonymity, this is not practical or desirable in a wide variety of transactions. We discuss the use of entry fees, which permit newcomers to be trusted but exclude some players with low payoffs, thus introducing a different inefficiency. We also discuss the use of unchangeable pseudonyms, and describe a mechanism which implements them using standard encryption techniques.

 

Kollock, P. [1999], "The Production of Trust in Online Markets", In Advances in Group Processes (Vol. 16), eds. E.J. Lawler, M. Macy, S. Thyne, and H.A. Walker, Greenwich, CT: JAI Press.

Resnick, P. and Varian, H.R. [1997], "Recommender Systems", Communications of the ACM, Vol. 40 (3), pp. 56-58.

Sarwar, B. M., Karypis, G., Konstan, J. A., and Riedl, J. [2000], "Application of Dimensionality Reduction in Recommender System - A Case Study", In ACM WebKDD 2000 Web Mining for E-Commerce Workshop.

Schmalensee, R. [1978], "Advertising and Product Quality", Journal of Political Economy, Vol. 86, pp. 485-503.

Schafer, J.B., Konstan, J., and Riedl, J. [2001], "Electronic Commerce Recommender Applications", Journal of Data Mining and Knowledge Discovery, January, 2001.

Shapiro, C. [1982], "Consumer Information, Product Quality, and Seller Reputation", Bell Journal of Economics 13 (1), pp 20-35, Spring 1982.

Shardanand, U. and Maes, P. [1995], "Social information filtering: Algorithms for automating “word of mouth”", In Proceedings of the Conference on Human Factors in Computing Systems (CHI95), Denver, CO, pp. 210-217.

 

Ghosh R.A. [1998], "Cooking pot markets: an economic model of the trade in free goods and services on the Internet", First Monday, http://www.firstmonday.org/issues/issue3_3/ghosh/index.html

It has long been assumed that there is something beyond economics involved in the proliferation of free goods and services on the Internet. Although Netscape's recent move to give away the source code for its browser shows that the corporate world now believes that it is possible to make money with free software - previously eyed with cautious pessimism - money is not the prime motivator of most producers of the Internet's free goods, and neither is altruism. Efforts and rewards may be valued in intangibles, but, as this paper argues, there is a very tangible market dynamic to the free economy of the Internet, and rational economic decisions are at work. This is the "cooking-pot" market: an implicit barter economy with asymmetric transactions.

The key here is the value placed on diversity [23], so that multiple copies of a single product add little value - marginal utility is near zero - but single copies of multiple products are, to a single user, of immense value. If a sufficient number of people put in free goods, the cooking pot clones them for everyone, so that everyone gets far more value than was put in.

An explicit monetary transaction - a sale of a software product - is based on what is increasingly an economic fallacy that each single copy of a product has marginal value. In contrast, the cooking-pot market rightly allocates resources on the basis of where consumers see value to be, in each distinct product.
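Ghosh's asymmetric-barter arithmetic is easy to make explicit under toy assumptions (the numbers below are illustrative, not from the paper):

```python
# Toy cooking-pot arithmetic: each of N participants contributes one
# distinct product, and digital copying distributes every product to
# everyone at ~zero marginal cost.
N = 1000
value_per_distinct_product = 1.0   # a product new to you is worth this;
# extra copies of a product you already have are worth ~0 (near-zero
# marginal utility), so only distinct products count below.

put_in = value_per_distinct_product              # each participant's contribution
got_back = (N - 1) * value_per_distinct_product  # one copy of everyone else's work

print(f"contributed ~{put_in:.0f}, received ~{got_back:.0f} "
      f"({got_back / put_in:.0f}x)")
```

The asymmetry is the whole point: because copies are free to clone while distinct products carry the value, every contributor gets back far more than was put in.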

 

Zelizer Viviana A. [2001], "Circuits of Commerce", Princeton University

Any commercial circuit includes four elements:

  1. it has a well-defined boundary with some control over transactions crossing the boundary
  2. a distinctive set of transfers of goods, services, or claims upon them occurs within the ties
  3. those transfers employ distinctive media
  4. ties among participants have some shared meaning

In combination, these four elements imply the presence of an institutional structure that reinforces credit, trust and reciprocity.

Corporate circuits, local monies, and intimate circuits obviously differ in their settings and contents. We should resist, however, the ever-present temptation to array them along a standard continuum from genuine, general, impersonal markets at one end, to non-market intimacy at the other. To do so would reconstruct the very Gesellschaft/Gemeinschaft dichotomies a clear recognition of circuits helps us escape. In all three types of circuits we find intense interpersonal ties commingling with regularized media and transfers. In all three, for that matter, we find ties that vary greatly in their intensity, scope, and durability. Differences among the three types of circuits depend not on the overall extent of rationalization or solidarity but on variable configurations of media, transfers, interpersonal ties, and shared meanings attached to their intersection.

Peter Biddle, Paul England, Marcus Peinado, and Bryan Willman (Microsoft Corporation)

Biddle P., England P., Peinado M., Willman B. [2002], "The Darknet and the Future of Content Distribution", 2002 ACM Workshop on Digital Rights Management.

http://crypto.stanford.edu/DRM2002/darknet5.doc

Abstract : We investigate the darknet – a collection of networks and technologies used to share digital content. The darknet is not a separate physical network but an application and protocol layer riding on existing networks. Examples of darknets are peer-to-peer file sharing, CD and DVD copying, and key or password sharing on email and newsgroups. The last few years have seen vast increases in the darknet’s aggregate bandwidth, reliability, usability, size of shared library, and availability of search engines. In this paper we categorize and analyze existing and future darknets, from both the technical and legal perspectives. We speculate that there will be short-term impediments to the effectiveness of the darknet as a distribution mechanism, but ultimately the darknet-genie will not be put back into the bottle. In view of this hypothesis, we examine the relevance of content protection and content distribution architectures.

Digital Rights Management finally declared pointless: a recent Microsoft study concludes that Digital Rights Management (DRM) will do little to thwart the piracy of digital entertainment. DRM, Peter Biddle, Paul England, Marcus Peinado and Bryan Willman note, is largely pointless and may do more to damage the cartel's bottom line than to bolster it. "From the point of view of economic theory, this has profound implications for business strategy: for example, increased security (e.g. stronger DRM systems) may act as a disincentive to legal commerce," the researchers write. "Consider an MP3 file sold on a web site: this costs money, but the purchased object is as useful as a version acquired from the darknet [Exp: a theoretical P2P distribution system]. However, a securely DRM-wrapped song is strictly less attractive: although the industry is striving for flexible licensing rules, customers will be restricted in their actions if the system is to provide meaningful security. This means that a vendor will probably make more money by selling unprotected objects than protected objects. In short, if you are competing with the darknet, you must compete on the darknet's own terms: that is, convenience and low cost rather than additional security."

http://www.siliconvalley.com/mld/siliconvalley/business/columnists/gmsv/4609917.htm

 

Mihajlo A. Jovanovic, Fred S. Annexstein, Kenneth A. Berman (ECECS Department, University of Cincinnati, Cincinnati, OH 45221)

Jovanovic M.A., Annexstein F.S., Berman K.A. [2001], "Scalability Issues in Large Peer-to-Peer Networks - A Case Study of Gnutella", University of Cincinnati Technical Report.

http://www.ececs.uc.edu/~mjovanov/Research/paper.html

Abstract: With the peer-to-peer model quickly emerging as a computing paradigm of the future, there is an ever-increasing need for distributed algorithms that would allow peer-to-peer applications to scale to a large community of users. The main difficulty in designing such algorithms is that, currently, very little is known about the nature of the network topology on which they would be operating. The end result is that even simple protocols, as in the case of Gnutella, result in complex interactions that directly affect the overall system's performance. In this paper we present a case study of Gnutella - a worldwide, distributed information sharing system. To study the Gnutella network topology, we implemented a distributed computing application that allows topology discovery to be performed in constant time - an important feature considering Gnutella's highly dynamic nature. Upon analyzing the obtained topology data, we discovered that it exhibits strong small-world properties. In addition, we observed a power-law distribution with regard to node degrees, a property previously reported in other technological networks such as the Internet and the WWW. We believe these properties of the network topology are an important step toward an accurate mathematical model and could serve as an aid in both analyzing the performance of existing algorithms and designing new, more scalable, solutions.
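The reported properties correspond to standard graph metrics, and a sketch of computing them is shown below. Since the crawled Gnutella snapshots are not reproduced here, a preferential-attachment graph stands in for the topology data; the networkx library is assumed.

```python
import collections
import networkx as nx

# Stand-in topology: a preferential-attachment graph, which also exhibits
# a power-law degree distribution (the crawled Gnutella snapshots are not
# reproduced here).
G = nx.barabasi_albert_graph(n=2000, m=3, seed=42)

# Metrics used to test for small-world structure: clustering relative to
# a random graph of the same size, and a short average path length.
print("average clustering:", round(nx.average_clustering(G), 3))
print("average shortest path:",
      round(nx.average_shortest_path_length(G), 2))

# Degree distribution: count how many nodes have each degree; a power law
# appears as a straight line on a log-log plot of these counts.
counts = collections.Counter(d for _, d in G.degree())
for degree in sorted(counts)[:10]:
    print(f"degree {degree}: {counts[degree]} nodes")
```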