Open source, copyright and patents

 

 


At http://www.idei.asso.fr/French/FPresent/index.html you can access the papers from the following conferences on Internet and software economics:

  • Second conference on "The Economics of the Software and Internet Industries", Toulouse (France), January 17-18, 2003
  • "Open Source Software: Economics, Law and Policy", Toulouse (France), June 20-21, 2002
  • "GREMAQ workshop on the Economics of Internet and Innovation", Toulouse (France), June 19, 2002
  • "The Economics of the Software and Internet Industries", Toulouse (France), January 18-20, 2001 (Programme, List of Participants, Papers)

 

For a description, in French, of the software ideology, see:

Flichy P. [2001], L'imaginaire d'Internet, La Découverte, Paris.

http://barthes.ens.fr/colloque99/flichy.html


Free Software Foundation

http://www.gnu.org/

The GNU Project was launched in 1984 (by Richard M. Stallman) to develop a complete Unix-like operating system which is free software: the GNU system. (GNU is a recursive acronym for "GNU's Not Unix"; it is pronounced "guh-NEW".) Variants of the GNU operating system, which use the kernel Linux, are now widely used; though these systems are often referred to as "Linux", they are more accurately called GNU/Linux systems.

The philosophy and history of the GNU Project are described in Richard M. Stallman's article The GNU Project and in several other texts.

The FSF supports the freedoms of speech, press, and association on the Internet, the right to use encryption software for private communication, and the right to write software unimpeded by private monopolies.

In France, APRIL, the Association Pour la Promotion et la Recherche en Informatique Libre (http://www.april.org/), is an associate organisation of FSF France, the French chapter of the Free Software Foundation Europe.


The GNU Project, by Richard Stallman (http://www.gnu.org/gnu/thegnuproject.html)

 

The first software-sharing community: the MIT Artificial Intelligence Lab in 1971.

The collapse of the community: When the AI lab bought a new PDP-10 in 1982, its administrators decided to use Digital's non-free timesharing system instead of ITS. The modern computers of the era, such as the VAX or the 68020, had their own operating systems, but none of them were free software: you had to sign a nondisclosure agreement even to get an executable copy. This meant that the first step in using a computer was to promise not to help your neighbor. A cooperating community was forbidden. The rule made by the owners of proprietary software was, "If you share with your neighbor, you are a pirate. If you want any changes, beg us to make them."

Free as in freedom:

The term "free software" is sometimes misunderstood; it has nothing to do with price. It is about freedom. Here, therefore, is the definition of free software: a program is free software, for you, a particular user, if:

  • You have the freedom to run the program, for any purpose.
  • You have the freedom to modify the program to suit your needs. (To make this freedom effective in practice, you must have access to the source code, since making changes in a program without having the source code is exceedingly difficult.)
  • You have the freedom to redistribute copies, either gratis or for a fee.
  • You have the freedom to distribute modified versions of the program, so that the community can benefit from your improvements.

Copyleft and the GNU GPL: The goal of GNU was to give users freedom, not just to be popular. So we needed to use distribution terms that would prevent GNU software from being turned into proprietary software. The method we use is called "copyleft". Copyleft uses copyright law, but flips it over to serve the opposite of its usual purpose: instead of a means of privatizing software, it becomes a means of keeping software free.

Secret hardware: Hardware manufacturers increasingly tend to keep hardware specifications secret. This makes it difficult to write free drivers so that Linux and XFree86 can support new hardware. We have complete free systems today, but we will not have them tomorrow if we cannot support tomorrow's computers. There are two ways to cope with this problem. Programmers can do reverse engineering to figure out how to support the hardware. The rest of us can choose the hardware that is supported by free software; as our numbers increase, secrecy of specifications will become a self-defeating policy. Reverse engineering is a big job; will we have programmers with sufficient determination to undertake it? Yes, if we have built up a strong feeling that free software is a matter of principle, and non-free drivers are intolerable. And will large numbers of us spend extra money, or even a little extra time, so we can use free drivers? Yes, if the determination to have freedom is widespread.

Software patents: The worst threat we face comes from software patents, which can put algorithms and features off limits to free software for up to twenty years. The LZW compression algorithm patents were applied for in 1983, and we still cannot release free software to produce proper compressed GIFs. In 1998, a free program to produce MP3 compressed audio was removed from distribution under threat of a patent suit.


John Perry Barlow and the "Declaration of the Independence of Cyberspace"

http://www.eff.org/~barlow/

A Declaration of the Independence of Cyberspace

Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.

We have no elected government, nor are we likely to have one, so I address you with no greater authority than that with which liberty itself always speaks. I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear. (…)

Davos, Switzerland, February 8, 1996


Open Source Initiative OSI

 

http://www.opensource.org/

The basic idea behind open source is very simple: When programmers can read, redistribute, and modify the source code for a piece of software, the software evolves. People improve it, people adapt it, people fix bugs. And this can happen at a speed that, if one is used to the slow pace of conventional software development, seems astonishing. We in the open source community have learned that this rapid evolutionary process produces better software than the traditional closed model, in which only a very few programmers can see the source and everybody else must blindly use an opaque block of bits.

Open Source Initiative (OSI) is a non-profit corporation dedicated to managing and promoting the Open Source Definition for the good of the community, specifically through the OSI Certified Open Source Software certification mark and program.

Open source label and the 1998 session: The "open source" label itself came out of a strategy session held on February 3rd 1998 in Palo Alto, California. The people present included Todd Anderson, Chris Peterson (of the Foresight Institute), John "maddog" Hall and Larry Augustin (both of Linux International), Sam Ockman (of the Silicon Valley Linux User's Group), and Eric Raymond.

Raymond E. [1999a], The Cathedral and the Bazaar

  • 1. Every good work of software starts by scratching a developer's personal itch.
  • 2. Good programmers know what to write. Great ones know what to rewrite (and reuse).
  • 3. "Plan to throw one away; you will, anyhow." (Fred Brooks, "The Mythical Man-Month", Chapter 11)
  • 4. If you have the right attitude, interesting problems will find you.
  • 5. When you lose interest in a program, your last duty to it is to hand it off to a competent successor.
  • 6. Treating your users as co-developers is your least-hassle route to rapid code improvement and effective debugging.
  • 7. Release early. Release often. And listen to your customers.
  • 8. Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.
  • 9. Smart data structures and dumb code works a lot better than the other way around.
  • 10. If you treat your beta-testers as if they're your most valuable resource, they will respond by becoming your most valuable resource.
  • 11. The next best thing to having good ideas is recognizing good ideas from your users. Sometimes the latter is better.
  • 12. Often, the most striking and innovative solutions come from realizing that your concept of the problem was wrong.
  • 13. "Perfection (in design) is achieved not when there is nothing more to add, but rather when there is nothing more to take away."
  • 14. Any tool should be useful in the expected way, but a truly great tool lends itself to uses you never expected.
  • 15. When writing gateway software of any kind, take pains to disturb the data stream as little as possible -- and *never* throw away information unless the recipient forces you to!
  • 16. When your language is nowhere near Turing-complete, syntactic sugar can be your friend.
  • 17. A security system is only as secure as its secret. Beware of pseudo-secrets.
  • 18. To solve an interesting problem, start by finding a problem that is interesting to you.
  • 19. Provided the development coordinator has a medium at least as good as the Internet, and knows how to lead without coercion, many heads are inevitably better than one.


Raymond E. [1999b], The Magic Cauldron, http://tuxedo.org/~esr/writings/

We must first demolish a widespread folk model that interferes with understanding. Over every attempt to explain cooperative behavior there looms the shadow of Garrett Hardin's Tragedy of the Commons.

Hardin famously asks us to imagine a green held in common by a village of peasants, who graze their cattle there. But grazing degrades the commons, tearing up grass and leaving muddy patches, which re-grow their cover only slowly. If there is no agreed-on (and enforced!) policy to allocate grazing rights that prevents overgrazing, all parties' incentives push them to run as many cattle as quickly as possible, trying to extract maximum value before the commons degrades into a sea of mud.

Most people have an intuitive model of cooperative behavior that goes much like this. It's not actually a good diagnosis of the economic problems of open-source, which are free-rider (underprovision) rather than congested-public-good (overuse). Nevertheless, it is the analogy I hear behind most off-the-cuff objections.

The tragedy of the commons predicts only three possible outcomes. One is the sea of mud. Another is for some actor with coercive power to enforce an allocation policy on behalf of the village (the communist solution). The third is for the commons to break up as village members fence off bits they can defend and manage sustainably (the property-rights solution).

When people reflexively apply this model to open-source cooperation, they expect it to be unstable with a short half-life. Since there's no obvious way to enforce an allocation policy for programmer time over the internet, this model leads straight to a prediction that the commons will break up, with various bits of software being taken closed-source and a rapidly decreasing amount of work being fed back into the communal pool.

In fact, it is empirically clear that the trend is opposite to this. The breadth and volume of open-source development (as measured by, for example, submissions per day at Metalab or announcements per day at freshmeat.net) is steadily increasing. Clearly there is some critical way in which the "Tragedy of the Commons" model fails to capture what is actually going on.
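To make the incentive logic of Hardin's story concrete, here is a small toy payoff model; the herder count, the pasture capacity and the linear yield function are illustrative assumptions invented for this sketch, not anything in Raymond's essay. Each herder keeps adding cattle as long as the extra animal raises their own return, because the cost of the crowded green is spread over the whole village, so the herd grows well past the size that maximizes the village's total return. Note that this only illustrates the overuse story that Raymond argues is the wrong diagnosis for open source, whose real problem is underprovision by free riders.

```python
# Toy payoff model of Hardin's commons (all numbers and functional forms
# are illustrative assumptions, not taken from the source).

HERDERS = 10
CAPACITY = 100  # herd size at which the green is reduced to bare mud

def yield_per_cow(total: int) -> float:
    """Grass available per animal falls as the commons gets more crowded."""
    return max(0.0, 1.0 - total / CAPACITY)

def herder_payoff(own: int, total: int) -> float:
    """Each herder pockets the full yield of their own animals."""
    return own * yield_per_cow(total)

def equilibrium_total() -> int:
    """Every herder adds a cow whenever doing so raises their own payoff."""
    herds = [0] * HERDERS
    changed = True
    while changed:
        changed = False
        for i in range(HERDERS):
            total = sum(herds)
            if herder_payoff(herds[i] + 1, total + 1) > herder_payoff(herds[i], total):
                herds[i] += 1
                changed = True
    return sum(herds)

def social_optimum_total() -> int:
    """Herd size that maximizes the village's combined return."""
    return max(range(CAPACITY + 1), key=lambda t: t * yield_per_cow(t))

if __name__ == "__main__":
    print("village optimum:", social_optimum_total())        # 50 cattle
    print("self-interested equilibrium:", equilibrium_total())  # about 90 cattle
```

Run as-is, the self-interested equilibrium overshoots the village optimum by nearly a factor of two, which is the "sea of mud" outcome in miniature.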


Open source, commons and code as law

Lawrence Lessig (Stanford University), http://cyberlaw.stanford.edu/lessig/

Lessig Lawrence [2000], Open Code and Open Societies: Values of Internet Governance, lecture at "Free Software", Tutzing, Germany, June 1, 2000

First, think a bit more about code—about the way that code regulates. Lawyers don’t like to think much about how code regulates. Lawyers like to think about how law regulates. Code, lawyers like to think, is just the background condition against which laws regulate. But this misses an important point. The code of cyberspace - whether the Internet, or a net within the Internet - defines that space. It constitutes that space. And as with any constitution, it builds within itself a set of values and possibilities that governs life there. The Internet as it was in 1995 was a space that made it very hard to verify who someone was; that meant it was a space that protected privacy and anonymity. The Internet as it is becoming is a space that will make it very easy to verify who someone is; commerce likes it that way; that means it will become a space that doesn’t necessarily protect privacy and anonymity. Privacy and anonymity are values, and they are being respected, or not, because of the design of code. And the design of code is something that people are doing. Engineers make the choices about how the world will be. Engineers in this sense are governors.

For there are two values that are central to the practice of the Internet’s governance—values that link to the Open Source Movement and values that link as well to governance in real space. Let me describe these two and then develop with them some links to stuff we know about real space.

The first of these values we could call the value of Open-Evolution, and we could define it like this: Build a platform, or set of protocols, so that it can evolve in any number of ways; don't play god; don't hardwire any single path of development; don't build into it a middle that can meddle with its use. Keep the core simple and let the application (or end) develop the complexity. There are principles the system must preserve. One is an ideal of network design, first described at MIT, called "end-to-end." The basic intuition is that the conduit be general; complexity should be at the end of the project. As Jerome Saltzer describes it, "don't force any service, feature, or restriction on the customer; his application knows best what features it needs, and whether or not to provide those features itself." The consequence of this end-to-end design is that the network is neutral about how the network gets used.
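As a concrete, deliberately simplified reading of that intuition, the sketch below keeps the "conduit" as a function that forwards bytes untouched, while the endpoints decide for themselves to add compression and an integrity check. The function names and the in-memory transport are assumptions made for this illustration, not anything in Lessig's or Saltzer's texts.

```python
# A minimal sketch of the end-to-end intuition: the transport layer is a
# general conduit that never inspects or transforms application data; any
# feature the application wants (here, compression and integrity checking)
# is implemented by the endpoints themselves.

import zlib

def transport(payload: bytes) -> bytes:
    """The neutral core: forwards bytes unchanged, imposing no service or policy."""
    return payload

def send(message: str) -> bytes:
    """Endpoint A decides it wants compression and an integrity check."""
    data = message.encode("utf-8")
    body = zlib.compress(data)
    checksum = zlib.crc32(data).to_bytes(4, "big")
    return transport(checksum + body)

def receive(packet: bytes) -> str:
    """Endpoint B undoes what endpoint A chose to do; the conduit never knew."""
    checksum, body = packet[:4], packet[4:]
    data = zlib.decompress(body)
    if zlib.crc32(data).to_bytes(4, "big") != checksum:
        raise ValueError("integrity check failed")
    return data.decode("utf-8")

print(receive(send("complexity lives at the ends, not in the conduit")))
```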

This is the ideal of modularity. Code can be written in any number of ways; good code can be written in only one way. Good code is code that is modular, and that reveals its functions and parameters transparently. Why? The virtue is not efficiency. It is not that only good code will run. Rather, transparent modularity permits code to be modified; it permits one part to be substituted for another. The code then is open; the code is modular; chunks could be removed and substituted for something else; many forks, or ways that the code could develop, are possible.
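A minimal sketch of what that modularity looks like in practice; the interface and both implementations below are hypothetical examples, not drawn from the lecture. Because the component's functions and parameters are exposed transparently, one part can be swapped for another, or wrapped and extended, without touching the code that uses it.

```python
# Illustrative modular design: client code depends only on a transparent
# interface, so implementations can be substituted (or "forked") freely.

from typing import Protocol

class Storage(Protocol):
    def put(self, key: str, value: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class InMemoryStorage:
    """One implementation of the module."""
    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}
    def put(self, key: str, value: bytes) -> None:
        self._data[key] = value
    def get(self, key: str) -> bytes:
        return self._data[key]

class LoggingStorage:
    """A substitute: same interface, different behavior, no changes elsewhere."""
    def __init__(self, inner: Storage) -> None:
        self._inner = inner
    def put(self, key: str, value: bytes) -> None:
        print(f"put {key!r} ({len(value)} bytes)")
        self._inner.put(key, value)
    def get(self, key: str) -> bytes:
        return self._inner.get(key)

def archive(store: Storage, documents: dict[str, bytes]) -> None:
    """The caller neither knows nor cares which module it was handed."""
    for name, body in documents.items():
        store.put(name, body)

archive(LoggingStorage(InMemoryStorage()),
        {"declaration.txt": b"We have no elected government..."})
```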

Lessig L. [2000], "The Death of Cyberspace", Washington & Lee Law Review.

The Death of Cyberspace: Cyberspace killed the 1970s. It made creativity and innovation accessible to millions. This short book argues that law is bringing the 1970s back. Through the explosion of intellectual property rights, we are returning to the time when only the large were permitted to innovate. The book examines writing, music (both performance and composing), film, and inventing.

Lessig L. [2001], The Future of Ideas, Random House, October 2001

In The Future of Ideas, Lawrence Lessig explains how the Internet revolution has produced a counter-revolution of devastating power and effect. The explosion of innovation we have seen in the environment of the Internet was not conjured from some new, previously unimagined technological magic; instead, it came from an ideal as old as the nation. Creativity flourished there because the Internet protected an innovation commons. The Internet’s very design built a neutral platform upon which the widest range of creators could experiment. The legal architecture surrounding it protected this free space so that culture and information–the ideas of our era–could flow freely and inspire an unprecedented breadth of expression. But this structural design is changing - both legally and technically. This shift will destroy the opportunities for creativity and innovation that the Internet originally engendered. The cultural dinosaurs of our recent past are moving to quickly remake cyberspace so that they can better protect their interests against the future. Powerful conglomerates are swiftly using both law and technology to "tame" the Internet, transforming it from an open forum for ideas into nothing more than cable television on speed. Innovation, once again, will be directed from the top down, increasingly controlled by owners of the networks, holders of the largest patent portfolios, and, most invidiously, hoarders of copyrights.


Yochai Benkler

http://www.law.nyu.edu/benklery/

Benkler Yochai [2000], "Constitutional Bounds of Database Protection: The Role of Judicial Review in the Creation and Definition of Private Rights in Information", Berkeley L. & Tech. J. 15, 535 (2000)

Abstract: The article analyzes the constitutional bounds within which Congress is empowered to regulate the production and exchange of information by creating private rights. As a test case for answering this question, it analyzes the differences between two radically different bills for protecting database providers, reported in late September of 1999 by two committees of the House of Representatives. One bill, House Bill 354, is unconstitutional. The other, House Bill 1858, probably is constitutional. The article suggests that the Intellectual Property Clause and the First Amendment create two layers of judicial review over acts of Congress that recognize exclusive private rights in information. It suggests that judges have an important role to play as a counterweight to political processes in which the value of the public domain to future generations and to users is systematically underrepresented.

Benkler Yochai [2000], "An Unhurried View of Private Ordering in Information Transactions", Vanderbilt Law Rev. 53, 2063 (2000)

We stand at an unprecedented moment in the history of exclusive private rights in information (“EPRIs”). Technology has made it possible, it seems, to eliminate to a large extent one aspect of what makes information a public good—its nonexcludability. A series of laws—most explicitly the Digital Millennium Copyright Act (“DMCA”) and the Uniform Computer Information Transactions Act (“UCITA”)—are building on new technologies for controlling individual uses of information goods to facilitate a perfect enclosure of the information environment. The purpose of this Essay is to explain why economic justifications interposed in favor of this aspect of the enclosure movement are, by their own terms, undetermined. There is no a priori theoretical basis to claim that these laws would, on balance, increase the social welfare created by information production. The empirical work that could, in principle, predict the direction in which more perfect enclosure will move us has not yet been done. Empirical research that has been done on the effects of expanded EPRIs—in the context of patents—is quite agnostic as to the proposition that EPRIs are generally beneficial, except in very specific industries or markets. We are, in other words, embracing this new legal framework for information production and exchange on faith. Given the tremendous non-economic losses—in terms of concentration and commercialization of information production and homogenization of the information produced—that a perfectly enclosed information environment imposes on our democracy and our personal autonomy, such a leap of faith is socially irresponsible, and, as I have argued elsewhere at great length, probably unconstitutional.

 

Benkler Yochai [2002], "Intellectual Property and the Organization of Information Production", forthcoming Int'l Rev. of L. & Ec., 2002.

Abstract: This paper analyzes an area that economic analysis of intellectual property has generally ignored, namely, the effects of intellectual property rights on the relative desirability of various strategies for organizing information production. I suggest that changes in intellectual property rules alter the payoffs to information production in systematic and predictable ways that differ as among different strategies. My conclusion is that an institutional environment highly protective of intellectual property rights will (a) have less beneficial impact, at an aggregate level, than one would predict without considering these effects, and (b) fosters commercialization, concentration, and homogenization of information production, and thus entails normative implications that may be more salient than its quantitative effects.

 

Benkler Yochai [1998], "The Commons As A Neglected Factor of Information Policy" working draft presented at Telecommunications Policy Research Conference 9/98

Abstract: Direct government intervention and privatization have long been the dominant institutional approaches to implementing information policy. Policies pursued using these approaches have tended to result in a centralized information production and exchange system. The paper suggests that adding a third cluster of institutional devices, commons, may be a more effective approach to decentralizing information production. The paper uses two examples, from spectrum regulation and intellectual property, to show that regulating certain resources as commons is feasible, and that such commons can cause organizations and individuals who use these resources to organize the way they produce information in a decentralized pattern. The paper suggests that identifying additional resources capable of being used as commons, and investing in the institutional design necessary to maintain stable commons in these resources, serves two constitutional commitments. First, commons are the preferred approach to serving the commitment that government not unnecessarily prevent individuals from using or communicating information. Second, commons facilitate “the widest possible dissemination of information from diverse and antagonistic sources.”

 

Benkler Yochai [1999], "Free as the Air to Common Use: First Amendment Constraints on Enclosure of the Public Domain", N.Y.U. Law Review 74, 354 (1999)

Abstract: Our society increasingly perceives information as an owned commodity. Professor Benkler demonstrates that laws born of this conception are removing uses of information from the public domain and placing them in an enclosed domain where they are subject to an owner’s exclusive control. Professor Benkler argues that the enclosure movement poses a risk to the diversity of information sources in our information environment and abridges the freedom of speech. He then examines three laws at the center of this movement: the Digital Millennium Copyright Act, the proposed Article 2B of the Uniform Commercial Code, and the Collections of Information Antipiracy Act. Each member of this trio, Professor Benkler concludes, presents troubling challenges to First Amendment principles.

 

Benkler Yochai [2000], "From Consumers to Users: Shifting the Deeper Structures of Regulation Towards Sustainable Commons and User Access", Fed. Comm. L.J. 52, 561 (2000)

Abstract: As the digitally networked environment matures, regulatory choices abound that implicate whether the network will be one of peer users or one of active producers who serve a menu of prepackaged information goods to consumers whose role is limited to selecting from this menu. These choices occur at all levels of the information environment: the physical infrastructure layer—wires, cable, radio frequency spectrum—the logical infrastructure layer—software—and the content layer. At the physical infrastructure level, we are seeing it in such decisions as the digital TV orders (DTV Orders), or the question of open access to cable broadband services, and the stunted availability of license-free spectrum. At the logical layer, we see laws like the Digital Millennium Copyright Act (DMCA) and the technology control litigation that has followed hard upon its heels, as owners of copyrighted works attempt to lock up the software layer so as to permit them to control all valuable uses of their works. At the content layer, we have seen an enclosure movement aimed at enabling information vendors to capture all the downstream value of their information. This enclosure raises the costs of becoming a user—rather than a consumer—of information and undermines the possibility of becoming a producer/user of information for reasons other than profit, by means other than sales.


Serena Syme and L. Jean Camp, (Kennedy School of Government, Harvard University, Cambridge, MA 02138)

http://papers.ssrn.com/abstract=297154

Syme S., Camp L.J. [2001], "Code as Governance, The Governance of Code", Working Paper, Social Science Research Network Electronic Paper Collection, April 2001, RWP01-014.

Abstract: The governance of a network society is tightly bound to the nature of property rights created for information. The establishment of a market involves the development of a bundle of rights that both create property and define the rules under which property-based transactions might occur. The fundamental thesis of this work is that the creation of property through licensing offers different views of the governance of the network society. Thus this article offers distinct views of the network society drawn from examinations of the various forms of governance currently applied to code, namely: open code licensing, public domain code, proprietary licenses, and the Uniform Computer Information Transactions Act (UCITA). The open code licenses addressed here are the GNU Public License, the BSD license, the artistic license, and the Mozilla license. We posit that the licenses are alternative viewpoints (or even conflicting forces) with respect to the nature of the network society, and that each has its own hazards. We describe the concepts of openness: free redistribution, source availability, derivations, integrity, non-discrimination, non-specificity, and non-contamination. We examine how each license meets or conflicts with these conditions. We conclude that each of these dimensions has a parallel in the dimension of governance. Within our conclusions we identify how the concept of code as law, first described by Stallman and popularized by Lessig, fails when the particulars of open code are examined. However, we explore the ways that licenses together with code provide a governance framework, and how different licenses combined with code provide a range of visions for governance of the information society. We go on to consider the fundamentally different governance model outlined by UCITA, and comment on the philosophical implications and hazards of such a framework for the world of code.



Pamela Samuelson, University of California, Berkeley, and Randall Davis, MIT

http://www.sims.berkeley.edu/~pam/papers.html

Samuelson P., Davis R. [2000], "The Digital Dilemma: A Perspective on Intellectual Property in the Information Age", TPRC (Telecommunications Policy Research Conference) 2000.

The trio of technological advances that have led to radical shifts in the economics of information are these: (1) information in digital form has changed the economics of reproduction, (2) computer networks have changed the economics of distribution, and (3) the World Wide Web has changed the economics of publication.

Legitimate copies of digital information are made so routinely that the act of copying has lost much of its predictive power as a bottleneck for determining which uses copyright owners should be able to control or not control. So many copies are made in using a computer that the fact that a copy has been made tells us little about the legitimacy of the behavior. In the digital world, copying is also an essential action, so bound up with the way computers work that control of copying provides unexpectedly broad powers, considerably beyond those intended by the copyright law.

Both technology and business models can serve as effective means for deriving value from digital intellectual property. An appropriate business model can sometimes sharply reduce the need for technical or legal protection, yet provide a way to derive substantial value from IP (intellectual property). Models that can accomplish this objective range from a traditional sales model (low-priced, mass market distribution with convenient purchasing, where the low price and ease of purchase make buying more attractive than copying), to the more radical step of giving away an information product and selling a complementary product or service (e.g., open source software).

Litman J. [2001], Digital Copyright, Prometheus Books, New York, 208 pp.

 


Open source and cooperation

 

Lerner J., Tirole J. [1999], The Simple Economics of Open Source, mimeo, IDEI, University of Toulouse (http://www.idei.asso.fr/)

There has been a recent surge of interest in open source software development, which involves developers at many different locations and organizations sharing code to develop and refine programs. To an economist, the behavior of individual programmers and commercial companies engaged in open source projects is initially startling. This paper makes a preliminary exploration of the economics of open source software. We highlight the extent to which labor economics, especially the literature on “career concerns,” and industrial organization theory can explain many of these projects’ features. We conclude by listing interesting research questions related to open source software.

On Collab.Net, Lerner and Tirole write:

Dessein [1999] shows that a principal with formal control rights over an agent's activity in general gains by delegating his control rights to an intermediary with preferences or incentives that are intermediate between his and the agent's. The partial alignment of the intermediary's preferences with the agent's fosters trust and boosts the agent's initiative, ultimately offsetting the partial loss of control for the principal. In the case of Collab.Net, the congruence with the open source developers is obtained through the employment of visible open source developers (for example, the president and chief technical officer is Brian Behlendorf, one of the cofounders of the Apache project) and the involvement of O'Reilly, a technical book publisher with strong ties to the open source community.


 

Free software and security

 

Ross J. Anderson, University of Cambridge Computer Laboratory, New Museums Site, Cambridge CB2 3QG, UK

http://www.cl.cam.ac.uk/~rja14/

Since about the middle of 2000, there has been an explosion of interest in peer-to-peer networking - the business of building useful systems out of large numbers of intermittently connected machines, with virtual infrastructures that are tailored to the application. One of the seminal papers in the field was The Eternity Service, which I presented at Pragocrypt 96. I had been alarmed by the Scientologists' success at closing down the penet remailer in Finland, and had been personally threatened by bank lawyers who wanted to suppress knowledge of the vulnerabilities of ATM systems. This made me aware of a larger problem: that electronic publications can be easy for the rich and the ruthless to suppress. They are usually kept on just a few servers, whose owners can be sued or coerced. To me, this seemed uncomfortably like books in the dark ages: the modern era only started once the printing press enabled seditious thoughts to be spread too widely to ban. The Eternity Service was conceived as a means of putting electronic documents as far outwith the censor's grasp as possible. (The concern that motivated me has unfortunately now materialised; a recent UK court judgment has found that a newspaper's online archives can be altered by order of a court to remove a libel.)

Anderson R.J. [1996], "The Eternity Service", Working Paper

http://www.cl.cam.ac.uk/~rja14/

The Internet was designed to provide a communications channel that is as resistant to denial of service attacks as human ingenuity can make it. In this note, we propose the construction of a storage medium with similar properties. The basic idea is to use redundancy and scattering techniques to replicate data across a large set of machines (such as the Internet), and add anonymity mechanisms to drive up the cost of selective service denial attacks. The detailed design of this service is an interesting scientific problem, and is not merely academic: the service may be vital in safeguarding individual rights against new threats posed by the spread of electronic publishing.
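The abstract's core idea of redundancy and scattering can be illustrated with a very small sketch; the host pool, replica count and data structures below are invented for this illustration and are not Anderson's actual Eternity Service design. Once copies of a document are scattered over enough independently run machines, a censor has to coerce a large fraction of them to make it disappear.

```python
# Toy illustration of redundancy and scattering for censorship resistance:
# replicate a document across a random subset of many servers, so selective
# denial of service requires taking down many machines, not just one.

import random

SERVERS = {f"server{i}": {} for i in range(100)}   # hypothetical host pool
REPLICAS = 12                                      # copies kept per document

def publish(doc_id: str, content: bytes) -> list[str]:
    """Scatter copies over a random sample of servers."""
    hosts = random.sample(sorted(SERVERS), REPLICAS)
    for host in hosts:
        SERVERS[host][doc_id] = content
    return hosts

def retrieve(doc_id: str) -> bytes | None:
    """Any surviving replica is enough to recover the document."""
    for store in SERVERS.values():
        if doc_id in store:
            return store[doc_id]
    return None

def censor(hosts: list[str]) -> None:
    """An attacker who coerces some hosts only removes those copies."""
    for host in hosts:
        SERVERS[host].clear()

placed = publish("pamphlet", b"seditious thoughts")
censor(placed[:6])                       # half the known replicas are taken down
print(retrieve("pamphlet") is not None)  # True: the document survives
```

A real design would add anonymity and payment mechanisms so that the remaining hosts cannot even be identified, which is exactly what drives up the cost of selective suppression.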

 

Stajano F., Anderson R. [1999], "The Cocaine Auction Protocol: On the Power of Anonymous Broadcast", in A. Pfitzmann (ed.), Proceedings of the Information Hiding Workshop 1999, Dresden, Germany

Traditionally, cryptographic protocols are described as a sequence of steps, in each of which one principal sends a message to another. It is assumed that the fundamental communication primitive is necessarily one-to-one, so protocols addressing anonymity tend to resort to the composition of multiple elementary transmissions in order to frustrate traffic analysis. This paper builds on a case study, of an anonymous auction between mistrustful principals with no trusted arbitrator, to introduce "anonymous broadcast" as a new protocol building block. This primitive is, in many interesting cases, a more accurate model of what actually happens during transmission. With certain restrictions it can give a particularly efficient implementation technique for many anonymity-related protocols.

 

Needham R., Anderson R. [1995], "Programming Satan's Computer", in J. van Leeuwen (ed.), Computer Science Today: Recent Trends and Developments, Lecture Notes in Computer Science, vol. 1000, Springer Verlag, Berlin, Heidelberg, New York, pp. 1-14, ISBN 3-540-60105-8

Cryptographic protocols are used in distributed systems to identify users and authenticate transactions. They may involve the exchange of about 2-5 messages, and one might think that a program of this size would be fairly easy to get right. However, this is absolutely not the case: bugs are routinely found in well-known protocols years after they were first published. The problem is the presence of a hostile opponent, who can alter messages at will. In effect, our task is to program a computer which gives answers which are subtly and maliciously wrong at the most inconvenient possible moment. This is a fascinating problem; and we hope that the lessons learned from programming Satan's computer may be helpful in tackling the more common problem of programming Murphy's.
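To see how quickly a hostile opponent breaks a protocol that would be perfectly adequate against mere transmission errors, here is a deliberately naive strawman invented for this sketch (it is not a protocol from the paper): the message is authenticated with a shared key, but nothing binds it to a particular moment or session, so an attacker who simply records and replays it is accepted a second time.

```python
# Strawman protocol: authenticate the action with a shared-key MAC, but
# include no nonce or timestamp.  Satan's computer cannot forge a tag,
# yet it can replay an old one at the worst possible moment.

import hashlib
import hmac

KEY = b"shared secret"

def make_request(action: str) -> bytes:
    """Naive design: the tag proves the key holder said this, but not when."""
    tag = hmac.new(KEY, action.encode(), hashlib.sha256).hexdigest().encode()
    return action.encode() + b"|" + tag

def verify(message: bytes) -> str | None:
    action, tag = message.rsplit(b"|", 1)
    expected = hmac.new(KEY, action, hashlib.sha256).hexdigest().encode()
    return action.decode() if hmac.compare_digest(tag, expected) else None

captured = make_request("transfer 100 euros to Bob")
print(verify(captured))  # accepted: the tag is genuine
print(verify(captured))  # the same recording, replayed later, is accepted again
# A sound protocol would bind a nonce, timestamp or session identifier into
# the authenticated data so that each message is valid only once.
```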

 

Petitcolas F., Anderson R.J., Kuhn M. [1999], "Information Hiding: A Survey", Proceedings of the IEEE, special issue on protection of multimedia content, 87(7):1062-1078, July 1999.

Information hiding techniques have recently become important in a number of application areas. Digital audio, video, and pictures are increasingly furnished with distinguishing but imperceptible marks, which may contain a hidden copyright notice or serial number or even help to prevent unauthorised copying directly. Military communications systems make increasing use of traffic security techniques which, rather than merely concealing the content of a message using encryption, seek to conceal its sender, its receiver or its very existence. Similar techniques are used in some mobile phone systems and schemes proposed for digital elections. Criminals try to use whatever traffic security properties are provided intentionally or otherwise in the available communications systems, and police forces try to restrict their use. However, many of the techniques proposed in this young and rapidly evolving field can trace their history back to antiquity; and many of them are surprisingly easy to circumvent. In this article, we try to give an overview of the field; of what we know, what works, what does not, and what are the interesting topics for research. Keywords: information hiding, steganography (usually interpreted to mean hiding information in other information), copyright marking.
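As a pointer to what the survey covers, here is a minimal sketch of one classic technique from that literature, least-significant-bit embedding. The synthetic integer "samples" standing in for pixel values are an assumption of this demo: the hidden message rides in changes of at most one quantisation step, which is why such marks are imperceptible to the eye yet easy for an adversary to strip.

```python
# Minimal least-significant-bit (LSB) steganography: hide a message in the
# lowest bit of each cover sample.  The cover data here is synthetic.

def embed(cover: list[int], message: bytes) -> list[int]:
    """Store each bit of the message in the LSB of one cover sample."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for message")
    stego = cover[:]
    for pos, bit in enumerate(bits):
        stego[pos] = (stego[pos] & ~1) | bit
    return stego

def extract(stego: list[int], length: int) -> bytes:
    """Read the LSBs back out and reassemble the hidden bytes."""
    out = bytearray()
    for i in range(length):
        byte = 0
        for j in range(8):
            byte |= (stego[i * 8 + j] & 1) << j
        out.append(byte)
    return bytes(out)

cover = list(range(256)) * 4   # fake 8-bit "pixel" values
secret = b"(c) 1999"
stego = embed(cover, secret)
assert extract(stego, len(secret)) == secret
print(max(abs(a - b) for a, b in zip(cover, stego)))  # distortion never exceeds 1
```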