Chapter 4: Private Networks and Public Access

Private Networks and Public Access: Technology and Copyright Law

The advent of digital technology, the “transforming revolution” of the twentieth century, radically changed the shape of “literary works,” their production, exchange and consumption, as well as the paradigm of the debates around copyright law. The changing nature of the author, the proprietary nature of ideas, ownership and control all coalesce in, and are invalidated by, the shifting nature of the new technical environment. According to Peter Menell, after the printing press and broadcast technology, the digital revolution is the third wave of technological revolution. While enabling new modes of creative expression, it poses a new set of challenges for those engaged in the production and exchange of literary and cultural works. Jessica Litman notes that the growth of the Internet has in many ways surpassed any historical analogy we can unearth, bringing an explosion of new possibilities in the process.

With its multifarious compression technologies, its liberation from time-space constraints and its vast information networks, the digital era would logically seem a boon for the “encouragement of learning” which was the dictate of one of the first copyright legislations in history. The Enlightenment concept of intellectual property, meant to dismantle commercial monopolies on the circulation of thought and to spread knowledge freely among the citizenry, would find a close ally in the Internet, expressly designed as a tool for sharing information. There is a nagging doubt, however, that plagues the copyright industries, as there has been historically with the advent of new technologies. We have seen how the system of copyrights, though standing on the shoulders of authors, actually preserves the market share and control of the actual ‘owners’, i.e. publishers and distributors. For entertainment and information companies at the turn of the twenty-first century, the Internet held out not only the promise of vast new markets for their products but also the threat that widespread, unauthorized copying would devastate every one of their markets, old and new.

The story of copyright, in one way, is also the story of the law’s tenuous relationship with technology. Moments of technological leaps become sites for the introspection/examination and realignment of copyright legislation with the changing face of culture, creativity and knowledge. As the ground shifts, cracks appear and technology slips through the legal boundaries, unencumbered by proprietary regimes and monopoly interests. Copyright owners have always expressed anxiety around technological changes that transform the way we create, experience and share cultural works. Each technological leap renews the discourse around copyrights, culture and knowledge, and the new possibilities for freedoms as well as control emerge, polarising the debate. While the proponents of free culture declare new freedoms, the copyright industries and policy makers aggressively reassert property rules and ownership rights to rein in the transformation.

Paul Goldstein points out two approaches to new technology expressed at a two-day meeting convened in Amsterdam in July 1995 by the Royal Netherlands Academy of Sciences and the University of Amsterdam’s Institute for Information Law. Speaking at the meeting, John Perry Barlow, a former lyricist for the Grateful Dead, invoked the mantra “information wants to be free” to support his claim that “we are sailing into the future on a sinking ship.” Earlier, Barlow had written about copyright law and new technologies in Wired magazine in 1994. Using the metaphor of wine to address the idea-expression divide, he said that copyright protects the bottles and not the wine; but now the bottles have overflowed and the system makes no sense. At the conference, however, it was the presentation by Charles Clark, the Legal Advisor to the International Publishers Copyright Council, that articulated the opposite pole. In his presentation, titled “The Answer to the Machine is in the Machine,” the main question raised by Clark was not how to prevent access and protect works from theft, but how to monitor access and use. He proposed the use of the new technologies to regulate the flow of information so as to provide for payment for use. The debate on copyrights in the digital arena is suspended between these two polarities. Both, in one way or another, reject the copyright system as adequate to contain the technological changes. Barlow rejects copyrights outright, in favour of “an entirely new set of methods,” whereas Clark proposes a strict regulatory regime where owners would be able to attach price tags to any and all uses of works, overriding the public good mandate of copyright law. Historically, copyright law has tended to follow the growth of technology and expand its purview accordingly. It has come to signify ever-increasing ‘rights’, adding newer and more intricate laws to its growing bundle. As Peter Menell notes,

“more pages of copyright law have been added to the U.S. code in the past decade than in the prior 200 years of the republic, dating back to the first U.S. Copyright Act adopted in 1791.”

The potential of digital technology to change the shape of existing literary works as well as the paradigm for future creation requires careful deliberation. As we will see, policy discussions on copyright legislation have a direct bearing on private individuals deemed criminal by the changing shape of the law. This chapter aims to undertake a study of copyright law in the digital age to understand the various positions, nuances and possibilities that have been articulated in contemporary copyright discourse.

The study relies heavily on developments in the United States, which is the primary site for the growth of both technology and copyright law. The U.S. has also emerged as the primary site for the development of copyright discourse in current times. As Fiona Macmillan and Kathy Bowrey state, it is clear “both from a societal and paradigmatic view that today’s copyright law is shaped by a combination of two phenomena, which paved the way for the Information Society: computer technology and millennium capitalism. All this accompanied by the ever-increasing political dominance of the USA.” As discussed earlier, with the expansion of the U.S. as an information economy, its role and stake in global copyright legislation increased manifold, largely with the agenda of furthering capitalist interests. The shift of the U.S. from an industrial to an information economy also placed it at the forefront of the technological changes that became the concern of copyright legislation across the world. This study of the interaction of copyright law and new technologies attempts to look at the implications of the development of the law on the future, with American advocacy for stringent international copyright regimes as a background.

Tech Talk: Mapping new technologies on old terrain

Because digital technologies flow seamlessly and invisibly across national borders, governments cannot patrol cyberspace. Cyberspace is indeed a new world, but according to Harvard Business School professor Deborah Spar, it is not the only new world. There have been moments in history when the birth of a new technology has called forth a complete restructuring of our physical and intellectual landscape, as its pioneers bent authority to their will and reaped profits along the way. Spar tries to pry the Internet away from the discourses of “new technology” and instead locates its genesis in older and dimmer roots. The roots lie with pioneers such as Thomas Edison and Marconi, who saw the fantastic opportunity of technology. And yet the Internet, for the first time, shatters our notion of what a “state” does or what a national economy is.

Theoretically this shift in geography should be a tremendous boon to firms, just as it is a rather terrifying prospect for states. Freed from government control, firms should be able to come up with their own regulation. This, after all, is the political thrill of the Net. Yet Spar suggests that other stories of technological advancement have also been united in their attempt to pronounce the state dead. The point to be remembered in all this, as Spar says, is that governments eventually do survive because, ironically, both entrepreneurs and society want them. Governments provide the property rights that entrepreneurs eventually want; they provide the legal stability that commerce craves; and they provide the order that society demands. In the end even pirates and pioneers want order.

Spar traces the nuances of information transmission back to the 15th century, when information was the exclusive domain of the Catholic Church. As each new technological leap made the dissemination of ideas easier and more rapid, the hegemonic classes grew wary of the revolutionising potential therein. She traces this trend in the reaction of the Church to the printing press, expressed first in the anxiety around the increased access that it lent to the Bible. Rules changed and power shifted, but the patterns that had dominated before Gutenberg’s time remained solidly in place.

An equally striking dynamic surrounded the development of radio, another major stage in the information revolution. In 1896, Marconi devised a small black box that transmitted Morse code via electromagnetic waves. But when he crossed the border, customs officials smashed the box to pieces, fearing that it would inspire violence and revolution. Marconi eventually created a firm of his own, designing the radio for commercial purposes, and before long the government reappeared and declared a security interest in Marconi’s device. By the start of the First World War, the Marconi Company had become a full-fledged contractor for the British government, and the British navy controlled the fledgling technology of radio transmission.

Looking at the patterns of both printing and radio, we see that technology challenges authority for some period of time, but then, ironically, seems to invite this authority back in. Perhaps the Internet is different, as Spar says, perhaps so revolutionary and so international that it will disrupt the patterns that have prevailed in the past. But what is most important to note is that each technological leap has created a political gap. Deborah Spar also argues that business is inherently political and that the interests of commerce mark politics. This overlap is particularly strong along the edges of technological frontiers, for it is here that markets are created, where industries spring to life and then settle down into some kind of ordered existence. As this process unwinds, power is distributed and structures are established. It is a hugely political enterprise, even if governments are not actively calling the shots or regulating commercial activity. And if there are pioneers, there will be pirates as well, because, as Henri Pirenne said, “piracy is the first stage of commerce.” Between the 15th and the 17th century, advances in navigation had literally opened up a new world of commerce. Throughout this period a steady stream of innovation had enabled European explorers to push into what for them was terra incognita, unknown territory or virgin land.

Towards the 17th century, economic prospects became the driving factor. By the time the merchants set sail, however, they were no longer alone. The seas were full of pirates: rogue sailors or freebooters whose business lay in seizing merchant ships and grabbing whatever might be on board. But what really was the difference between the pirates (who looted on their own behalf) and the privateers (who raided on behalf of the state)? In the raucous days of the 17th century, it was hard to tell. The pirates simply took advantage of a classic gap between law and technology, because in the middle of the Atlantic there were no property rights or ways to enforce them. According to Spar, life along the technological frontier moves through four distinct phases: innovation, commercialization, creative anarchy and rules.

Phase one: Innovation

This is the stage of tinkerers and inventors, a stage marked by laborious exploration and the sudden thrill of discovery. This is the phase that sparks the imagination and provides motivation for the next generation of dreamers and planners. It is not a phase in which much commerce takes place. Even the Internet was distinctly non-commercial at the outset: it was a security tool, a means of communication among a small and specialized group. During this phase there are no rules because none are needed: innovation hasn’t developed to the point where property rights are critical; there are no questions yet of access or unfair competition; and the societal impact of the new technology is minimal.

Phase two: Commercialization

Once the technology is out of the labs and in the public eye, a whole new set of characters moves on to the frontier: the pioneers, the pirates, the marshals and the outlaws. In this second phase people see the commercial benefits of the new technology and its ability to turn into a mass market. When the technology is truly revolutionary, they can also see how it carves out new spheres of commerce, spheres that exist beyond the realm of existing markets and the reach of authorities. Speed is essential during this phase because, tempted by the dual visions of anarchy and wealth, entrepreneurs of all sorts rush into the frontier to stake their claims. Their interests during this phase are largely territorial. Apart from the pioneers, pirates too foray into the new land, often blending with the pioneers. In the 19th century pirates plagued the nascent telegraph industry, “borrowing” the patented technology to create their own competing systems. And in recent times, pirates have stolen television signals from the skies and encryption codes from the Net. Pirates seem to adhere to a certain historical rhythm. When a technology is new it doesn’t attract too many rogues: in the first phase it is simply too technical and uncertain. It is only once the technology slips into the commercial realm and begins to generate extraordinary profits, in the second phase, that the pirates begin to flock. Because rules during this period are ill defined, pirates can operate almost without restriction. This trend is noticeable in the case of people like Philip Zimmermann, the programmer who created one of the world’s most sophisticated encryption programs and posted it on the Internet. What does one label him as: a mathematical genius, or a renegade intent on violating the security of the United States? During these times, the rules are just too flimsy to tell.
Unless governments manage to nip a technology in the bud of innovation (as with television), it is very difficult for them to control that same technology once it has entered the expansionary stage. Likewise, there are many aspects of the Internet economy that, at the turn of the century at least, appear far beyond the reach of any national government: content that streams in from foreign sources and information that hides under disguised names and slips across invisible borders. In this phase, therefore, the politics of the frontier are decidedly libertarian: markets take over, individuals steer their own fate and governments retreat.

Phase Three: Creative Anarchy

The romantic period of phase two does not last very long, and problems begin to burst out along the frontier, compromising the commerce that has already emerged and threatening its long-term development. The pioneers that now people the frontier will demand their resolution. During the early phases of development, ownership is a secondary and often irrelevant concern. However, as the technology matures and markets widen, a demand for property rights is liable to emerge. Now that they have carved out positions along the frontier, the more established pioneers no longer want to work in chaos or cavort with pirates. Instead, they want to own markets, keep interlopers at bay and have ways of enforcing property rights.

There are three problems that are often concomitant with this stage:

  1. Problem of the commons: in these cases the creation of the new market rests on the use of a particular resource, one which is large but far from infinite, like the oceans or the airwaves. In these situations the more established settlers would again petition for property rights.
  2. Problem of coordination: when a technology is first evolving, a burst of innovation will tend to produce multiple devices and systems. But if technologies are to develop into fully-fledged markets, they need some set of common standards, some means of coordinating their systems and allowing users to migrate freely among them. And the problem is that standards do not emerge by themselves.
  3. Problem of competition: often the levers of a new technology, and of a potentially vast market, are put under the control of a single firm. It is a problem of dominance and control, a problem of innovation and a problem of justice.

Together these three problems can create a situation of anarchy, and resolving them becomes the final stage of the frontier.

Phase Four: Rules

Once the pioneers realize the cost of anarchy, and that the lack of rules can diminish their own financial prospects, they begin to lobby for what they once explicitly rejected. It is also not always the pioneers who clamour for rules; sometimes it is the state, or a coalition of societal groups affected by the new technology. In general, though, rules get created because private firms want them. It is fruitful to track these four phases in digital technology, and it is probably the area of music that has most consistently embraced, and yet tried to regulate, the processes of technology.

Consider this rap by Chuck D:

If you don’t own the master
Then the master owns you.
Dollar a rhyme
But we barely get a dime.

In 1998 Chuck D stormed into cyberspace. Rather than giving his latest songs to Def Jam, the label that had produced his music for over a decade, the rap artist released his music directly onto the Internet, at www.publicenemy.com. With just a couple of songs he challenged how music was sold and, even more fundamentally, how it was owned, spelling bad news for the music business. For Chuck D, putting music online was a matter of power, of giving recording artists the influence and the money that was rightfully theirs. For decades before that, companies such as EMI and Polygram had operated under a lucrative set of rules. They signed long-term contracts with the artists they deemed most attractive, and then managed the business side of the artists’ careers: recording, distributing and marketing their work. The artists received a prearranged portion of the sales, but the studio retained the legal rights to the music. Ownership of the property, in other words, rested with the studios rather than with the artists. By putting his music directly online, Chuck D circumvented the entire legal and commercial structure that the studios had so carefully erected. He took music which had traditionally been their property and made it his again. The advent of digital technologies such as the MP3 meant that the entire legal foundation of the old recording system was thrown into confusion. Because these practices were so new in the late 1990s, the law was simply silent on them: there was no regulation of MP3 technology and no system of property rights that specifically applied to online music.

Chuck D, however, was not the first to explore the uncharted terrain of digital technology. In fact, it was in the field of music that early exploration of the technological frontier had revealed a world of new creative and commercial possibilities. Copyright scholar and author Siva Vaidhyanathan traces the advent of digital technology through the transformation it brought about in the field of music. He mentions, in particular, Herbie Hancock’s experiments with the electronic synthesizer.

The Digital Moment: birth of the future

As a keyboard player, Hancock soon discovered the creative potential of a new instrument: the electronic synthesizer. Synthesizers offered Hancock and other composers a new set of sounds and new ways to manipulate them. Keyboard players could generate thousands of new sounds: buzzes, chirps, whistles, solid tones (with unlimited sustain), crashes, and sirens. Players could alter the pitch, duration, and timbre of a song by tweaking a few knobs or dials. Early synthesizers were huge and ungainly, difficult to employ for live performances. They used analog technology: different electric voltages created and controlled the sounds, with higher voltages generating higher notes and lower voltages lower notes. By the mid-1970s, several companies had introduced polyphonic analog synthesizers with attached keyboards. Soon synthesizer companies added computer memory to their systems, making it easier to use smaller synthesizers in live shows. By 1979, keyboards came with computer interfaces installed. If all of a musician’s synthesizers were of the same brand, they could operate together through a single keyboard. Hancock, enchanted by the new gadgets, customized connections for his various synthesizers so they would work in concert. Hancock’s hacking inspired the next revolutionary move in electronic music: the creation of an open compatibility standard known as the Musical Instrument Digital Interface, or MIDI, in 1982. MIDI software protocols tell a synthesizer the duration of a note, the shape and pitch of a sound, and its volume.

MIDI transforms the analog signal of a synthesizer into a digital stream, representing all the variances of sounds in a string of zeros and ones. And MIDI allows that information to flow over a network of musical instruments and input and output devices.
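This description of MIDI as a stream of note, pitch and volume information can be made concrete with a small sketch. The example below is illustrative and not drawn from the source: it builds a three-byte “Note On” event by hand, following the byte layout of the published MIDI 1.0 specification (a status byte of 0x90 plus the channel number, then a note number and a velocity, each between 0 and 127).

```python
# Illustrative sketch: a hand-built MIDI "Note On" event.
# Byte layout follows the MIDI 1.0 specification:
#   status byte = 0x90 | channel, then note number (0-127), then velocity (0-127).

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Return the three-byte Note On message for one channel."""
    assert 0 <= channel < 16 and 0 <= note < 128 and 0 <= velocity < 128
    status = 0x90 | channel  # high nibble: message type; low nibble: channel
    return bytes([status, note, velocity])

msg = note_on(channel=0, note=60, velocity=100)  # middle C, moderately loud
print(msg.hex())  # -> 903c64
```

Every event that Hancock’s networked synthesizers exchanged reduces to short byte strings of this kind, which is to say, to ones and zeros.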

Within a couple of years, MIDI became the universal standard for digital music. And its success opened the music industry to the potential of converting every step in its production process to digital technology. The MIDI standards are now used by home computers to generate, share, and play music and video files. At its heart, MIDI is like the blues-based music that inspired Herbie Hancock’s career: portable, widely compatible with a variety of instruments, open for anyone to improve, and thus powerfully adaptable.

The parallels between jazz and open technology were not lost on Hancock, who had been an engineering student at Grinnell College in the 1950s, writes Vaidhyanathan. In 1983, Hancock released an electronic album called Future Shock. It featured a single called “Rockit” that soon climbed to the top of the dance and soul charts and garnered a Grammy award for best rhythm and blues single. The song featured sampled sounds and “scratches” of the kind rap artists were using, over a bed of jazzy electronic keyboard riffs.

Vaidhyanathan also notes some of Hancock’s experimentation with technology in music that anticipated significant trends later brought on by the digital era. Hancock released a video of the song at a time when MTV was in its infancy, thereby not only inspiring the digitization of music in general and the daring fusion of pop music styles, but also helping establish the music video as a site of intense creativity in the early 1980s. He was also instrumental in making sampling acceptable as an artistic technique within the African American musical tradition. In 1993, Hancock allowed the rap group Us3 to sample his 1964 classic “Cantaloupe Island.” Us3 worked with the Blue Note jazz catalogue to create the hit album Hand on the Torch, which opens with the funky dance single “Cantaloop.” Sampling requires converting a song from analog to digital signals and manipulating it to make it part of a new work. Herbie Hancock’s musical career thus shows experimentation with the various aspects of digital technology that were to cause a sea change in the way we perceive creativity, culture and knowledge.

Ones and Zeros: defining the digital revolution

A string of ones and zeros can be used to express virtually any form of creative expression in a digital environment. Never before was intellectual property so vaporous and intangible as in the digital realm. Vaidhyanathan highlights the two most significant processes inherent in the digital moment: “the digital representation of all forms of expression and the rise of the networks.” In recent times, the increasing speed of networks as well as the growing storage space for digital ‘files’ have contributed to the increased acceptance of digital technology by artists, musicians, hackers, intellectuals, policy makers and business leaders.

The synergistic relationship between these two processes – digitization and networking – has collapsed some important distinctions that had existed in the American copyright system for most of the twentieth century:

  • Ones and zeros are the simplest possible grammar through which we can express anything. A living, breathing symphony orchestra may be the most complex medium one could choose to express the same notes, and the analog vibrations in the air that fill a symphony hall might be the most complex grammar one could use to express those ideas. Perhaps the ones and zeros are ideas, and the analog versions we inhale are the expressions. But if strings of ones and zeros operate as an alphabet, a code, for representing ideas, shouldn’t they enjoy status as expressions? Are strings of digital code expressions worthy of both copyright protection and First Amendment protection?
  • The digital moment has also collapsed the distinctions among three formerly distinct processes: gaining access to a work; using (we used to call it “reading”) a work; and copying a work. In the digital environment, one cannot gain access to a news story without making several copies of it. If I want to share my morning newspaper with a friend, I just give her the object. I do not need to make a copy. But in the digital world, I do. When I click on the web site that contains the news story, the code in my computer’s random access memory is a copy. The source code in hypertext markup language is a copy, and the image of the story on the screen is a copy. If I want a friend to read the story as well, I must make another copy that is attached to an e-mail. The e-mail might sit as a copy on my friend’s server. And then my friend would make a copy in her hard drive when receiving the e-mail, and make others in RAM and on the screen while reading it. Copyright was designed to regulate only copying. It was not supposed to regulate one’s rights to read or share. But now that the distinctions among accessing, using, and copying have collapsed, copyright policy makers have found themselves faced with what seems to be a difficult choice: either relinquish some control over copying or expand copyright to regulate access and use, despite the chilling effect this might have on creativity, community, and democracy.
  • The third distinction that the digital moment collapsed is that between producers and consumers of information and culture. The low price of network-ready computers and digital equipment in the United States has reduced the barriers to entry into music, literature, news, commentary, and pornography production and distribution. Technology transforms bedrooms into recording studios and chat rooms into billboards. A musician can create music on a computer sitting in a small room and instantly distribute the digital tracks worldwide. Artists and writers can publish their work on blogs and create a fan base without the aid of publishing houses and marketing campaigns. Of course, the ease of distribution and the low barriers to entry have created a cacophony of “white noise” in the digital environment. Creativity has been democratized, but it’s that much harder to attract an audience or a market.
  • Digitization and networking have also collapsed the distinctions between local and global concerns. The U.S. Congress can outlaw gambling on the Internet. But the U.S. government has no authority to regulate a server on a small island in the Caribbean Sea. As with all questions of digital regulation, what jurisdiction should rule on copyright concerns?
  • The distinctions among the different types of “intellectual property” have also eroded, if not collapsed. They have certainly collapsed in the public mind and generated much confusion in public discourse. The distinctions have also collapsed in practice. For instance, computer software was until the late 1980s the subject of copyright protection. Then the U.S. Patent Office started issuing patents for algorithms. As the industry has grown, so have the stakes in its legal protection. Now software can carry legal protections that emanate from copyright, patent, trademark, trade secret, and contract law. So while the phrase “intellectual property” was merely a metaphor and an academic convention in the 1960s, by 2000 it was a reality.
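The first of these collapsed distinctions, that ones and zeros form the simplest possible grammar for any expression, can be illustrated with a toy sketch (my example, not the source’s): any text, and by extension any digitized expression, reduces to a reversible string of bits.

```python
# Toy illustration (not from the source): any text reduces to a
# reversible string of ones and zeros, eight bits per UTF-8 byte.

def to_bits(text: str) -> str:
    """Render each UTF-8 byte of the text as eight binary digits."""
    return "".join(f"{byte:08b}" for byte in text.encode("utf-8"))

def from_bits(bits: str) -> str:
    """Recover the original text from the bit string."""
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

bits = to_bits("Rockit")
print(bits)             # 48 ones and zeros
print(from_bits(bits))  # -> Rockit
```

The round trip is lossless, which is precisely what makes a digital copy indistinguishable from its original.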

The collapse of some of the fundamental principles of copyright regimes presents a complex challenge to legal structures that depend on the past to encapsulate the future. The immediate response of policy makers and the culture industry was to extend the existing laws of proprietorship from the print era onto the digital world. As with the transplanting of the rules of tangible property onto intellectual works, this linear application of existing laws to a new environment was fraught with problems. Jessica Litman critiques this easy solution, pointing out that protecting intellectual property in the digital world through pre-existing rules necessitated the creation of a large number of new rules and laws, because the modes of creativity and exchange had entirely changed in the digital environment. Technology also enables new forms of expression that are beyond the scope of existing legislation. The tendency of the copyright regime has been to rely on existing parameters to filter the use and shape of new technology. As Vaidhyanathan indicates, however, the new forms of expression explode the boundaries of the law and obfuscate the traditional categorisations.

Code of Law: the protection of software programs

The protection of computer programs brought about one of the early interactions of copyright law and digital technology. For early computer programs, there was little need for copyright (or patent) protection. Computers were few in number and most software was custom-developed for in-house applications. It wasn’t until the early 1960s that computer programs were being actively marketed by a software industry beyond the computer manufacturers. Before software was widely marketed, it was easy to protect any computer program through a contract or license agreement.

According to the Copyright Office, the first deposit of a computer program for registration was made on November 30, 1961, when North American Aviation submitted a tape containing a computer program. While the Copyright Office was trying to determine whether such a deposit could be registered, two short computer programs were submitted by a Columbia University law student to test how a computer program might be registered. One was submitted as a printout published in the Columbia Law School News on April 20, 1964, while the other was on magnetic tape. The copyrights for both student computer programs were registered in May 1964, and North American Aviation’s computer program was registered in June 1964.

In the U.S., the Copyright Act of 1976, which became effective on January 1, 1978, made it clear that Congress intended software to be copyrightable. The definition of literary works in Section 101 states that they are:

“Works, other than audiovisual works, expressed in words, numbers, or other verbal or numerical symbols or indicia, regardless of the nature of the material objects, such as books, periodicals, manuscripts, phonorecords, film, tapes, disks, or cards, in which they are embodied. {FN7: 17 U.S.C. §101}”

Furthermore, the House Report discussing the Act states:

The term “literary works” does not connote any criterion of literary merit or qualitative value: it includes catalogs, directories, and similar factual, reference, or instructional works and compilations of data. It also includes computer data-bases, and computer programs to the extent that they incorporate authorship in the programmer’s expression of original ideas, as distinguished from the ideas themselves. {FN8: H.R. Rep. No. 94-1476 at 54}

Soon after, the Computer Software Copyright Act of 1980 was signed. Yet it remained unclear how much protection Congress intended to give computer programs, and whether there should be special exceptions to the exclusive rights of copyright owners, as was the case for some other types of works.

Copyright scholar and author Debora Halbert goes on to analyse the construction of an analogy between software and traditional literary works. She refers to the work of Anthony Lawrence Clapes, assistant general counsel at IBM, as an example of the argument for strong protection. Clapes quotes a variety of computer scientists to portray programming as an intellectual endeavour that combines both art and science: it is intuitive, imaginative, “pure thought stuff,” “a tangible form of dream and imagination.” He describes the programmer as a typically eccentric artist who loves intellectual challenge.

According to Halbert, the conflation of a computer program with a poetic or literary work ignores two important points. First, in highlighting the creative aspect of the individual programmer, it ignores the utilitarian function of the program, which necessitates the re-use of code, and it overlooks the development of the software industry through collaborative work, a culture later supplanted by the profit motive. Secondly, it ignores the fact that, as in traditional copyright, ownership rests not with the programmer but with large companies like IBM. The myth of the romantic author conceals the political economy of software production and design. Positing computer programs as literary works primarily serves software companies that want to maintain a monopoly over their product’s market share. Many scholars have deemed the long term of protection given to literary works unfit for a field that constantly changes and develops; patent law, with its shorter term and utilitarian bent, has been suggested as more appropriate for protecting computer programs.

Copyright protection of software involves two important components: the source code (or program code) and the object code. The source code is the literal code that can be read and written by the programmer; the object code is read by the computer and is indecipherable to humans. The early debates about software dwelled on the protection that could be granted to these separate elements: does protection cover only the literal, readable source code, or does it extend to the elements that give the final “look and feel” to the software program? Looking at some early cases provides an insight into the types of claims made by intellectual property owners to gain legal protection for what they see as their property. Halbert examines a series of judgements in cases of copyright violation in software in the 1980s and 1990s, such as Apple Computer, Inc. v. Franklin Computer Corp. and Lotus v. Borland, to highlight the contradictory readings of the courts with regard to the protection of software programs. Her reading of the amicus curiae briefs in Lotus v. Borland reveals that while major software companies vied for strong protection of software, a number of computer scientists, programmers, law professors and user groups favoured the less protective stand of the courts.

Discussions at the Uruguay Round of multilateral trade negotiations finally put a seal on these debates when the TRIPS Agreement incorporated the provision that “computer programs, whether in source or object code, shall be protected as literary works under the Berne Convention.” This was subsequently reiterated in the WIPO Copyright Treaty (WCT), which states that protection applies to computer programs whatever the mode or form of their expression. As Halbert indicates, the trend has been towards the expansion of the copyright regime to all forms of creativity without considering whether its structures of ownership and control actually benefit new forms of expression.

Sweat of the Brow: Owning Data[bases]

Originality has been one of the core principles of copyright law since its inception. The limited monopoly given over literary works was, in essence, granted for the original expression of the author. While originality is the ideal, works that involve a significant amount of labour have also received copyright protection, making two standards of copyright possible: the first is an original creation, the second a work produced by “the sweat of the brow.” Both involve authorial rights, the strongest of which attach to the original work. The decision in Feist Publications, Inc. v. Rural Telephone Service Co. in 1991 brought the discourse on original authorship into the digital age. It touched on the issue of database protection and showed the continuing importance of the notion of original authorship in literary works.

A database is a collection of information or available facts such as names, addresses, phone numbers, etc. Different databases may be designed to organise facts according to the different and specific needs of their users. The database industry has grown considerably since computers first made it easy to store and catalogue information digitally. Their specific utilitarian value and the resulting revenue would suggest that databases would be strictly protected. However, in Feist Publications, Inc. v. Rural Telephone Service Co. the U.S. Supreme Court ruled that there was not sufficient originality in the White Pages to constitute a creative work. A phone book company, regardless of the time, effort and money involved in compiling a directory, could not claim copyright protection over the mere information in the text. The court defined originality thus: “To qualify for copyright protection, a work must be original to the author. Original, as the term is used in copyright, means only that the work was independently created by the author (as opposed to copied from other works), and that it possesses at least some minimal degree of creativity. To be sure, the requisite level of creativity is extremely low; even a slight amount will suffice.”

The court in this case provided only thin copyright protection to databases, allowing others to lift the facts from a publication for inclusion in a competing publication. The recent legislative trend, however, is to expand the protection given to databases. As with software programs, the trend of legislation on databases has been to vie for maximum protection. Siva Vaidhyanathan discusses the efforts of American and European negotiators to create a new form of intellectual property law that would make databases protectable outside the constraints of American copyright law. Here again, the “sweat of the brow” principle is used to justify the demand for protection of the authorial effort in compiling a database. The demands regarding databases in international conventions will be detailed later in the chapter; however, it is useful to look first at the problems with the proposed protection of databases as explained by Vaidhyanathan. Writing specifically about scholarly enterprise, Vaidhyanathan elaborates the threat to research posed by the protection of databases as literary works. The proposed regulations mandate the renewal of copyright each time new data is added to the database. This would, in effect, mean perpetual renewal and perpetual copyright control. Moreover, if Clark’s suggestion of strict regulation of use were realized, access to the database would have to be purchased at whatever price the database company deems fit. A researcher would have to pay for each cross-reference and each individual statistic used from the database. Under traditional regimes, this might be protected by the fair use principle, but access controls would mandate a payment to enter the database and view the data contained therein. The biggest irony that database protection presents is the exclusion of the subjects of the database from accessing information compiled on them. It would allow corporate interests to monopolize information that belongs in the public sphere.

The “Digerati” and “Copyleft”

The increasing control over digital technology and the uses to which it could be put did not go entirely unnoticed. The narrowing spaces available to software developers, technologists and researchers prompted the formation of the Free Software Foundation as an ideological alternative to the proprietary regime of copyright.

Richard Stallman, a programmer working at the Massachusetts Institute of Technology, saw the rise of proprietary software systems as a severe threat to freedom and creativity. In fact, Stallman argued, too much control over software through contract, trade secrets, or copyright impeded the development of the best possible software.

The software industry was born out of collaboration among the academy, the government, and private industry. And in the 1960s and 1970s, much of the culture of software reflected the openness and spirit of community and inquiry that exist within the academy. Recalling the initial stages of the development of the software industry, Stallman wrote:

“When I started working at the MIT Artificial Intelligence Lab in 1971, I became part of a software-sharing community that had existed for many years. Sharing of software was not limited to our particular community; it is as old as computers, just as sharing of recipes is as old as cooking. But we did it more than most.”

But once the industry outgrew its own incubators, a different, conflicting value infected its practices. What was once public, shared, collaborative, and experimental became secret, proprietary, and jealously guarded.

In the 1960s and 1970s, computers were used widely only by programmers. Software companies (which were more often than not also hardware companies, such as AT&T and IBM) released the source code with their software so that programmers could alter and customize it to their needs. Source code is the set of instructions that human beings write in languages such as Fortran, Pascal, COBOL, and C++. A program called a “compiler” translates source code into “machine language,” or object code. In general, only humans can read source code; only machines can read object code. As the software industry blossomed in the 1980s, companies realized there was commercial value in keeping the source code secret. If a buyer needed a particular feature, he or she had to order it from the software company. In addition, competing software companies would have a difficult time replicating the effects of the object code without access to the source code.

Before the rise of Windows, UNIX was one of the most common and powerful operating systems available. It was flexible, powerful, and stable. But it was hardly user-friendly: only professionals dared to play with UNIX. When AT&T, which distributed UNIX (although it was developed in collaboration with universities, especially the University of California at Berkeley), bottled up its source code in the 1980s, it angered many computer programmers who had considered themselves part of the UNIX team. Among these was Richard Stallman. Stallman grew frustrated that he could not customize UNIX to run particular printers and other peripherals. If he could only get a peek at the source code, it would take him minutes or hours to create a patch and make the system work better. Instead, every time UNIX users had a problem, they had to wait months or years for AT&T to roll out another version and fix it. Frustrated by the unwillingness of university computer administrators to stand up for their values in the face of increasing corporate control, Stallman left MIT and launched the Free Software Foundation in 1984 to promote the use of “free software,” programs unencumbered by proprietary restrictions on alterations, revisions, repairs, and distribution. Also in 1984, Stallman wrote the “GNU Manifesto.” GNU stands for “Gnu’s Not UNIX!” In the manifesto, Stallman wrote:

“I consider that the golden rule requires that if I like a program I must share it with other people who like it. Software sellers want to divide the users and conquer them, making each user agree not to share with others. I refuse to break solidarity with other users in this way. I cannot in good conscience sign a nondisclosure agreement or a software license agreement.”

Stallman went to great lengths to define the freedom he valued. It was not the “give it away for free” freedom that idealized the foolishly generous. As Stallman put it, “Free software is a matter of liberty, not price. To understand the concept, you should think of ‘free speech,’ not ‘free beer.’” Stallman outlined four specific freedoms central to the Free Software movement:

  • The freedom to run a program for any purpose.
  • The freedom to examine and adapt a program (and thus to get access to the source code, making it “Open Source”).
  • The freedom to distribute copies.
  • The freedom to improve any program.

Stallman started coding free programs that would work with UNIX, but he hoped for a better, fully open operating system to emerge. In the 1990s, other programmers produced Linux, the operating system Open Source champions needed to make free software important and powerful. By the year 2000, the Free Software movement had grown into a major force in the software world. But for this to occur, Stallman had to come up with a way to ensure that no one company could corner the market on the work that Free Software programmers produced. If Stallman and his collaborators released their programs without any copyright protection, declaring them in the public domain, then a company such as AT&T or Microsoft could bottle up that work by adding a few proprietary and highly protected features. So instead, Stallman came up with an ingenious license that he called “Copyleft.”

Copyleft licenses require that anyone who copies or alters Free Software agree to release publicly all changes and improvements, and those changes retain the Copyleft license. The license thus perpetuates itself, spreading the principle of openness and sharing wherever someone chooses to use it. This prevents any company from releasing proprietary versions of free software: a company that released a “closed” or “unfree” version would be violating the “GNU General Public License” (or GPL) that it agreed to in the first place. The code and the freedoms attached to it become inalienable. The proliferation of free software could not have occurred without this license, which uses the power of the copyright system to turn copyright inside out. Copyleft’s power and popularity have led many people to examine the foundations upon which copyright rests and to ask whether its powers have actually worked to impede creativity. By the year 2000, the principles behind Free Software and Copyleft remained fringe views, even though the software they inspired and enabled had worked its way into the mainstream of the computer industry.

Controlling the Digits: Legislative Expansion of Control

As digital technology developed in America, specific constitutional and public policy recommendations created a climate of strict protection. The 1976 National Commission on New Technological Uses of Copyrighted Works (CONTU) set the stage for the absorption of new technology under copyright law. Much has happened since then, including the development of the Internet; in 1976 it would not have been possible to envision the types of changes that were to occur.

Subsequent reports and recommendations of the American government offer significant insights into how the changing technological realities were perceived and addressed in legislative policy. Debora Halbert takes up a detailed analysis of the Office of Technology Assessment report presented in 1985 and the NII findings of the 1990s, tracing the legislative response to new technology to its fruition in the form of the NII Copyright Protection Act of 1995. A comparison of the reports makes clear that between 1985 and 1994 a desire to increase control over intellectual property developed.

1985 Office of Technology Assessment (OTA) Report

The Office of Technology Assessment (OTA) wrote an extensive report in 1985 detailing the impact of technology on intellectual property. The report began by describing intellectual property as an outgrowth of eighteenth-century technology, and defined IP as dynamic in nature, shifting in response to social, cultural and political circumstances. It focussed specifically on authorship as defined by the emerging technological changes. Unlike the CONTU (1976) report, which clearly designated computers as objects and rejected any claim that they could hold authorship, the OTA report looked at the ability of technology to produce new forms of authorship.

The OTA report left room for the claim that authors/creators may not be as distinct from users as originally thought. In the case of interactive computer technology, the user became the author because the program was designed to blend the author’s and the user’s ideas. The computer was seen as a tool that could blend these distinctions into new possibilities.

The recommendations of the OTA group, however, did not take into account the full potential of the technological landscape. They concluded that Congress should either expand the existing agencies or create a new central agency to deal with intellectual property issues. This agency could monitor changes in technology and assess how traditional copyright doctrine might be applied to the emerging scenario. The report also delved into the need for public support for the enforcement of new legislation and advocated public education as crucial towards that end.

The Subcommittee on Patents, Copyrights and Trademarks joined the Subcommittee on Courts, Civil Liberties and the Administration of Justice in April 1986 to hear the final report of the OTA on Intellectual Property Rights in an Age of Electronics and Information. D. Linda Garcia spoke about the far-reaching changes brought about by the new technologies and argued that Congress must consider who would benefit from the elaboration of an intellectual property system designed around new technologies. New technology problematises, among other things, authorship, private use, intangible works, educational use and the ethical dimensions of intellectual property use.

Some legal scholars disagreed with the conclusions of the report, arguing that the intellectual property system was flexible enough to adapt to the new technologies (one of the conclusions of CONTU). At the heart of Goldstein’s disagreement with the report was the issue of authorship.

The report stated that “copyright law, based on originality of ideas and individual authorship, may become too unwieldy to administer when works involve many authors, worldwide collaboration and dynamically changing materials.” The promise that the OTA saw in this potential is clear from the introductory paragraph of Garcia’s presentation:

“Our report is a product of the times; it is a jointly authored work, which has benefited from the collaboration, comments and review of over three hundred people. These contributions have come from all over the country, and they represent a wide variety of perspectives.”

Instead of claiming proprietary authorship of the report, Garcia embraced the possibility of collaborative work. Goldstein, however, perceived this as a problem and conformed to a “sovereignty impulse”: his response to the new technology was to use it to monitor and regulate all uses of copyrighted works so as to be able to allocate royalties.

The OTA report remained inconclusive on many counts and ultimately recommended that a reserved attitude towards technology be adopted, as much would change over the next decade. Its ambiguity on the protection of intellectual products made it unpopular in both legal and business circles. With the expansion of the information economy, information once shared as a free resource came to be treated like a consumer item. The governmental recommendations for the new National Information Infrastructure (NII) and intellectual property were a further step towards stronger protection.

National Information Infrastructure

President Bill Clinton and Vice President Al Gore made the construction of the National Information Infrastructure (NII) a priority for their administration during the 1990s. The NII task force helped redefine copyright for the Internet, and the NII Copyright Protection Act of 1995 incorporated some of the task force’s recommendations into new legislation designed to control the Internet. The preliminary draft of the Report of the Working Group on Intellectual Property Rights (the Green Paper) was finalised in September 1995 with the publication of the final report, called the White Paper. In making this report, the working group clearly sided with traditional copyright interests and ignored the potential for new types of information exchange.

The Green Paper faced strong criticism for virtually eliminating fair use in the electronic environment and for over-regulating the use of copyrighted information on the Internet. The White Paper backed away from some of the earlier report’s more controversial proposals but generally restated its findings. It proposed some key definitions and findings:

  1. It defined “transmission” as a form of distribution because of the ease with which copyrighted works could move through the Internet. The distribution rights belonged to the owners of the copyright, and such a definition of transmission would make electronic exchange subject to copyright rules and owner approval. It significantly expanded owners’ control over the copyrighted work and made the electronic equivalent of loaning a book a copyright violation. The definition of “public” was also reinterpreted in the report: the task force described public viewing as any user browsing a copyrighted document. Such definitions narrowly circumscribed public rights and made virtually any use of the Internet a copyright violation.
  2. Another aspect to consider is the report’s position on the first sale doctrine. The first sale doctrine traditionally gave the purchaser of a copyrighted work control over its future use unless those rights were specifically retained. According to the Green Paper, the first sale right would not apply to electronic transmission because a copy of the work remained with the original owner; the White Paper likewise argued that a copyright violation takes place if a copy remains with the original owner. In this way, the report expanded copyright control over not just access to the work but also its consumption by the end user.
  3. The third important aspect of the report was its approach to fair use and how it applied to the Internet. The Green Paper limited the concept of fair use in an NII setting, essentially abolishing it as a meaningful concept. Public testimony had an impact on this treatment of fair use, and the White Paper convened a conference on fair use responsible for drafting fair use guidelines for the Internet. The working group nevertheless continued to adhere to its belief in strong protections against overly broad fair use guidelines. The conference on fair use was unable to reach agreement on what types of guidelines should apply to the Internet; the summary of the conference suggested that the only possibility was an “end run” in which users pay for everything they use. The question then becomes how much users must pay, not what constitutes fair use.
  4. A final aspect of the report was the liability of online service providers. The White Paper refused to limit the liability of service providers, arguing that they were in a better position to police traffic on their networks than copyright owners.

The paper referred to fair use and other users’ rights as a “tax” on copyright holders. It overstated the rights of copyright owners, distorting the balance between private incentive and public good. According to Halbert, it succeeded in bringing a more powerful version of copyright to the Internet than had existed to protect traditional copyrighted works. In November 1995, the Courts and Intellectual Property Subcommittee of the House Judiciary Committee and the Senate Judiciary Committee held a joint hearing to discuss the copyright bill introduced simultaneously in the House and the Senate. The resulting legislation was the NII Copyright Protection Act of 1995.

The rhetoric of copyright often refers to the balance between producers’ rights and the public interest; it is one of the mainstays of the copyright story that the public interest is served by the copyright system. The new definition of public interest supported by the NII appeared to emphasize the creation of jobs through intellectual property industries rather than the promotion of the arts and sciences. The task force viewed users as consumers who would, in the absence of proprietary systems, stop consuming intellectual products, not as citizens with rights to share information and ideas freely. The White Paper paid no attention to the public interest concerns of the copyright system.

WIPO and the Berne Convention

Historically, the European countries had dominated the Berne treaty process, producing proposals opposed by American copyright owners. The United States, a relative newcomer to Berne, took this opportunity to set the international copyright agenda: the Clinton Administration placed proposals virtually identical to those in the White Paper on the World Intellectual Property Organization’s agenda for revising the Berne Convention. American representatives attempted to sidestep the lacunae in American legislation and the loopholes left by the case law by bringing in strict international legislation, which would bind the U.S. as well as other member countries to heightened minimum standards under the Berne Convention. The delegates in Geneva considered three treaties; they approved two of them and tabled the other for further consideration in future meetings.

The WIPO Copyright Treaty provides that computer programs will be “protected as literary works,” thereby codifying the specific kind of protection granted to computer programs. It provides additional protections deemed necessary due to advances in information technology since the formation of previous copyright treaties. The draft protocol, however, treated even the copying of software into Random Access Memory (RAM) as potentially infringing reproduction. Representatives of telephone companies and Internet service providers explained to delegates that the proposed temporary copying provisions ignored “the reality of the digital world,” in which copyrighted works by the millions are daily reproduced through temporary storage in the memory circuits of telecommunications equipment. Eventually, the diplomatic conference dropped Article 7 and replaced it with an equivocal “Agreed Statement” that the reproduction right fully applies in the digital context and that digital storage of a copyrighted work constitutes reproduction, but it left open the question of whether temporary copies also come within the reproduction right.

The anti-circumvention proposal was part of the U.S. submission to the December 1996 WIPO conference, where it fared somewhat better than the temporary copies proposal. Several delegations, most prominently from the African countries, objected that the proposal was overly broad and would bring too many innocent devices within the scope of the law. The Diplomatic Conference ultimately settled for a substantially watered-down provision requiring treaty parties to provide “adequate legal protection and effective legal remedies against the circumvention of effective technological measures.” The treaty’s “one size fits all” standard, applied to all signatory countries despite widely differing stages of economic development and knowledge industries, has been widely criticised.

The second Berne treaty, the WIPO Performances and Phonograms Treaty, dealt with music. Through this treaty, U.S. copyright law for the first time adopted a codification of a performer’s “moral rights.” This was a direct response to the 1994 case of Campbell v. Acuff-Rose Music, Inc., in which the U.S. Supreme Court held that parody could fall within the fair use doctrine. In that case, 2 Live Crew had created a parody of Roy Orbison’s song “Oh, Pretty Woman” and, when sued for copyright infringement, claimed a fair use exception. The court reasoned that their rendition of the song had “transformative authorship” and could be considered an original in itself, since it involved creativity and labour. Through the WIPO Performances and Phonograms Treaty, a composer or even a performer can claim a right to be identified as the performer and can prevent any “distortion, mutilation or other modification of his performances that would be prejudicial to his reputation.” In other words, performers would have veto power over parodies of their work.

The third treaty under consideration would have created a whole new area of “intellectual property” law by protecting databases from piracy and unauthorized use. Basing itself on the “sweat of the brow” principle, the proposal sought to provide a 25-year term of protection for databases that resulted from the investment of resources. It also proposed that the database protection be renewed each time new data was added.

Opposing the database protection measures were representatives of underdeveloped nations, who were concerned about the concentration of database access in western nations; scientists concerned about easy and inexpensive access to data; and, of course, librarians. The proposal could limit scientific exploration and debate on public policy, and render information a resource available only to wealthy people in wealthy nations. The more dangerous aspect of the proposal being tabled was that databases would be subject to a renewed protection term of 25 years each time new data was added or commentaries revised. In effect, this would subject the database to perpetual protection, creating a strict monopoly over the information contained therein. Given this severe negative potential, the proposal was not passed by the Geneva conference.

An overview of the proposals and discussions of the American and international committees highlights the tendency to strengthen copyright regimes globally. Since its entry into the Berne Convention, the U.S. has dominated discussions on copyright policy. The move has been towards international harmonisation of the laws governing technological transfers and cultural exchanges through the new media. The Acts enacted during the 1980s and 90s also reflect the tendency to bring digital technology within the purview of copyright law. It is useful to look at some of the Acts passed in the U.S. during this period:

Record and rental legislation: Even before the availability of home digital recording technology, the sound recording industry was concerned about the dangers of home copying on analog recorders. In 1984, the industry persuaded Congress to amend the first sale doctrine, which affords the purchaser of an authorised copy of a copyrighted work the freedom to do as they wish with that copy, to prohibit the rental of sound recordings. The amendment also covered computer software, which had been brought under the ambit of copyright law in 1976.

The Record Rental Amendment of 1984 and the Computer Software Rental Amendments Act of 1990 both amended Section 109 to prohibit owners of copies of software or phonorecords from distributing those copies through rental, lease, or lending, or by any other act or practice in the nature of rental, lease, or lending, unless authorised by the copyright owners, with an exemption for non-profit educational institutions and non-profit libraries.

Audio Home Recording Act 1992: Already threatened by analog recording, the recording industry saw a greater danger in digital technology. In the 1980s, just after record labels released their catalogues in unencrypted digital formats (CDs), consumer electronics companies sought to introduce a host of new products that would enable consumers to make digital copies of audio recordings. DAT was available as early as 1987 in Japan and Europe, but device manufacturers delayed introducing the format to the United States in the face of opposition from the recording industry.

Despite its strong hand, the recording industry failed to convince consumer electronics companies to voluntarily adopt copy restriction technology. The recording industry concurrently sought a legislative solution to the perceived threat posed by perfect multi-generation copies, introducing legislation mandating that device makers incorporate copy protection technology as early as 1987. These efforts were defeated by the consumer electronics industry along with songwriters and music publishers, who rejected any solution that did not compensate copyright owners for lost sales due to home taping. A year later, the songwriter Sammy Cahn and four music publishers, unhappy with the absence of a royalties provision in the Athens agreement, filed a class action copyright infringement suit against Sony. The plaintiffs sought declaratory and injunctive relief that would have prevented the manufacture, importation or distribution of DAT recorders or media in the United States. The suit brought Sony to heel. In July 1991, Sony, as part of a larger agreement between the recording industry and consumer electronics makers, agreed to support legislation creating a royalty scheme for digital media. In exchange, Cahn and the publishers agreed to drop the suit.

With all the major stakeholders satisfied, the bill easily passed both houses of Congress. President George H. W. Bush signed the AHRA into law in 1992.

For the first time in the history of copyright, the government imposed a technological design constraint on the manufacture of copying devices. The mandated control (the Serial Copy Management System) allowed users to make digital copies directly from an original compact disc, but not further copies from the digital copies themselves. The Act also prohibited the importation, manufacture and distribution of copying devices that did not incorporate this technological control.

As a means of compensating copyright owners for the copying these new technologies could enable, the Act required manufacturers and importers of digital audio recording equipment and of blank tapes, discs and other storage media to pay a percentage of their transfer price (2% for digital audio devices and 3% for storage media) into a royalty pool, which is distributed to owners of musical compositions (one third) and sound recordings (two thirds) based on prior-year sales and air time. The Register of Copyrights administers this mechanism, with provisions for arbitration of disputes.
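The royalty arithmetic can be illustrated with a quick sketch (the rates and the one-third/two-thirds split are those described above; the sample sales figures are invented):

```python
# Illustrative sketch of the AHRA royalty pool arithmetic described above.
# The rates and the split follow the text; the dollar figures are invented.

DEVICE_RATE = 0.02   # 2% of transfer price for digital audio recording devices
MEDIA_RATE = 0.03    # 3% of transfer price for blank digital storage media

def royalty_pool(device_sales, media_sales):
    """Total payable into the royalty pool for given transfer-price totals."""
    return device_sales * DEVICE_RATE + media_sales * MEDIA_RATE

def distribute(pool):
    """Split the pool: one third to musical-composition owners,
    two thirds to sound-recording owners."""
    return {"compositions": pool / 3, "sound_recordings": pool * 2 / 3}

pool = royalty_pool(device_sales=1_000_000, media_sales=500_000)
print(pool)              # 35000.0
print(distribute(pool))
```

The sketch leaves out the further division among composers, publishers, record companies and artists, which the Act delegates to the administrative mechanism mentioned above.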

Digital Performance Rights in Sound Recording Act 1995: When the Internet opened up a new distribution channel for sound recordings – webcasting – record labels seized the opportunity to establish a performance right, even if only with respect to digital audio performances. They voiced great concern that this new medium could seriously disrupt the market for sound recordings. If consumers could access and possibly even download high quality recordings of their favourite songs over the Internet whenever they desired, then there would be little demand for the retail record.

Interestingly, the prospect of webcasting and other online subscription services united traditional broadcasters and the sound recording industry in support of the Digital Performance Right. Recording artists and record labels could at least partially rectify the omission of a public performance right in sound recordings, while traditional broadcasters could impose new licensing requirements (and costs) upon new competitors. Since the new industry was not yet well developed, it lacked the political clout to block this new right, although owners of musical composition copyrights (and their performing rights societies, ASCAP, BMI and SESAC), who did not wish to empower another set of music licensing claimants, succeeded in constraining the reach of the new right along a number of dimensions. Furthermore, Congress also sought to ensure that the new right would not unduly interfere with the development of new digital transmission business models.

The ultimate compromise amended sections 106 and 114 of the Copyright Act to establish an exclusive right to perform sound recordings “publicly by means of a digital audio transmission.” The practical effect of this provision is that record companies holding a right in a sound recording can demand a royalty on digital “performances,” which include downloading, uploading, and streaming of digital transmissions.

No Electronic Theft Act 1997: Congress enacted the No Electronic Theft Act to strengthen criminal prosecution and penalties against those who distribute copyrighted works without authorisation. It specifically responds to the ruling in United States v. David LaMacchia, in which the court held that a computer bulletin board operator who provided users with unauthorised copies of copyrighted software free of charge could not be prosecuted under federal copyright law because the government could not show that the operator benefited financially from the infringement. The NET Act closed this loophole by criminalising various intentional acts of copyright infringement without regard to whether the defendant received any financial benefit from them. It also stiffened the criminal penalties applicable to copyright infringement committed through electronic means. A person found guilty under these provisions could receive three years in prison for a first offence and be forced to pay a substantial fine, even for illegally distributing sound recordings of relatively modest value.

Digital Millennium Copyright Act: The Digital Millennium Copyright Act (DMCA) was enacted in 1998 to implement two 1996 treaties of the World Intellectual Property Organization (WIPO). It criminalized the production and dissemination of technology, devices, or services intended to circumvent measures that control access to copyrighted works. It also criminalized the act of circumventing an access control, whether or not there is actual infringement of copyright itself. In addition, the DMCA heightened the penalties for copyright infringement on the Internet. Passed on October 12, 1998 by a unanimous vote in the U.S. Senate and signed into law by President Bill Clinton on October 28, 1998, the DMCA amended Title 17 of the United States Code to extend the reach of copyright, while limiting the liability of the providers of on-line services for copyright infringement by their users.

One of the most discussed additions to copyright law in recent times, the DMCA and its provisions have been the subject of much debate, specifically its anti-circumvention provisions and its regulation of copyrighted works through digital rights management measures that control the access to and use of specific works.

Rights and Wrongs: analysing the legal discourse

One of the strongest proponents of the expansion of copyright law into digital technology was Stanford Law Professor Paul Goldstein. In his 1994 book Copyright’s Highway: The Law and Lore of Copyright from Gutenberg to the Celestial Jukebox, Goldstein outlined an optimistic vision of the digital moment and its potential for both producers and consumers. In the book, he elaborates the views he expressed during the presentation of the OTA report as a member of the OTA Advisory Board. Highlighting the possibility of precise and constant monitoring of the use of copyrighted works made possible by digital technology, he saw on the horizon a celestial jukebox through which infinite gigabytes of information could be streamed directly into the user’s house with minimal transaction and storage costs. He was speaking, of course, of the vast potential of the Internet, which he applied to the traditional use of copyright in the exchange of intellectual goods.

Taking from Clark’s idea of using technology to monitor all uses of cultural works, Goldstein elaborates on the idea of the Celestial Jukebox expressed in the Lehman Group’s White Paper. Goldstein saw three vestiges of traditional copyright policy impeding his pay-per-view utopia: fair use; private, non-commercial, non-infringing copying; and the idea-expression dichotomy. Goldstein’s reading of the law and new technologies was informed by a strict economic analysis of the copyright system. Under this analysis, broad appeals to values beyond material concerns (culture, beauty, dignity, democracy) invite inefficiency into social, political, and economic systems. These extra-economic principles are not bad ideas per se, according to Law and Economics thinking, but proposals that appeal to them should be justified by tests of their utility. Within this school of thought, fair use and home copying have no inherent educational or democratic value. Fair use is not a good idea per se, but only a necessary flaw in what might otherwise be a perfectly efficient and rational market for cultural goods. Fair use exists simply because the “transaction costs” of restricting copying in homes and schools would be too high to justify enforcement.

If Home Box Office or its parent Time Warner had to negotiate with a consumer every time she made a videotape copy of The Sopranos for later viewing, the consumer would probably not bother recording the show. Perhaps out of frustration she would decide not to watch the show. The transaction costs of time, money, and stress would not justify the small reward the consumer gets from home recording or the small return the company would get from charging each time the consumer recorded the show. Similarly, the transaction costs of regulating every time a teacher makes a copy of a newspaper article for thirty students would be too high to justify the hassle of extracting permission and payment. Imposing high transaction costs would only chill this use. Therefore, the Conservative Law and Economics theorists argue, society benefits from fair use and private, non-commercial domestic copying only because producers can’t exact transaction costs easily and efficiently. They can’t monitor every use. They can’t send a bill through the mail and expect timely payment every time someone records a show. But Goldstein argued that the digital moment and the potential of the Celestial Jukebox reduce transaction costs to just pennies per use.

Goldstein’s proposal of extending copyright to cyberspace gives absolute control to the owner at the cost of the public good that is meant to be the central tenet of the law. Under his strict and linear economic analysis, fair use was unnecessary, as it did not fit into the ‘incentive for creativity’ system. Owners could deny permission for specific uses, such as unfavorable reviews and expressions of dissenting viewpoints, heightening the potential for corporate censorship. This vision of the copyright system required a strict regime of monitoring and royalty payment, and Goldstein held that the existing system, expanded to the digital environment, was adequate for treating intellectual and cultural goods in the digital world.

While Goldstein holds the current laws adequate for the changing scenario, Andres Guadamuz Gonzalez predicts a shift from copyright to other systems to ensure the proprietary protection of intellectual works. He considers the current legislation inadequate to control piracy on the Internet and advocates a long-term structural change in how we perceive intellectual property theft. Equating intellectual property with real, tangible property, Gonzalez supports the extension of the same norms of exclusionary ownership to ideas on the Internet. The DMCA, according to Gonzalez, is a “positive step in regulating property rights in cyberspace, but it is not yet clear how effective it will be.”

Reiterating the concerns of the culture industry, Gonzalez highlights the dangerous potential for piracy of protected works on the Internet. The connectivity of the Internet escalates the threat of piracy, making even personal uses liable to cause huge losses to the industry through a ripple effect of copying and posting online. The difficulty of regulating remote networks and the inapplicability of the law across geographical terrain pose particular problems for Gonzalez. Towards this end he proposes multiple fences that would seek to limit piracy to the extent possible. As proposed by the government committees and enacted in the DMCA, multi-layered protection can be used to deter people from pirating cultural works. This includes legislative fences, making laws more stringent and putting more control in the hands of intellectual property owners; fences built in the courts, through active and aggressive action against infringement, and the removal of gaps and loopholes by making registration more precise and increasing ISP liability; and technological fences, using encryption, electronic signatures and devices to control the specific uses that can be made of works. Ultimately, Gonzalez considers the system of protections granted by copyright inadequate when faced with the problematic nature of the Internet, which eschews regulation and control. “More protection will probably make it more difficult for people to infringe copyright online, but it will not stop it from happening. The Internet is too vast; the possibilities for infringement are too varied.”

Subscribing to the traditional narrative of copyright, Gonzalez sees the Internet and digital technology as a dangerous space that harbours escalating piracy and the theft of other people’s works. This narrative collapses the threats of infringement and the threats of technology into a single linear strand, obfuscating the possibilities for radical change in the system of creativity and exchange in new technological realms. Such an analysis draws on an economic framework that posits the linear relationship between incentive and reward on which the current copyright regime stands.

Jessica Littman offers a radically different reading of the trends in copyright legislation and the expansion of the law into new technology. She points to the changing metaphors for copyright in her analysis of the history of legislation around intellectual property, contending that the primary model of copyright has changed drastically from its inception to its expansion into the digital arena. At the outset it hinged on the notion of “quid pro quo,” or exchange: “The public granted authors limited exclusive rights in exchange for immediate public dissemination of the work and the eventual dedication of the work in its entirety to the public domain.” That model gradually evolved into one of a bargain, wherein the public granted limited rights to authors as a means of advancing the public interest. The balance between protection and the public interest was the central tenet of copyright. The limited protection model offered owners only limited rights over particular uses of their works. This balance has steadily tipped in favour of the owners of copyrights, and the model has been replaced with one that emerges from an economic analysis of law, characterising copyright as a system of incentives. This system posits a direct relationship between the extent of copyright protection and the incentive for the creation of new works. A direct corollary of this metaphor would be that the best practice is for copyrights to last forever, a call that has often been made.

The dominant metaphor of copyright today, then, is neither of an exchange nor of a bargain. It is one of control, where the owners of copyrights decide how, where and under what conditions access to their work will be allowed. The question now is of the rights of the property owner to protect what is rightfully hers. (That allows us to skip over the question of what exactly it is that ought to be rightfully hers.) The current metaphor, according to Littman, is reflected both in recent copyright amendments now on the books and in the debate over what those laws mean and whether they go too far. The gradual erosion of the first sale doctrine, the fair use principle and the remarkable expansion of piracy to include any “unlicensed” activity are examples of the expansion of control.

Littman contends that, historically, Congress has relied upon the affected industries to shape copyright legislation. It is the tussle of interests between various industry players, such as the composers and piano roll manufacturers, or the authors and the motion picture industry, that has determined the provisions of the law. Congress is supposed to ensure the public interest but has mostly been happy to go along with whatever the represented parties can agree upon; “to say that the affected industries represented diverse and opposing interests is not to say that all interests got represented.” Eventually, this led to statutes that granted the broadest possible rights with exceedingly narrow exemptions. Looking specifically at the 1976 Act, it becomes clear that its narrow, specific language “was bound to fail the future in predictable ways.”

The laws that emerge from inter-industry negotiations are designed to solve current problems and divisions of market share, with no vision for the future and what technological innovation may bring. The Audio Home Recording Act, for instance, emerged out of negotiations between copyright owners and the manufacturers of digital audio recording equipment. The recorders and audio media such as tapes were taxed and the resulting benefits distributed among composers, music publishers, record companies and artists according to a complicated and vague formula. The required technical fix disabled consumer digital recording systems and was perhaps one of the reasons digital recorders and tapes failed to sell well. Computers, and computer discs, which were excluded from the devices specifically defined in the Act, soon overtook the market.

The real constituency of the Copyright Office has been the owners of copyrights and not the public, giving the laws a specific bent and agenda. The White Paper and Green Paper produced by the NII commission proposed that the enhancement of owners’ rights was without question in the public interest because it was a necessary first step in the creation of an information superhighway. Laws, says Littman, must be designed from the vantage point of a “hypothetical benevolent despot with the goal of promoting new technology” instead of from the point of view of current market leaders, whose only aim is to maintain their hegemony and market position. Such a position would recognise that copyright shelters and exemptions have, historically, encouraged the rapid growth of new media expression. By freeing content providers from well-established rules and proprietary practices, we can allow new players to enter the game and displace monopoly control. Within the “bargain” that copyright describes, the balance should be tipped in favour of users by asking what public good would come from new laws instead of asking how they would benefit the owners of copyrights. This would mean making the right to make copies non-fundamental to the bundle of rights that copyright encompasses. One of the radical changes brought by digital technology is that it makes reproduction much more economical (almost free) and accessible than in the era of print.

For the average user, sharing an article or a music file online is the Internet version of lending a book or a CD to a friend. The common habits of information sharing and exchange of ideas transfer more easily to the Internet than the exclusionary discourse of private property does. The bent of the current legal provisions, as well as of their enforcement, however, is towards strict control of copyrighted works to maximise the protection given to copyright owners. Looking at some cases of copyright enforcement in the digital arena, the conflict between law and technology becomes even more apparent. In the process, a clear divide also appears between public opinion and the interests of copyright owners.

Securing the Digital Millennium

At a meeting of the Copy South research group in Kerala in 2008, Debora J. Halbert spoke about a disturbing new trend in law enforcement in the U.S. Instead of looking at nations as pirates or hubs of piracy, the copyright industries and government machinery were beginning to look at individual people as pirates.

Use of copyrighted works by private individuals had always been considered fair use, and enforcement agencies had never questioned that use. The transaction costs and effort involved in such action, combined with the marginal returns, made it economically unfeasible. As Goldstein predicted, however, digital technology changed this practice for two main reasons. First, technological advancement made it possible to track information flows on the Internet, in some cases even back to the specific user involved in an exchange. Second, and perhaps more important, digital technology heightened the threat posed by private use by connecting millions of individual users to each other and allowing them to share information. The fears of the culture industry were expressed in the White Paper: “just one unauthorized uploading could have devastating effects on the market for the work.” As the culture industry aggressively protects its markets, we see copyright legislation going back to its early beginnings in the censorship of ideas and expressions.

Recent cases of copyright enforcement highlight the trend that Halbert indicated, exposing the rift between copyright law, free speech and privacy. They offer insights into the use of digital technology and Internet-enabled cultural exchange by the general public, and the repeated attempts to curb this freedom.

Secure Digital Music Initiative (SDMI)

On December 15, 1998, less than two months after passage of the DMCA, the major record companies announced the Secure Digital Music Initiative, a joint effort with leading Internet, computer, and home consumer electronics companies to design a standard technology that would block the unauthorized use of digitally recorded music.

SDMI’s initial efforts centered on the design of a common security architecture and specifications that would ensure compatibility across equipment from different manufacturers. The collaborators agreed that SDMI’s principal security technology would be watermarking, a technique that entails the insertion into recorded music of a faint background sound that, though undetectable by a listener, is virtually indelible, much like a watermark on stationery. A CD player engineered to detect the watermark will neither play nor record the watermarked work other than on specified conditions.
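The watermarking idea can be illustrated with a deliberately naive sketch: hide a known bit pattern in the least significant bits of audio samples, where the change is inaudible but detectable by a compliant device. This toy scheme is my own illustration, not the Verance technology SDMI adopted, and unlike a real watermark it would not survive compression or re-encoding:

```python
# Toy least-significant-bit "watermark": not the SDMI/Verance scheme,
# just an illustration of hiding an inaudible mark in audio samples.

MARK = [1, 0, 1, 1, 0, 0, 1, 0]  # arbitrary 8-bit mark (an invented example)

def embed(samples, mark=MARK):
    """Overwrite the LSB of the first len(mark) samples with the mark bits.

    Each sample changes by at most 1, far below audibility for 16-bit audio.
    """
    out = list(samples)
    for i, bit in enumerate(mark):
        out[i] = (out[i] & ~1) | bit
    return out

def detect(samples, mark=MARK):
    """A 'compliant player' checks whether the mark is present."""
    return [s & 1 for s in samples[: len(mark)]] == mark

audio = [1000, 1001, 998, 1003, 997, 1002, 1005, 999, 1004]
marked = embed(audio)
print(detect(marked))  # True
print(detect(audio))   # False: the natural LSBs do not match the mark
```

A real scheme spreads the mark redundantly through the signal so that it survives filtering and format conversion, which is exactly what the Hack SDMI Challenge described below invited researchers to defeat.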

Once they had developed an effective watermarking technique, the group’s executive director, Leonardo Chiariglione, in a canny effort to test it, issued a “Hack SDMI” Challenge, offering a $10,000 prize to be shared by anyone who could liberate four pre-selected musical samples from their watermarks. A participant would submit its clean copy to an “oracle” posted on the SDMI website, which would respond by email indicating acceptance, if the watermark had been removed without degrading sound quality, or rejection, if it had not.

Ed Felten, an associate professor of computer science at Princeton University, decided to accept the challenge. Felten had been researching security and privacy issues in consumer electronics and computer software, and welcomed the Challenge as an opportunity to investigate watermarking techniques without fear of industry reprisal. Felten was joined in attacking the musical samples by another Princeton professor and three graduate students, as well as by three researchers at Rice University and one from Xerox PARC in Palo Alto.

Despite being given very little or no information about the watermarking technologies other than the audio samples, and having only three weeks to work with them, Felten and his team managed to modify the files sufficiently that SDMI’s automated judging system declared the watermark removed.

SDMI did not accept that Felten had successfully broken the watermark according to the rules of the contest, noting the requirement that the files lose no sound quality. It claimed that the automated judging result was inconclusive: a submission that simply wiped all the sounds off the file would have successfully removed the watermark, but would not meet the quality requirement. The hacks also had to be tested for replicability; it would mean little if they could not be applied to songs other than the ones used in the Challenge. Felten concluded that only peer review and publication of his results could sustain his claim. He had already prepared a paper describing the results, and in late February 2001 it was accepted for presentation at an Information Hiding Workshop, a popular venue for researchers studying technologies like watermarking, in Pittsburgh. As the April 26 conference approached, Felten feared that, even though he had declined to sign the Challenge’s confidentiality agreement (thus forgoing the $10,000 cash prize), he might still face liability under the DMCA if he published his results. If exposed, SDMI’s investment in the watermarking technology would go to waste, and it could hardly be expected not to react to such a public disclosure.

On April 9, Felten received a summons from his department chair, who had been copied on a letter to Felten from Matt Oppenheim. The letter noted Felten’s plans to present a paper at the Information Hiding Workshop later in the month and urged him “to reconsider your intentions and to refrain from any public disclosure of confidential information derived from the Challenge and instead engage SDMI in a constructive dialogue on how the academic aspects of your research can be shared without jeopardizing the commercial interests of the owners of the various technologies.” It also raised the possibility of enforcement actions under federal law, including the DMCA. A copy of the letter also went to the Workshop’s Program Chair.

At the conference, instead of presenting his paper, Felten read a short statement saying that the RIAA, SDMI, and Verance had threatened legal action over the disclosure, for research purposes, of the protection techniques. This stirred concern at the RIAA; a press release from the organization the following week stated that SDMI “does not – nor did it ever – intend to bring any legal action against Professor Felten or his co-authors.”

On June 6, Felten and his co-authors, represented by the Electronic Frontier Foundation, filed a lawsuit in federal district court in Trenton, New Jersey, against RIAA, SDMI, Verance, and U.S. Attorney General John Ashcroft challenging the DMCA as an unconstitutional restraint of free speech. Joining Felten and his co-authors as a plaintiff in the case was the USENIX Association, an organization of computing professionals that had accepted Felten’s SDMI Challenge paper for presentation at a symposium. The defendants responded that, as reflected in the earlier RIAA press release and later communications directly to the plaintiffs, they had no intention to sue the plaintiffs, in the past or in the future, so there was no live case or controversy, as required for a federal court to take jurisdiction. On November 28, District Judge Garrett E. Brown ruled for the defendants, dismissing the action. “The irony,” Judge Brown noted, “is that the defendants, having said we’re not going to sue you, the plaintiffs decided apparently to catalyze this action by bringing a lawsuit themselves.” Delivering his opinion orally, Brown observed: “Plaintiffs liken themselves to modern Galileos persecuted by the authorities. I fear that a more apt analogy would be to modern-day Don Quixotes feeling threatened by windmills which they perceive as giants.” The Electronic Frontier Foundation’s Legal Director, Cindy Cohn, tried to put a positive spin on the result. “The statements by the government and the recording industry indicate that they now recognize they can’t use the DMCA to squelch science. If they are as good as their word, science can continue unabated. Should they backslide, EFF will be there.”

DeCSS

The DMCA was a natural target for free speech claims, for it lacks the many safety valves, such as fair use and the idea-expression distinction, that have historically immunized copyright from First Amendment attack. On the same day that Judge Brown dismissed Ed Felten’s First Amendment challenge in Trenton, the Second Circuit Court of Appeals in Manhattan affirmed a lower court decision rejecting another First Amendment attack on the DMCA. The case, Universal City Studios, Inc. v. Corley, pitted CSS (Content Scramble System), used to protect motion pictures recorded on DVDs from being played or copied on unlicensed DVD players, against DeCSS, a computer program that decrypted CSS-coded DVDs so they could be freely played and copied on unlicensed equipment. (A Norwegian teenager, Jon Johansen, and two Internet collaborators had developed DeCSS by dismantling a licensed DVD player to ascertain the CSS encryption keys.)

Plaintiffs, the eight major motion picture studios, charged that the defendant, Eric Corley, had violated the DMCA by publishing downloadable computer code for DeCSS on the website of his magazine, 2600: The Hacker Quarterly, as well as by providing links to DeCSS websites. The defendants answered that their conduct was protected under copyright law’s fair use defense and the First Amendment’s free speech guarantee. Federal District Judge Lewis A. Kaplan had already ruled against the defendants in an exhaustive opinion. Affirming Judge Kaplan’s decision, the Court of Appeals observed that although communication “does not lose constitutional protection as ‘speech’ simply because it is expressed in the language of computer code,” the functionality of code necessarily constrains the scope of its protection under the First Amendment. “Just as the realities of what any computer code can accomplish must inform the scope of its constitutional protection, so the capacity of a decryption program like DeCSS to accomplish unauthorized – indeed, unlawful – access to materials in which the Plaintiffs have intellectual property rights must inform and limit the scope of its First Amendment protection.”

DeCSS was only the beginning of the motion picture studios’ concerns. Soon enough, advances in compression technology, cheaper and broader bandwidth, and falling prices for digital storage would make it possible for home users to exchange motion picture files with the same ease as they shared sound recordings. Indeed, as early as 2002, file-sharing services such as Morpheus were offering subscribers access not only to music files but to feature films, including current releases. The problem for the movie companies was not just illicit copies; it was the prospect that free copies of new releases would destroy the carefully timed progression from theatrical release to DVD sales, to home pay-per-view, to free television.

Napster

In 1999, a 19-year-old college student named Shawn Fanning created Napster, a revolutionary system that allowed thousands, even millions, of users to trade their music online. Within months of its release Napster had become a social phenomenon and a massive commercial threat. It was one of the first and most popular peer-to-peer services, although not fully peer-to-peer: it used a central server to maintain lists of connected systems and the files they provided, while actual transfers were conducted directly between machines.
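This hybrid architecture can be sketched in a few lines of Python. The sketch below is a drastic simplification for illustration only; the class and method names are invented and do not reflect Napster’s actual protocol.

```python
# A hypothetical model of Napster's hybrid design: a central server holds
# only the index (who has which file), while the files themselves travel
# directly between peers. All names here are illustrative.

class CentralIndex:
    """The piece Napster ran itself: a searchable directory of peers."""
    def __init__(self):
        self.catalog = {}            # track title -> set of peer addresses

    def register(self, peer_addr, titles):
        """A peer connects and announces the files it is sharing."""
        for t in titles:
            self.catalog.setdefault(t, set()).add(peer_addr)

    def unregister(self, peer_addr):
        """A peer disconnects; its listings disappear from the index."""
        for holders in self.catalog.values():
            holders.discard(peer_addr)

    def search(self, title):
        """Return addresses of peers sharing the title. Only discovery
        is mediated by the server, never the transfer itself."""
        return sorted(self.catalog.get(title, ()))

index = CentralIndex()
index.register("10.0.0.5:6699", ["song_a.mp3", "song_b.mp3"])
index.register("10.0.0.9:6699", ["song_b.mp3"])

print(index.search("song_b.mp3"))   # both peers offer this file
# An actual download would now be a direct connection to one of the
# returned addresses, bypassing the central server entirely.
```

The design choice matters legally as well as technically: because the index lived on Napster’s own servers, the company had a central point of control — the very fact the courts later seized on when ordering it to block infringing material.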

Universities complained that Napster was consuming huge amounts of their bandwidth, and the music industry denounced it as the most blatant sort of piracy: stealing. Napster’s facilitation of the transfer of copyrighted material raised the ire of the Recording Industry Association of America (RIAA), which filed a lawsuit against the popular service almost immediately, on December 7, 1999. Soon, recording artists like Metallica, Dr. Dre and Madonna joined in, angered at finding their own tracks available and being traded on Napster. In 2000, A&M Records and several other recording companies sued Napster (A&M Records, Inc. v. Napster, Inc.) for contributory and vicarious copyright infringement under US copyright law. The music industry made the following claims against Napster:

  1. That its users were directly infringing the plaintiffs’ copyrights
  2. That Napster was liable for contributory infringement of the plaintiffs’ copyrights
  3. That Napster was liable for vicarious infringement of the plaintiffs’ copyrights.

Napster lost the case in the District Court and appealed to the U.S. Court of Appeals for the Ninth Circuit. Although the Ninth Circuit found that Napster was capable of commercially significant non-infringing uses, it affirmed the District Court’s decision, and an injunction was issued on March 5, 2001 ordering Napster to prevent the trading of copyrighted music on its network. On remand, the District Court ordered Napster to monitor the activities of its network and to block access to infringing material when notified of that material’s location.

Napster was unable to do this, and shut down its entire network in July 2001 in order to comply with the injunction. In 2002 it declared bankruptcy and sold its assets. After a $2.43 million takeover offer by the Private Media Group, an adult entertainment company, Napster’s brand and logos were acquired at bankruptcy auction by Roxio, Inc., which used them to rebrand the pressplay music service as Napster 2.0. In September 2008, Napster was purchased by the US electronics retailer Best Buy for US$121 million. No one wanted government regulation of the music business, and no one except the record companies wanted to cede power back to the recording companies and their established system of royalties.

Despite predictions that file sharing would doom the recording industry, there were those who felt just the opposite. Bands like Radiohead and Dispatch, which had not been commercially successful or popular, became so after their music was leaked on Napster’s servers. Such examples suggest that file sharing may actually popularize music through peer reviews and comments, and may create avenues for new players to enter a market fiercely dominated by top-selling celebrity artists.

Teaching Troubles

In 1999, Professor Horacio Potel, who teaches philosophy at the Universidad Nacional de Lanús in Buenos Aires, set up a personal website to collect essays and other works of well-known philosophers, starting with the Germans Friedrich Nietzsche and Martin Heidegger.

One of Potel’s most popular websites, www.jacquesderrida.com.ar, focused on his favourite French philosopher, the Algerian-born Jacques Derrida (1930-2004). The website offered Spanish translations of many of the philosopher’s works, as well as discussion forums, research results, biographies, images and the other pieces of information typical of this type of online resource. Professor Potel wanted to make research material, and years of his own work on the philosopher, available to students in Argentina who otherwise did not have access to many books or materials.

On December 31, 2008, a criminal case was initiated against Potel after a complaint was lodged by the French publishing house Les Éditions de Minuit, which has published only one of Derrida’s books in French. Minuit’s complaint was passed on to the French Embassy in Argentina, and it became the basis of the Argentina Book Chamber’s legal action against Prof. Potel. He is now facing criminal charges of copyright infringement, and if you access the site now, you will find a warning saying “This website has been taken down due to a legal action initiated by the Argentina Book Chamber.”


The utility of the website for educational purposes and as a tool for the dissemination of research failed to qualify it as fair use. The fact that the website did not hamper profits accruing to the publishers, since the books were out of print, also had no bearing: the works were still copyright protected, and the website was technically in violation of the law. In this case, the monopolistic impulse of the publishing industry led to the demise of a valuable resource for students who would otherwise not have quick and easy access to scholarly research for their educational use.

Dmitry Sklyarov

On July 16, 2001, following a complaint from Adobe Systems, a US company, that the copy protection arrangements in its e-book file format were being circumvented by ElcomSoft’s product, Sklyarov was arrested by the FBI as he was about to return to Moscow, shortly after giving a presentation called “eBook’s Security – Theory and Practice” at the DEF CON convention in Las Vegas, Nevada. He was charged, under the terms of the Digital Millennium Copyright Act, with distributing a product designed to circumvent copyright protection measures.

About 100 protesters marched on Adobe’s San Jose headquarters to demand his release from what would have been the first criminal prosecution under the DMCA. The day after his arrest several web sites and mailing lists were started to organize protests against his arrest, many of them under the slogan “Free Dmitry” or “Free Sklyarov”. The main point of these campaigns was that no DMCA violations were committed at DEF CON, and the DMCA does not apply in Russia, so Sklyarov was being arrested for something that was perfectly legal in his jurisdiction. A campaign to boycott Adobe products was also launched.

The company Sklyarov worked for in Moscow, ElcomSoft, soon faced federal criminal charges in the USA (even though the Digital Millennium Copyright Act is only a US law) after Sklyarov agreed to testify in exchange for immunity. Both the company and the programmer were charged with wilfully violating the DMCA by distributing a program that allowed readers to make private copies of e-books. In December 2002, a jury found the company not guilty.

Pirate Bay

The Pirate Bay is a Swedish website that indexes and tracks BitTorrent (.torrent) files. It allows users to search for and download BitTorrent files (torrents): small files that contain the metadata necessary to download the data files from other users. The torrents are organized into categories: “Audio”, “Video”, “Applications”, “Games”, “Other” and “Porn”. Users can register for free by entering their email address. The website allows them to upload torrents as well as comment on existing torrents. Downloading data files from other users is facilitated by the BitTorrent trackers that also run on The Pirate Bay’s servers.
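The metadata inside a .torrent file is serialized in a simple format called bencoding, which can represent integers, byte strings, lists and dictionaries. As a rough sketch (the toy metainfo dict below is illustrative, not a complete .torrent structure), an encoder and decoder can be written as:

```python
# A minimal sketch of bencoding, the serialization format used inside
# .torrent files. The sample metainfo dict is a toy, not a full .torrent.

def bencode(value):
    """Encode ints, byte/str strings, lists and dicts in bencode form."""
    if isinstance(value, int):
        return b"i%de" % value                       # integer: i<digits>e
    if isinstance(value, bytes):
        return b"%d:%s" % (len(value), value)        # string: <len>:<bytes>
    if isinstance(value, str):
        return bencode(value.encode("utf-8"))
    if isinstance(value, list):
        return b"l" + b"".join(bencode(v) for v in value) + b"e"
    if isinstance(value, dict):
        items = sorted(value.items())                # spec requires sorted keys
        return b"d" + b"".join(bencode(k) + bencode(v) for k, v in items) + b"e"
    raise TypeError(f"cannot bencode {type(value)!r}")

def bdecode(data, i=0):
    """Decode one bencoded value at offset i; return (value, next offset)."""
    c = data[i:i+1]
    if c == b"i":                                    # integer
        end = data.index(b"e", i)
        return int(data[i+1:end]), end + 1
    if c == b"l":                                    # list
        i, out = i + 1, []
        while data[i:i+1] != b"e":
            v, i = bdecode(data, i)
            out.append(v)
        return out, i + 1
    if c == b"d":                                    # dict
        i, out = i + 1, {}
        while data[i:i+1] != b"e":
            k, i = bdecode(data, i)
            v, i = bdecode(data, i)
            out[k] = v
        return out, i + 1
    colon = data.index(b":", i)                      # byte string
    n = int(data[i:colon])
    return data[colon+1:colon+1+n], colon + 1 + n

# A toy metainfo dict in the general shape of a .torrent file:
meta = {"announce": "http://tracker.example/announce",
        "info": {"name": "song.mp3", "length": 4096}}
blob = bencode(meta)
decoded, _ = bdecode(blob)
print(decoded[b"info"][b"name"])   # prints b'song.mp3'
```

The point of the format is that the .torrent file itself contains no copyrighted content, only metadata (names, sizes, piece hashes, tracker URLs) describing how to fetch the data from other peers; this separation is precisely what the legal arguments in the Pirate Bay case turned on.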

The website features a relatively efficient browse function which enables users to see what is available in broad categories like Audio, Video, and Games as well as more specific categories like Audio books, High resolution Movies, and Comics. The contents of a category can then be sorted by file name, number of seeds or leechers, dates posted, etc.
It bills itself as “the world’s largest BitTorrent tracker” and is ranked as the 106th most popular website by Alexa Internet. The website is primarily funded by advertisements shown next to torrent listings.

Initially established in November 2003 by the Swedish anti-copyright organization Piratbyrån (The Piracy Bureau), it had been operating as a separate organization since October 2004. The website is currently run by Gottfrid Svartholm (anakata) and Fredrik Neij (TiAMO).

On 31 May 2006, the website’s servers, located in Stockholm, were raided by Swedish police, causing it to go offline for three days. The Los Angeles Times described The Pirate Bay as “one of the world’s largest facilitators of illegal downloading” and “the most visible member of a burgeoning international anti-copyright—or pro-piracy—movement”.
The trial started on 16 February 2009 in the district court (tingsrätt) of Stockholm, Sweden. The hearings ended on 3 March 2009 and the verdict was announced on Friday 17 April 2009: Peter Sunde, Fredrik Neij, Gottfrid Svartholm and Carl Lundström were all found guilty and sentenced to serve one year in prison and pay a fine of 30 million SEK (approximately €2.7 million, or US$3.5 million). The court found the defendants guilty of accessory to crime against copyright law, strengthened by the commercial and organized nature of the activity. The court, however, never presented a corpus delicti: it never attempted to prove that the principal crime was committed, only that someone was an accessory to it. Prosecutor Håkan Roswall cited in his closing arguments a Supreme Court of Sweden opinion that a person holding the jacket of someone committing battery can be held responsible for the battery. In its verdict the court stated that “responsibility for assistance can strike someone who has only insignificantly assisted in the principal crime”, referring to a Supreme Court precedent in which an accountant was sentenced as an accessory to crime even though his actions were not criminal per se. The court rejected the charge of preparation to commit crime against copyright law. All the defendants have appealed the verdict.


On 18 February 2009, the Norwegian socialist party Red began a global campaign in support of The Pirate Bay and ‘filesharers’ worldwide, which lasted until 1 May. Through the website filesharer.org, filesharers were encouraged to upload their photographs as “mugshots” to “let the music and movie industry know who the file-sharers are.” The site encourages participation, urging people to “Upload a picture of yourself and show them what a criminal looks like!” Red politician Elin Volder Rutle, the initiator of the campaign, told the media: “If the guys behind Pirate Bay are criminals, then so am I, and so are most other Norwegians.” The campaign was timed to coincide with the trial against the founders of The Pirate Bay, which began on 16 February. The Pirate Bay has also been shown support from people all over the world, with more than 200,000 members (as of June 2009) joined to the group supporting it on Facebook.

Jammie Thomas-Rasset

In 2007 a jury slapped the single mother with a $222,000 verdict in the case the record labels had brought against her, a verdict she later appealed. Capitol v. Thomas (previously named Virgin v. Thomas) was the first file-sharing copyright infringement lawsuit brought by major record labels to be tried before a jury. The defendant, Jammie Thomas, was found liable in a 2007 trial for infringing 24 songs and ordered to pay $222,000 in statutory damages. The court later granted her motion for a new trial because of an error in its jury instructions. In the second trial, in 2009, a jury again found against Thomas, this time awarding $1,920,000 in statutory damages.

In the trial, the plaintiffs alleged that on February 21, 2005, Jammie Thomas shared a total of 1,702 tracks online. The plaintiffs, however, sought relief for only 24 of these. Thomas’ legal defense was to claim that she had not shared the files. Her lawyer suggested her computer could have been under the control of people elsewhere due to “a spoof, a zombie or some other type of hack”.

The jury was instructed that merely “making available” sufficed to constitute an infringement of the plaintiffs’ distribution right, even without proof of any actual distribution. The issue of actual distribution was raised by the defense on the first day of trial, but the court sustained the plaintiffs’ objection and did not permit the topic to be revisited until jury instructions were prepared just before the end of the trial. Despite disagreement from the defense, the court proceeded to interpret making available as distribution for purposes of instructing the jury.

On October 4, 2007, after 5 minutes of deliberation, the jury returned a verdict finding her liable for willful infringement, and awarded statutory damages in the amount of $9,250 for each of the 24 songs, for a total of $222,000. However, the judge in Thomas’s trial ordered a retrial because recent case law had cast doubt on the theory of “making available” as sufficient for infringement.

In May 2009, during preparation for the retrial, Brian Toder stepped down as Thomas-Rasset’s lawyer. Thomas-Rasset then accepted Kiwi Camara’s offer to defend her pro bono. The retrial was held on June 15, 2009, where the jury was instructed to find that the owners’ copyrights were infringed provided the ownership claims were valid and provided there was an infringement of either the reproduction right (via Thomas-Rasset “downloading copyrighted sound recordings on a peer-to-peer network, without license from the copyright owners”) or the distribution right (via Thomas-Rasset “distributing copyrighted sound recordings to other users on a peer-to-peer network, without license from the copyright owners”). For each song reproduced or distributed, the infringement had to be assessed as willful or non-willful, and damages assessed accordingly. The jury was not required to be specific about which rights (distribution or reproduction) were infringed, and unlike in the first trial, the judge did not attempt to define distribution.

The jury found Thomas-Rasset liable for willful copyright infringement of all the songs, and awarded the plaintiffs statutory damages of $1.92 million ($80,000 per song, out of an allowed range of $750 to $150,000).

On July 6, 2009, Thomas-Rasset filed a motion asserting that the statutory damage award was so disproportionate to actual damages as to be unconstitutional, and announcing her intention to appeal two prior court orders permitting the plaintiffs to present certain evidence at trial. The evidence in question included allegedly incomplete, and therefore inadmissible, copyright registrations, and evidence collected by MediaSentry, which the motion claimed should have been inadmissible because it was collected in violation of state private investigator and wiretap statutes. The motion called for either a retrial with that evidence suppressed, a reduction of damages to the statutory minimum ($750 per song; $18,000 total), or a removal of statutory damages altogether.

The same day, the plaintiffs filed a motion asking for an injunction against Thomas-Rasset, which would require her to destroy all infringing sound recordings on her computer and desist from any further infringement of their copyrights. Their motion claimed that trial evidence established that Thomas-Rasset “was distributing 1,702 sound recordings…to millions of other users” and that the plaintiffs would face “great and irreparable harm” if she were to continue to infringe upon their copyrights.

Unlike most people, Thomas-Rasset never opted to settle with the RIAA, determining that she had the law on her side. Unfortunately for her the jury in this landmark case ruled she did not.

“We appreciate the jury’s service and that they take this issue as seriously as we do,” said Cara Duckworth, an RIAA spokeswoman. “We are pleased that the jury agreed with the evidence and found the defendant liable. Since day 1, we have been willing to settle the case and remain willing to do so.”

In the US, juries can hand out fines of up to an unbelievable $150,000 per infringement of a single song. The average settlement in related RIAA cases is around $3,000, which is peanuts compared to this recent verdict. In this light, many people might be inclined to settle with the RIAA even when they don’t own a computer.

Internet Ideology: Prosecuting Personal Use

Jessica Litman sees the massive number of lawsuits against peer-to-peer sharing as a consequence of the industry latching on to the dangers of easy replication. In this aggressive enforcement strategy, “the efforts to capture control over personal use is moving further and further into the consumers’ homes.” Commercial use has now come to be defined as any “unlicensed activity.” She attempts to locate the space for the reader, listener, viewer and the general public in copyright through the lens of personal use, reinstating the claim that copyright law encourages authorship as much for the benefit of the public as for authors and their distributors. Backup copying, or transferring digital media to a different player, constitutes an unlicensed use that is still within the ambit of personal use. In the age of print publishing, personal use did not require an excuse to be lawful. Fair use cases have mostly covered uses that are public, commercial, or both, so using the fair use factors to ascertain the parameters of personal use is both clumsy and unhelpful. The problem is escalated by the large numbers of private citizens who are connected through the internet. Faced with such numbers, the law deems non-commercial downloaders commercial as well, and in that, “getting for free what you might otherwise have to pay for is a rather broad yardstick.”

The copyright industries savour their role as critical intermediaries in the copyright supply chain. To this end they continually seek to strengthen their legal entitlements by arguing that stronger copyright incentives fuel future creative action. But the reality of creativity is different from the linear economic reward/action relationship that these industries promote. Much of this creativity occurs without reference to the incentive structure provided by copyright law, and demonstrates the potential redundancy of several existing industry functions. The result has been a seemingly intractable tension between established industries and emergent modes of production and dissemination, evident in the cases listed above. The current debates over the utility of peer-to-peer technology, and the competition between proprietary and open source software development models, have emerged as the primary subjects of this tension. Paul Ganley examines the “incentive for creativity” argument in the digital arena, judging it against the various online creative endeavours that are not prompted by the promise of economic rewards, or even of popularity or attribution. The internet represents a paradigm shift in the way we think about and relate to information and creativity. Ganley looks at the examples of blogs, Wikipedia, open source software, websites, search technology and torrents as making existing information more important by facilitating new ways of access.

Wikipedia, with its “self-conscious social-norms-based dedication to objective writing”, eschews author-centric copyright incentives whilst still managing to create an increasingly comprehensive resource that is more widely used than traditional reference works such as the Encyclopedia Britannica. The Mozilla Firefox web browser, developed as open source software, is available for free download. Within a few years of its launch it was able to present a significant alternative to Internet Explorer. This may in part be due to the voluntary development of new versions and fixes to problems, as compared with the need to wait for Microsoft to release proprietary versions and upgrades of its software. Blogs and networking websites supplant news channels and websites as primary sources of information, and also show revolutionary potential, as was seen in the case of the South Korean website www.ohmynews.com, which is credited with shaping the outcome of the presidential elections in Korea in 2002, and in the recent use of Twitter as a means of revolution and protest in the case of Iran.

The copyright industries view all these trends as threats and rely on DRM technology to put protective blankets around the ways of distributing and dealing with works in the digital environment. The internet explodes the sanctity of originality in the context of creative works and encourages the creation of continuous flows that decenter the author function. Lawmaking tends to take up the romantic author assumptions and expand digital control, thereby bypassing the traditional safety valves operating within the body of copyright law itself.

Deborah Halbert expands this notion of continuous flows in her examination of the transformation of authorship in the digital realm. She argues that the author is so firmly embedded in our thought process that we look to the author as the owner instead of looking behind the role of authorship to the production of discourses in society. Our language is encased in the notions of “property”, “intellectual work”, “intellectual products” and “proprietary ideas”, and copyright helps solidify the notion of authorship as a textual boundary. The first potential for authorship in the digital realm, then, is to replace the emphasis on authorship with an emphasis on dialogue. The internet is a collection of people with little concern for the economic rationale offered by copyright; it gives primacy to the potential for dialogue and connectivity. Hypertext is an important form that facilitates this interaction, as one can move from document to document with little regard for, or dependency on, the author function. The one-to-many format of mass communication, in which the recipient has little control, is replaced by the internet’s many-to-many format, a system that puts complete control in the hands of the recipient. As each receiver is responsible for filtering and reshaping the information, the author loses centrality.

This system of flows frees information from the confines of property laws and makes it more akin to traditional oral modes of communication, which allow space for multiple perspectives and voices and for adapting the discourse to specific contexts. The internet, however, is not an entirely free space either; it is beset with its own contradictions and shaped by commercial and proprietary agendas that seek to regulate its flows. Massive numbers of users are offset by the monopolisation of cyberspace by large industry players like Google and Microsoft. The debate on its ultimate shape stands between the monopoly and commercial interests of the industry players, who use copyright as a means of expanding control, and the increasing numbers of users who seek to wrest control from the industry and make the internet a public realm for the free exchange of ideas.
