Category Archives: Research

Stop Internet censorship —Internet Engineers

Parker Higgins and Peter Eckersley wrote for the EFF, 15 December 2011, in An Open Letter From Internet Engineers to the U.S. Congress:
Today, a group of 83 prominent Internet inventors and engineers sent an open letter to members of the United States Congress, stating their opposition to the SOPA and PIPA Internet blacklist bills that are under consideration in the House and Senate respectively.
The signatories include people you may have heard of even if you know nothing about the technical details of the Internet, such as Vint Cerf, and many others who helped produce the network you are using now. I know many of them, and they are right. If you want a free and open Internet, call or write your Senators and Representatives today, and tell them to vote against PIPA and SOPA.

The full text of the letter is appended below.

-jsq

We, the undersigned, have played various parts in building a network called the Internet. We wrote and debugged the software; we defined the standards and protocols that talk over that network. Many of us invented parts of it. We’re just a little proud of the social and economic benefits that our project, the Internet, has brought with it.

Last year, many of us wrote to you and your colleagues to warn about the proposed “COICA” copyright and censorship legislation. Today, we are writing again to reiterate our concerns about the SOPA and PIPA derivatives of last year’s bill, that are under consideration in the House and Senate. In many respects, these proposals are worse than the one we were alarmed to read last year.

If enacted, either of these bills will create an environment of tremendous fear and uncertainty for technological innovation, and seriously harm the credibility of the United States in its role as a steward of key Internet infrastructure. Regardless of recent amendments to SOPA, both bills will risk fragmenting the Internet’s global domain name system (DNS) and have other capricious technical consequences. In exchange for this, such legislation would engender censorship that will simultaneously be circumvented by deliberate infringers while hampering innocent parties’ right and ability to communicate and express themselves online.

All censorship schemes impact speech beyond the category they were intended to restrict, but these bills are particularly egregious in that regard because they cause entire domains to vanish from the Web, not just infringing pages or files. Worse, an incredible range of useful, law-abiding sites can be blacklisted under these proposals. In fact, it seems that this has already begun to happen under the nascent DHS/ICE seizures program.

Censorship of Internet infrastructure will inevitably cause network errors and security problems. This is true in China, Iran and other countries that censor the network today; it will be just as true of American censorship. It is also true regardless of whether censorship is implemented via the DNS, proxies, firewalls, or any other method. Types of network errors and insecurity that we wrestle with today will become more widespread, and will affect sites other than those blacklisted by the American government.

The current bills — SOPA explicitly and PIPA implicitly — also threaten engineers who build Internet systems or offer services that are not readily and automatically compliant with censorship actions by the U.S. government. When we designed the Internet the first time, our priorities were reliability, robustness and minimizing central points of failure or control. We are alarmed that Congress is so close to mandating censorship-compliance as a design requirement for new Internet innovations. This can only damage the security of the network, and give authoritarian governments more power over what their citizens can read and publish.

The US government has regularly claimed that it supports a free and open Internet, both domestically and abroad. We cannot have a free and open Internet unless its naming and routing systems sit above the political concerns and objectives of any one government or industry. To date, the leading role the US has played in this infrastructure has been fairly uncontroversial because America is seen as a trustworthy arbiter and a neutral bastion of free expression. If the US begins to use its central position in the network for censorship that advances its political and economic agenda, the consequences will be far-reaching and destructive.

Senators, Congressmen, we believe the Internet is too important and too valuable to be endangered in this way, and implore you to put these bills aside.

Signed,

  • Vint Cerf, co-designer of TCP/IP, one of the “fathers of the Internet”, signing as private citizen
  • Paul Vixie, author of BIND, the most widely-used DNS server software, and President of the Internet Systems Consortium
  • Tony Li, co-author of BGP (the protocol used to arrange Internet routing); chair of the IRTF’s Routing Research Group; a Cisco Fellow; and architect for many of the systems that have actually been used to build the Internet
  • Steven Bellovin, invented the DNS cache contamination attack; co-authored the first book on Internet security; recipient of the 2007 NIST/NSA National Computer Systems Security Award and member of the DHS Science and Technology Advisory Committee
  • Jim Gettys, editor of the HTTP/1.1 protocol standards, which we use to do everything on the Web
  • Dave Kristol, co-author, RFCs 2109, 2965 (Web cookies); contributor, RFC 2616 (HTTP/1.1)
  • Steve Deering, Ph.D., invented the IP multicast feature of the Internet; lead designer of IPv6 (version 6 of the Internet Protocol)
  • David Ulevitch, CEO of OpenDNS, which offers alternative DNS services for enhanced security.
  • Elizabeth Feinler, director of the Network Information Center (NIC) at SRI International, administered the Internet Name Space from 1970 until 1989 and developed the naming conventions for the internet top level domains (TLDs) of .mil, .gov, .com, .org, etc. under contracts to DoD
  • Robert W. Taylor, founded and funded the beginning of the ARPAnet; founded and managed the Xerox PARC Computer Science Lab which designed and built the first networked personal computer (Alto), the Ethernet, the first internet protocol and internet, and desktop publishing
  • Fred Baker, former IETF chair, has written about 50 RFCs and contributed to about 150 more, regarding widely used Internet technology
  • Dan Kaminsky, Chief Scientist, DKH
  • Esther Dyson, EDventure; founding chairman, ICANN; former chairman, EFF; active investor in many start-ups that support commerce, news and advertising on the Internet; director, Sunlight Foundation
  • Walt Daniels, IBM’s contributor to MIME, the mechanism used to add attachments to emails
  • Nathaniel Borenstein, Chief Scientist, Mimecast; one of the two authors of the MIME protocol, and has worked on many other software systems and protocols, mostly related to e-mail and payments
  • Simon Higgs, designed the role of the stealth DNS server that protects a.root-servers.net; worked on all versions of Draft Postel for creating new TLDs and addressed trademark issues with a complimentary Internet Draft; ran the shared-TLD mailing list back in 1995 which defined the domain name registry/registrar relationship; was a root server operator for the Open Root Server Consortium; founded coupons.com in 1994
  • John Bartas, was the technical lead on the first commercial IP/TCP software for IBM PCs in 1985-1987 at The Wollongong Group. As part of that work, developed the first tunneling RFC, rfc-1088
  • Nathan Eisenberg, Atlas Networks Senior System Administrator; manager of 25K sq. ft. of data centers which provide services to Starbucks, Oracle, and local state
  • Dave Crocker, author of Internet standards including email, DKIM anti-abuse, electronic data interchange and facsimile, developer of CSNet and MCI national email services, former IETF Area Director for network management, DNS and standards, recipient of IEEE Internet Award for contributions to email, and serial entrepreneur
  • Craig Partridge, architect of how email is routed through the Internet; designed the world’s fastest router in the mid 1990s
  • Doug Moeller, Chief Technology Officer at Autonet Mobile
  • John Todd, Lead Designer/Maintainer – Freenum Project (DNS-based, free telephony/chat pointer system), http://freenum.org/
  • Alia Atlas, designed software in a core router (Avici) and has various RFCs around resiliency, MPLS, and ICMP
  • Kelly Kane, shared web hosting network operator
  • Robert Rodgers, distinguished engineer, Juniper Networks
  • Anthony Lauck, helped design and standardize routing protocols and local area network protocols and served on the Internet Architecture Board
  • Ramaswamy Aditya, built various networks and web/mail content and application hosting providers including AS10368 (DNAI) which is now part of AS6079 (RCN); did network engineering and peering for that provider; did network engineering for AS25 (UC Berkeley); currently does network engineering for AS177-179 and others (UMich)
  • Blake Pfankuch, Connecting Point of Greeley, Network Engineer
  • Jon Loeliger, has implemented OSPF, one of the main routing protocols used to determine IP packet delivery; at other companies, has helped design and build the actual computers used to implement core routers or storage delivery systems; at another company, installed network services (T-1 lines and ISP service) into Hotels and Airports across the country
  • Jim Deleskie, internetMCI Sr. Network Engineer, Teleglobe Principal Network Architect
  • David Barrett, Founder and CEO, Expensify
  • Mikki Barry, VP Engineering of InterCon Systems Corp., creators of the first commercial applications software for the Macintosh platform and the first commercial Internet Service Provider in Japan
  • Peter Rubenstein, helped to design and build the AOL backbone network, ATDN.
  • David Farber, distinguished Professor CMU; Principal in development of CSNET, NSFNET, NREN, GIGABIT TESTBED, and the first operational distributed computer system; EFF board member
  • Bradford Chatterjee, Network Engineer, helped design and operate the backbone network for a nationwide ISP serving about 450,000 users
  • Gary E. Miller, Network Engineer specializing in eCommerce
  • Jon Callas, worked on a number of Internet security standards including OpenPGP, ZRTP, DKIM, Signed Syslog, SPKI, and others; also participated in other standards for applications and network routing
  • John Kemp, Principal Software Architect, Nokia; helped build the distributed authorization protocol OAuth and its predecessors; former member of the W3C Technical Architecture Group
  • Christian Huitema, worked on building the Internet in France and Europe in the 80’s, and authored many Internet standards related to IPv6, RTP, and SIP; a former member of the Internet Architecture Board
  • Steve Goldstein, Program Officer for International Networking Coordination at the National Science Foundation 1989-2003, initiated several projects that spread Internet and advanced Internet capabilities globally
  • David Newman, 20 years’ experience in performance testing of Internet infrastructure; author of three RFCs on measurement techniques (two on firewall performance, one on test traffic contents)
  • Justin Krejci, helped build and run the two biggest and most successful municipal wifi networks located in Minneapolis, MN and Riverside, CA; building and running a new FTTH network in Minneapolis
  • Christopher Liljenstolpe, was the chief architect for AS3561 (at the time about 30% of the Internet backbone by traffic), and AS1221 (Australia’s main Internet infrastructure)
  • Joe Hamelin, co-founder of Seattle Internet Exchange (http://www.seattleix.net) in 1997, and former peering engineer for Amazon in 2001
  • John Adams, operations engineer at Twitter, signing as a private citizen
  • David M. Miller, CTO / Exec VP for DNS Made Easy (IP Anycast Managed Enterprise DNS provider)
  • Seth Breidbart, helped build the Pluribus IMP/TIP for the ARPANET
  • Timothy McGinnis, co-chair of the African Network Information Center Policy Development Working Group, and active in various IETF Working Groups
  • Richard Kulawiec, 30 years designing/operating academic/commercial/ISP systems and networks
  • Larry Stewart, built the Etherphone at Xerox, the first telephone system working over a local area network; designed early e-commerce systems for the Internet at Open Market
  • John Pettitt, Internet commerce pioneer, online since 1983, CEO Free Range Content Inc.; founder/CTO CyberSource & Beyond.com; created online fraud protection software that processes over 2 billion transactions a year
  • Brandon Ross, Chief Network Architect and CEO of Network Utility Force LLC
  • Chris Boyd, runs a green hosting company and supports EFF-Austin as a board member
  • Dr. Richard Clayton, designer of Turnpike, widely used Windows-based Internet access suite; prominent Computer Security researcher at Cambridge University
  • Robert Bonomi, designed, built, and implemented, the Internet presence for a number of large corporations
  • Owen DeLong, member of the ARIN Advisory Council who has spent more than a decade developing better IP addressing policies for the internet in North America and around the world
  • Baudouin Schombe, blog design and content trainer
  • Lyndon Nerenberg, Creator of IMAP Binary extension (RFC 3516)
  • John Gilmore, co-designed BOOTP (RFC 951), which became DHCP, the way you get an IP address when you plug into an Ethernet or get on a WiFi access point; current EFF board member
  • John Bond, Systems Engineer at RIPE NCC maintaining AS25152 (k.root-servers.net.) and AS197000 (f.in-addr-servers.arpa. ,f.ip6-servers.arpa.); signing as a private citizen
  • Stephen Farrell, co-author on about 15 RFCs
  • Samuel Moats, senior systems engineer for the Department of Defense; helps build and defend the networks that deliver data to Defense Department users
  • John Vittal, created the first full email client and the email standards still in use today
  • Ryan Rawdon, built out and maintains the network infrastructure for a rapidly growing company in our country’s bustling advertising industry; was on the technical operations team for one of our country’s largest residential ISPs
  • Brian Haberman, has been involved in the design of IPv6, IGMP/MLD, and NTP within the IETF for nearly 15 years
  • Eric Tykwinski, Network Engineer working for a small ISP based in the Philadelphia region; currently maintains the network as well as the DNS and server infrastructure
  • Noel Chiappa, has been working on the lowest level stuff (the IP protocol level) since 1977; name on the ‘Birth of the Internet’ plaque at Stanford; actively helping to develop new ‘plumbing’ at that level
  • Robert M. Hinden, worked on the gateways in the early Internet, author of many of the core IPv6 specifications, active in the IETF since the first IETF meeting, author of 37 RFCs, and current Internet Society Board of Trustee member
  • Alexander McKenzie, former member of the Network Working Group and participated in the design of the first ARPAnet Host protocols; was the manager of the ARPAnet Network Operation Center that kept the network running in the early 1970s; was a charter member of the International Network Working Group that developed the ideas used in TCP and IP
  • Keith Moore, was on the Internet Engineering Steering Group from 1996-2000, as one of two Area Directors for applications; wrote or co-wrote technical specification RFCs associated with email, WWW, and IPv6 transition
  • Guy Almes, led the connection of universities in Texas to the NSFnet during the late 1980s; served as Chief Engineer of Internet2 in the late 1990s
  • David Mercer, formerly of The River Internet, provided service to more of Arizona than any local or national ISP
  • Paul Timmins, designed and runs the multi-state network of a medium sized telephone and internet company in the Midwest
  • Stephen L. Casner, led the working group that designed the Real-time Transport Protocol that carries the voice signals in VoIP systems
  • Tim Rutherford, DNS and network administrator at C4
  • Mike Alexander, helped implement (on the Michigan Terminal System at the University of Michigan) one of the first EMail systems to be connected to the Internet (and to its predecessors such as Bitnet, Mailnet, and UUCP); helped with the basic work to connect MTS to the Internet; implemented various IP related drivers on early Macintosh systems: one allowed TCP/IP connections over ISDN lines and another made a TCP connection look like a serial port
  • John Klensin, Ph.D., early and ongoing role in the design of Internet applications and coordination and administrative policies
  • L. Jean Camp, former Senior Member of the Technical Staff at Sandia National Laboratories, focusing on computer security; eight years at Harvard’s Kennedy School; tenured Professor at Indiana University’s School of Informatics with research addressing security in society.
  • Louis Pouzin, designed and implemented the first computer network using datagrams (CYCLADES), from which TCP/IP was derived
  • Carl Page, helped found eGroups, the biggest social network of its day, 14 million users at the point of sale to Yahoo for around $430,000,000, at which point it became Yahoo Groups
  • Phil Lapsley, co-author of the Internet Network News Transfer Protocol (NNTP), RFC 977, and developer of the NNTP reference implementation
  • Jack Haverty (MSEE, BSEE MIT 1970), Principal Investigator for several DARPA projects including the first Internet development and operation; Corporate Network Architect for BBN; Founding member of the IAB/ICCB; Internet Architect and Corporate Founding Member of W3C for Oracle Corporation
  • Glenn Ricart, Managed the original (FIX) Internet interconnection point
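
The letter’s point that DNS blacklisting is trivially circumvented by determined infringers, while still breaking name resolution for everyone else, is easy to demonstrate. The sketch below is my own illustration, not part of the letter: it assumes the Python dnspython library is installed, and it uses Google’s public resolver 8.8.8.8 and the placeholder name example.com purely as examples.

    # Illustrative sketch only: a DNS-level block is bypassed by asking a
    # different resolver. Requires the dnspython package (pip install dnspython).
    import dns.resolver

    def lookup(name, nameserver=None):
        """Resolve an A record, optionally via an explicitly chosen nameserver."""
        resolver = dns.resolver.Resolver(configure=(nameserver is None))
        if nameserver is not None:
            resolver.nameservers = [nameserver]
        try:
            return [rr.address for rr in resolver.resolve(name, "A")]
        except dns.resolver.NXDOMAIN:
            # A blacklisting resolver answers as if the domain did not exist.
            return []

    print(lookup("example.com"))             # the system resolver, possibly filtered
    print(lookup("example.com", "8.8.8.8"))  # the same question to another resolver

Those two extra lines of effort are available to any infringer, which is why filtering at the resolver mainly affects users who never look past their default settings.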

Lessig’s Herculean Holiday Present: Reboot the FCC

Here’s a good test for the new U.S. Executive: to recognize that steady pragmatism means radical change, starting with the FCC:
The solution here is not tinkering. You can’t fix DNA. You have to bury it. President Obama should get Congress to shut down the FCC and similar vestigial regulators, which put stability and special interests above the public good. In their place, Congress should create something we could call the Innovation Environment Protection Agency (iEPA), charged with a simple founding mission: “minimal intervention to maximize innovation.” The iEPA’s core purpose would be to protect innovation from its two historical enemies—excessive government favors, and excessive private monopoly power.

Reboot the FCC, We’ll stifle the Skypes and YouTubes of the future if we don’t demolish the regulators that oversee our digital pipelines. By Lawrence Lessig, Newsweek Web Exclusive, 23 Dec 2008

Lessig gets the connection with his old topic of intellectual property and copyright. Those are monopolies granted by the federal government, and they have been abused by the monopoly holders just like the holders of communication monopolies: Continue reading

Jettisoned: 8 Centuries of Common Carriage Law

Someone at CAIDA (presumably kc Claffy, by the writing style) went to
an invitation-only intensely interactive workshop on the topic of Internet infrastructure economics. participants included economists, network engineers, infrastructure providers, network service providers, regulatory experts, investment analysts, application designers, academic researchers/professors, entrepreneurs/inventors, biologists, oceanographers. almost everyone in more than one category.

internet infrastructure economics: top ten things i have learned so far, by webmaster, according to the best available data, October 7th, 2007

and wrote up a report including this summary of the political situation:
…and it turns out that in the last 5 years the United States — home of the creativity, inspiration and enlightened government forces (across several different agencies) that gave rise to the Internet in the first place — has thoroughly jettisoned 8 centuries of common carriage law that we critically relied on to guide public policy in equitably provisioning this kind of good in society, including jurisprudence and experience in determining ‘unreasonable discrimination’.

and our justification for this abandonment of eight centuries of common law is that our “government” — and it turns out most of our underinformed population (see (1) above) — believes that market forces will create an open network on their own. which is a particularly suspicious prediction given how the Internet got to where it is today: in the 1960s the US government funded people like vint cerf and steve crocker to build an open network architected around the ‘end to end principle’, the primary intended use of which was CPU and file sharing among government funded researchers. [yes, the U.S. government fully intended to design, build, and maintain a peer-to-peer file-sharing network!]

That’s right, folks: “resource sharing” was the buzzword back then, and every node was supposed to be potentially a peer to every other. Continue reading

Revive OTA?

Just last week I was talking to somebody who used to work for the Office of Technology Assessment, which was a bipartisan Congressional research group that brought in various outside experts to help out. She recognized me from various times I showed up.

Serendipitously, Susan Crawford says “OTA: You Are Missed“.

Nearly a decade ago, Congress closed its Office of Technology Assessment. The president of the Federation of American Scientists, a former OTA employee, called the closing the “equivalent of a self-inflicted lobotomy.” Between 1974 and 1995 OTA produced 750 thorough reports about a wealth of scientific and technical studies.

Since then, the Congressional Research Service (thanks, CDT!) has been providing Congress with quick summaries of issues, but CRS doesn’t have the deep technical expertise that OTA did, or the resources to do sustained studies. The National Academies have the time and the resources, but they take too long and they have too many constituents to serve.

In re-writing the Telecom Act and jumping into having the FCC regulate the internet, it would be good to have a neutral, expert, bipartisan group advising Congress about the consequences of their actions.

For example, such a group might have told Congress that current antitrust law isn’t well positioned to deal with problems of lack of competition since broadband was wrenched from one legal regime into another.

-jsq

Net Neutrality Won’t be Fixed by Anti-Trust: B. Cherry

At TPRC Sunday, Barbara Cherry walked through the evolution of bodies of law in the U.S., and made some fascinating observations, including:
  • Net neutrality is a manifestation of moving from the Title II industry-specific legal regime under the Communications Act of 1934 to a Title I-based regime with greater reliance on a general business regime of antitrust and consumer protection laws, as the FCC did in August 2005 for wireline broadband access service to the Internet and in 2002 for cable modem access service.
  • Moving among traditional and deregulatory legal regimes for transportation carriers did not strip common carriage status; it merely changed the legal overlay that enforced it.
  • The FCC stripping broadband of common carriage was a radical departure: nothing classified as a common carrier had ever been declassified before.
  • Anti-trust doesn’t automatically cover problems previously addressed in the Title II industry-specific regime when a business is moved to the general business regime; anti-trust needs modification to do this.
  • Liability is also different between regimes. Without tariffs, some limited-liability protections are gone, and common carriers are now potentially fully liable for damages. The filed rate doctrine should have no applicability in a detariffed world.
The above is, I think, a reasonably close paraphrase of some of her points.

I infer from this that the economists, politicians, and telco and cableco executives who say we shouldn’t regulate (because we don’t know what will happen, and anti-trust will catch problems if they occur) are not taking into account that anti-trust doesn’t automatically apply to or address problems in the new legal regime into which broadband has been thrust.

In other words, people see things in the context of what they know, and economists don’t usually know about legal evolution.

Telco and cableco executives, on the other hand, may well have business and political reasons for claiming there’s no need for regulation, whether or not they know that existing anti-trust law is inadequate.

You can’t have markets without some form of property rights and contract law. Communication infrastructure likewise needs basic legal infrastructure.

I see little or no understanding of these points at the FCC, the FTC, or in Congress.

Prof. Cherry’s whole paper is well worth reading: Consumer Sovereignty: Redrawing the Boundaries Between Industry-Specific and General Business Legal Regimes for Telecommunications and Broadband Access Services, by Barbara A. Cherry, TPRC, 30 Sep 2007

-jsq

PS: Markup for increased accuracy kindly supplied by Prof. Cherry.

Benton, Universal Service, TPRC, Social Contract

Many good papers on aspects of universal service at the Benton Universal Service Project:
As Congress and the FCC put universal service reform at the top of its telecom policy agenda, the Benton Foundation is supporting a series of papers advancing a new vision for Universal Service — for making broadband as universal as telephone service is today and a pathway for retaking the lead as a broadband leader. This project outlines the policy rationale, the pathway forward, and the 12 key steps for advancing universal broadband and modernizing the universal service program for the information age.
Many of the authors of the papers are on a panel this afternoon at TPRC, including topics such as
The social contract implicit in telephony universal service versus the social contract implicit in broadband universal service.
Hm, maybe Verizon could learn from that one?

-jsq

Internet as Analysis Supplier: Is the Surge Working?

Steven Levitt points out that there are other ways to measure the effects of a military action than listening to politicians or generals, and that the Internet can promote both the production of such measures and their analysis by multiple parties. On several measures, M.I.T. professor Michael Greenstone finds results of the U.S. “surge” in Iraq to be mixed. Then he brings in another measure:
The most interesting part of Greenstone’s paper is his analysis of the pricing of Iraqi government debt. The Iraq government has issued bonds in the past. These entitle the owner of the bond to a stream of payments over a set period of time, but only if the government does not default on the loan. If Iraq completely implodes, it is highly unlikely that these bonds will be paid off. How much someone would pay for the rights to that stream of payments depends on their estimate of the probability that Iraq will implode.

The bond data, unlike the other sources he examines, tell a clear story: the financial markets say the surge is not working. Since the surge started, the market’s estimate of the likelihood of default by the Iraqi government has increased by 40 percent.

Is the Surge Working? Ask the Data, Not the Politicians, By Steven D. Levitt, Freakonomics, September 15, 2007, 11:55 am
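
To make the bond logic concrete, here is a hedged back-of-the-envelope sketch; it is my illustration of the general idea, not Greenstone’s actual methodology, and the prices, recovery value, and interest rate in it are made up. Under simple risk-neutral pricing, a bond’s price is its discounted expected payoff, so an observed price can be inverted into an implied probability of default, and a falling price reads as rising implied default risk.

    # Toy example of reading default risk out of a bond price (illustrative only).
    # One-period bond: price = [(1 - p) * face + p * recovery] / (1 + r)
    # Solving for p gives the default probability implied by an observed price.

    def implied_default_probability(price, face=100.0, recovery=20.0, r=0.05):
        """Default probability implied by a bond price, with assumed recovery and rate."""
        expected_payoff = price * (1 + r)      # undo the discounting
        return (face - expected_payoff) / (face - recovery)

    # Made-up prices: if the bond trades lower after the surge began,
    # the implied probability of default is higher.
    p_before = implied_default_probability(70.0)
    p_after = implied_default_probability(62.0)
    print(f"implied default probability before: {p_before:.1%}")
    print(f"implied default probability after:  {p_after:.1%}")
    print(f"relative increase: {p_after / p_before - 1:.0%}")

Greenstone works with real market prices and a much more careful model, but the direction of the inference is the same: the price the market will pay reflects how likely it thinks an Iraqi default is.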

This kind of analysis seldom gets written for traditional channels because, in Levitt’s numbering, (2) there’s no academic incentive for it and (3) the only money in it is usually from special interests. Here’s the main point:
1. This paper shows how good economic analysis can contribute in a fundamental way to public policy. Anyone who reads Greenstone’s article will recognize that it is careful and thorough. It is even-handed and apolitical. It combines state-of-the-art data analysis techniques with economic logic (e.g., using market prices to draw conclusions about how things are going).

4. The internet can potentially solve both problems (2) and (3) above, leading to an increased supply of good, timely analysis. If people like Greenstone can immediately get their findings into the public debate through the internet, it gives a real purpose (not just an academic one) to doing the work. In addition, there are now online peer-reviewed academic journals that have greatly sped the time from submission to publication, potentially increasing the academic payoff to someone like Greenstone. With many respected economists now blogging, there is also a vehicle for these folks to weigh in on the quality of policy-related economic writings — like I am doing in this blog post.

If the Internet helps focus many eyes on bugs and make them shallow, why can’t it do the same with political and military actions?

Right now it can. Without net neutrality it wouldn’t be able to.

-jsq