Redesigning the Internet: Ports and Society

I’d heard about the NSF’s Future Internet Design (FIND) project, but hadn’t really paid attention to it. There was a panel at TPRC, with Dave Clark and other participants. My thoughts here are perhaps in some way derived from what somebody said, but no panel participants should be held responsible for what I write here.

Among the many interesting issues is what to do about firewalls: redesign the Internet to upgrade them, or to eliminate the need for them?

How could you eliminate the need for firewalls? Well, they filter by ports, and they need to do that because well-known ports are the way Internet clients traditionally find servers. That’s sort of a historical accident. The MIT CHAOSNet protocols did not have well-known ports. Xerox’s network protocols used random numbers for rendezvous.
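To make the alternative concrete, here is a toy Python sketch of rendezvous without a well-known port: the server binds to whatever port the OS hands it and publishes that port through a lookup mechanism, rather than squatting on a fixed number like 80. The in-process `directory` dict is a hypothetical stand-in for whatever out-of-band lookup (a directory service, a DNS record, an exchanged secret) a real protocol would use.

```python
import socket
import threading

# Hypothetical out-of-band directory; a real design would use some
# external lookup service rather than a shared dict.
directory = {}

def serve():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick an arbitrary port
    srv.listen(1)
    directory["web"] = srv.getsockname()[1]  # publish the rendezvous port
    conn, _ = srv.accept()
    conn.sendall(b"hello")
    conn.close()
    srv.close()

t = threading.Thread(target=serve)
t.start()
while "web" not in directory:    # wait until the server has published its port
    pass
client = socket.create_connection(("127.0.0.1", directory["web"]))
reply = client.recv(5)
client.close()
t.join()
print(directory["web"], reply)
```

The point of the sketch is that nothing in the client assumes a fixed port number; only parties with access to the directory can find the service, which is roughly how Chaosnet- or XNS-style rendezvous differed from TCP's well-known ports.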

But if a firewall can’t filter on ports, haven’t you made it worse?

Not necessarily. Suppose you used a bigger port address space, say 32 bits instead of 16. An intruder could then need up to 2^32 tries to discover which port your web server is on, and long before they found it, your intrusion detection software would notice them searching.
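The arithmetic behind that claim is simple. Assuming one service hidden among 2^k ports and an attacker guessing uniformly at random without repeats, the expected number of probes is (2^k + 1)/2:

```python
# Back-of-envelope cost of finding one hidden service by port scanning.
# Assumes uniform random guessing without replacement over a 2**k space.

def expected_probes(port_bits: int) -> float:
    space = 2 ** port_bits
    return (space + 1) / 2

print(expected_probes(16))  # today's 16-bit ports: about 33 thousand probes
print(expected_probes(32))  # a 32-bit space: about 2.1 billion probes
```

Tens of thousands of probes can fly under the radar; billions of probes are hard to miss, which is what gives the intrusion detector its opening.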

However, it’s necessary to look at effects on all stakeholders, not just at technical complexity, ease of implementation, or even completeness of solution of one particular problem. ISPs want to be able to tell applications apart for traffic planning, and today they use ports to distinguish them. Take away meaningful port numbers and all they can see is traffic arriving on apparently random ports. Academic researchers and commercial companies do the same kind of port-based classification.
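That classification is usually little more than a table lookup on the destination port. A minimal illustrative sketch (the port list is a few IANA well-known assignments, not any real ISP's table):

```python
# Port-based traffic classification in miniature: map well-known
# destination ports to application labels. Illustrative entries only.

WELL_KNOWN = {
    25: "SMTP",
    53: "DNS",
    80: "HTTP",
    443: "HTTPS",
}

def classify(dst_port: int) -> str:
    return WELL_KNOWN.get(dst_port, "unknown")

print(classify(443))    # a recognizable application
print(classify(50123))  # what all traffic looks like under random-port rendezvous
```

Under a random-port design, every flow falls into the "unknown" bucket, which is exactly why ISPs and traffic researchers are stakeholders in this decision.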

Computer science people aren’t trained in motivations or stakeholder analysis; they’re trained in performance. This is one reason the FIND project includes participants who are not computer scientists: it has economists and may be adding lawyers. It needs social, cultural, and policy input.

One of the biggest issues is trust, because trust is the source of security. With only a few people participating, you can have trust and be open to all parties. When you add people you don’t trust, you can’t have quite as much transparency. You need more constraints, for example to stop viruses. The buzzword for this is “trust-modulated transparency.”

To have trust, you first need identity. However, identity could be private among communicating participants: nobody else need even see the strategy for identification. The other end of the identity spectrum is government-issued identity, where everybody who can watch can tell who’s talking. Spooks and cops like that, since it lets them identify people who don’t wish to be identified. Other people may not like it. Given today’s news, bloggers in Myanmar come to mind: they’re all registered with the government, so the ones who have been sending out pictures of the suppression of demonstrations would likely have preferred less identity and more anonymity.

Personally, I’ve noticed that distribution is one of the main things that has permitted the Internet to grow and spread as rapidly as it has, and I think centralized identity would be counterproductive.

Hm, maybe someday we could get to a Buddhist trust model, where trust works best when you don’t know your own identity….

I suppose that would be the opposite of the duopoly model, where whichever of your two local first-mile ISPs you choose then chooses your identity as paying user, cracker, political undesirable, terrorist, etc.

-jsq