June 24, 2024

Nuts and Bolts of Net Discrimination: Encryption

I’ve written several times recently about the technical details of network discrimination, because understanding these details is useful in the network neutrality debate. Today I want to talk about the role of encryption.

Scenarios for network discrimination typically involve an Internet Service Provider (ISP) who looks at users’ traffic and imposes delays or other performance penalties on certain types of traffic. To do this, the ISP must be able to tell the targeted data packets apart from ordinary packets. For example, if the ISP wants to penalize VoIP (Internet telephony) traffic, it must be able to distinguish VoIP packets from ordinary packets.

One way for users to fight back is to encrypt their packets, on the theory that encrypted packets will all look like gibberish to the ISP, so the ISP won’t be able to tell one type of packet from another.

To do this, the user would probably use a Virtual Private Network (VPN). The idea is that whenever the user’s computer wanted to send a packet, it would encrypt that packet and then send the encrypted packet to a “gateway” computer that was outside the ISP’s network. The gateway computer would then decrypt the packet and send it on to its intended destination. Incoming packets would follow the same path in reverse – they would be sent to the gateway, where they would be encrypted and forwarded on to the user’s computer, which would decrypt them. The ISP would see nothing but a bi-directional stream of packets, all encrypted, flowing between the user’s computer and the gateway.
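A minimal sketch of that arrangement in Python. This is a toy model, not a real VPN: the hash-based keystream stands in for a real cipher such as AES, and every name and value here is invented for illustration. The point is the packet structure: the true destination rides inside the encrypted payload, so the ISP only ever sees traffic between the user and the gateway.

```python
import hashlib

SHARED_KEY = b"user-gateway shared secret"  # hypothetical, set up out of band

def keystream(key: bytes, nonce: bytes):
    """Toy keystream from iterated hashing -- a stand-in for a real
    cipher (e.g. AES); not secure, for illustration only."""
    counter = 0
    while True:
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        yield from block
        counter += 1

def crypt(data: bytes, key: bytes, nonce: bytes) -> bytes:
    # XOR with the keystream; applying it twice recovers the plaintext
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce)))

def user_send(true_dest: str, payload: bytes, nonce: bytes) -> bytes:
    # The true destination travels *inside* the encrypted tunnel packet;
    # on the wire, the packet is addressed only to the gateway.
    inner = true_dest.encode() + b"\x00" + payload
    return crypt(inner, SHARED_KEY, nonce)

def gateway_receive(tunnel_packet: bytes, nonce: bytes):
    # The gateway decrypts, recovers the true destination, and forwards.
    inner = crypt(tunnel_packet, SHARED_KEY, nonce)
    dest, _, payload = inner.partition(b"\x00")
    return dest.decode(), payload

nonce = b"pkt-0001"
wire = user_send("voip.example.com", b"RTP audio frame", nonce)
print(gateway_receive(wire, nonce))  # ('voip.example.com', b'RTP audio frame')
```

On the wire, `wire` is gibberish addressed to the gateway; the ISP cannot read the payload or the true destination.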

The most the user can hope for from a VPN is to force the ISP to handle all of the user’s packets in the same way. The ISP can still penalize all of the user’s packets, or it can single out randomly chosen packets for special treatment, but those are the only forms of discrimination available to it. The VPN has some cost – packets must be encrypted, decrypted, and forwarded – but the user might consider it worthwhile if it stops network discrimination.

(In practice, things are a bit more complicated. The ISP might be able to infer which packets are which by observing the size and timing of packets. For example, a sequence of packets, all of a certain size and flowing with metronome-like regularity in both directions, is probably a voice conversation. The user might use countermeasures, such as altering the size and timing of packets, but that can be costly too. To simplify our discussion, let’s pretend that the VPN gives the ISP no way to distinguish packets from each other.)
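To make that caveat concrete, here is a toy sketch of the size-and-timing inference. The thresholds and synthetic flows are illustrative assumptions, not measurements from real traffic:

```python
import random
import statistics

def looks_like_voip(packets, size_tol=8, jitter_tol_ms=5.0):
    """Heuristic traffic-analysis sketch: an encrypted flow whose packet
    sizes are nearly constant and whose inter-arrival times are
    metronome-regular is probably a voice call. Thresholds are
    illustrative, not tuned."""
    if len(packets) < 10:
        return False
    times = [t for t, _ in packets]
    sizes = [s for _, s in packets]
    gaps = [b - a for a, b in zip(times, times[1:])]
    size_spread = max(sizes) - min(sizes)
    gap_jitter = statistics.pstdev(gaps)
    return size_spread <= size_tol and gap_jitter <= jitter_tol_ms

# A typical voice codec emits a ~200-byte packet every 20 ms:
voip_flow = [(i * 20.0, 200) for i in range(50)]

# Bulk transfer: full-size packets at irregular intervals:
rng = random.Random(1)
t, bulk_flow = 0.0, []
for _ in range(50):
    t += rng.uniform(0.1, 40.0)       # bursty, irregular timing
    bulk_flow.append((t, 1500))       # constant size, but no rhythm

print(looks_like_voip(voip_flow))  # True
print(looks_like_voip(bulk_flow))  # False
```

Even though both flows are "encrypted" (the classifier never looks at contents), the rhythm alone gives the voice call away.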

The VPN user and the ISP are playing an interesting game of chicken. The ISP wants to discriminate against some of the user’s packets, but doesn’t want to inconvenience the user so badly that the user discontinues the service (or demands a much lower price). The user responds by making his packets indistinguishable and daring the ISP to discriminate against all of them. The ISP can back down, by easing off on discrimination in order to keep the user happy – or the ISP can call the user’s bluff and hamper all or most of the user’s traffic.

But the ISP may have a different and more effective strategy. If the ISP wants to hamper a particular application, and there is a way to manipulate the user’s traffic that affects that application much more than it does other applications, then the ISP has a way to punish the targeted application. Recall my previous discussion of how VoIP is especially sensitive to jitter (unpredictable changes in delay), but most other applications can tolerate jitter without much trouble. If the ISP imposes jitter on all of the user’s packets, the result will be a big problem for VoIP apps, but not much impact on other apps.
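A back-of-the-envelope simulation of that tactic (all numbers invented for illustration): every packet is delivered and total bandwidth is untouched, but random added delay pushes many voice frames past their playout deadline.

```python
import random

def add_jitter(departures, max_jitter_ms=80.0, rng=None):
    """Forward every packet, but delay each by a random amount.
    Bandwidth is unchanged; only the delay *variance* (jitter) grows."""
    rng = rng or random.Random(42)
    return [t + rng.uniform(0.0, max_jitter_ms) for t in departures]

def late_for_playout(departures, arrivals, playout_deadline_ms=40.0):
    # A VoIP receiver must play each 20 ms frame on schedule; a packet
    # arriving more than `playout_deadline_ms` late is effectively lost.
    return sum(1 for d, a in zip(departures, arrivals)
               if a - d > playout_deadline_ms)

sent = [i * 20.0 for i in range(500)]   # one voice frame every 20 ms
recv = add_jitter(sent)
dropped = late_for_playout(sent, recv)
print(f"{dropped / len(sent):.0%} of frames miss the playout deadline")
```

A bulk download over the same link would merely finish about 40 ms later on average; the call, by contrast, becomes unusable.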

So it turns out that even using a VPN, and encrypting everything in sight, isn’t necessarily enough to shield a user from network discrimination. Discrimination can work in subtle ways.


  1. Tom Hudson says

    Alex Satrapa said, of adding loss: “1% will be high enough to cause dropouts for VoIP purposes.”

    Umm, isn’t streaming media (video or audio) the classic case of a loss-tolerant application? I thought I’d read stacks of papers that reported adequate quality with loss rates well over 1%. That’s why they (can) use UDP instead of TCP.

  2. Alex Satrapa says

    Forget intentionally adding jitter – all the ISP has to do in order to mess things up for VoIP services is randomly drop packets. You can randomly drop 1% of the packets from a particular client without seriously degrading their TCP connections, and 1% will be high enough to cause dropouts for VoIP purposes. Randomly dropping packets might be a way of marking a particular service as “consumer level” with premiums charged for gamers or VoIP services.

    I think it’s just time to go set up a new Internet, one where salespeople just don’t get a say. Sure it’ll only be 1 bit per second to the USA, but we’ll be free!
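Alex’s 1% figure can be put in rough numbers. This is a hedged back-of-the-envelope sketch with illustrative rates: TCP repairs loss by retransmitting, so a file transfer pays only a small overhead, while VoIP cannot wait for a retransmission, so every lost frame is an audible gap.

```python
import random

def simulate_loss(n_packets, loss_rate, rng):
    """Return a delivery flag per packet under uniform random loss."""
    return [rng.random() >= loss_rate for _ in range(n_packets)]

rng = random.Random(7)
delivered = simulate_loss(10_000, 0.01, rng)

# TCP: every lost packet is retransmitted, so the file arrives intact;
# the cost is only ~1% extra traffic (plus some delay).
retransmissions = delivered.count(False)

# VoIP: no retransmission (a late voice frame is useless), so every loss
# is a gap; at 50 frames/s, 1% loss means a dropout every ~2 seconds.
gaps_per_minute = 0.01 * 50 * 60

print(f"TCP overhead: {retransmissions / len(delivered):.1%} extra packets")
print(f"VoIP: ~{gaps_per_minute:.0f} dropouts per minute")
```

The asymmetry is the whole trick: the same 1% loss is nearly invisible to one application and crippling to the other.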

  3. Ned Ulbricht says

    You wrote: “The point I mainly wanted to get across, though, is that nearly *all* internet traffic will be encrypted in about 7-10 years, making it impossible to do any sort of layer 7 discrimination. This is not a case of ‘fighting back’, this is a case of ‘it’s gonna happen anyway’.”


    You mention “end-to-end symmetric encryption” and opine, “It would be suicide to appear to be in favor of unencrypted traffic, with all of the ‘cybercrime’ going around.”

    For a counter-argument, I would refer you to the DoJ reply brief (27 Feb 2006) in ACE v FCC.

    Federal, state, and local governments depend on lawful electronic surveillance to protect the national security and public safety of the United States and its people. […]

    The ability of law enforcement agencies to perform this vital task can be frustrated when communications providers deploy new technologies that fail to accommodate authorized surveillance. Accordingly, the Executive Branch and Congress jointly undertook in 1994 to enact legislation that would protect the ability of law enforcement agencies to conduct authorized surveillance in the face of evolving telecommunications technologies and services. The Communications Assistance for Law Enforcement Act is the result of that joint undertaking.

    CALEA subjects telecommunications carriers and equipment manufacturers to a variety of surveillance-related obligations. […]

    CALEA was sought by the Executive Branch and enacted by Congress because ongoing technological changes in the telecommunications industry were progressively undermining the ability of law enforcement agencies to carry out authorized surveillance. As the Director of the FBI explained, “the technology is running at such a pace that we could be out of the wiretap business in a very short period of time.” […]

    As part of this effort to ensure that technological changes do not effectively repeal legal surveillance authority, Congress adopted a flexible and forward-looking definition of “telecommunications carrier.”

    (pp.1-4 / pp.9-12 in PDF)

    Over the years, the DoJ seems to have had a fairly consistent opinion regarding consumer “end-to-end encryption”.

  4. Thanks for responding to my encryption comment so thoroughly.

    I completely agree that, right now, the ISP can trivially prioritize based upon the “is it encrypted?” status. Most encrypted protocols run on well known ports, and have easily identifiable handshake and header patterns even when they don’t. You can always charge based upon the amount of bandwidth a person uses, and encryption really has no bearing on that whatsoever. The point I mainly wanted to get across, though, is that nearly *all* internet traffic will be encrypted in about 7-10 years, making it impossible to do any sort of layer 7 discrimination. This is not a case of “fighting back”, this is a case of “it’s gonna happen anyway”.

    IPSec is actually a part of the IPv6 specification, and there are already automated ways of setting up end-to-end symmetric encryption. IPv6 also includes possibly encrypted “routing” headers, specifying various hops, that will allow you to obfuscate the actual destination, no VPN required, random-walk style. Symmetric ciphers like Blowfish (commonly used in IPSec) are incredibly fast (24 instruction cycles per 64-bit block of data, on 32-bit systems, according to the designer), and with processor speeds getting faster at a much slower rate than encryption standards are getting stronger, the amount of resources required to secure any communication line will be trivial — you won’t even have to change the existing protocols, because IP itself will take care of it.

    You mentioned the ability to add “jitter” into a stream in such a way that it would interfere with certain, low-latency protocols like VoIP. I don’t see this as being plausible in the long term. Right now, it’s perfectly plausible, but this is like saying that CD burners are susceptible to buffer underruns. They were, of course, and the way we solved that was to increase the buffer size (and then SafeBurn technology, but this is unrelated). Similarly, you can “smooth over” any amount of jitter with an appropriate buffer, and VoIP can easily handle a consistent half-second buffer. That’s about what you have on cellphone-to-cellphone links, anyway, and people don’t notice until they’re talking over the phone to someone in the same room. All the gamers would scream if their ping times went above 500ms. Add to this that latency is getting lower across the board, and bandwidth is getting higher across the board, and we have a system that converges upon jitter not mattering so much.

    While I don’t disagree with anything you’ve said here, I still don’t think it will be all that relevant a few years from now. ISPs will have the choice of charging more for encrypted streams, but they won’t, because of the privacy issues involved. It would be suicide to appear to be in favor of unencrypted traffic, with all of the “cybercrime” going around. They will still have the choice of treating your packets differently, but the fact that they’re encrypted won’t be what knocks you into the “low priority” category: most traffic will be encrypted, and it will be the “legitimate” traffic that first starts to become encrypted — for example, banks switched to using SSL fairly quickly, both internally and customer-facing, for the obvious security-related reasons.

    As you mentioned, to do any sort of protocol-specific filtering of an encrypted stream, the ISP would have to do flow analysis. This is kinda difficult, especially if it were important to avoid false-positives, which I think it would be. They can only use what they know about the packets, which will be when, from where, to where, and how large. The issue: constant streams can be gaming, file transfers, voice, video, remote terminals, or anything else we can make that involves high throughput. It’s a very difficult problem to tell these apart if you can’t inspect the packets, even if you do know what time it is, who’s sending them, and what their first routing hop is.

    None of this, of course, affects the viability of bandwidth discrimination. People using more bandwidth get lower priority in the queuing process. That’s nothing new; routers have been doing that for decades under the guise of “fair queuing”. I don’t think that has anything to do with the “net discrimination” you’ve been discussing.

    I’m willing to hear evidence to the contrary, of course, but even accepting everything you’ve said in this series of bloggings as pure, unadulterated fact, I still think it looks like “net discrimination” won’t matter in a few years. People will just start paying based upon the bandwidth they use — instead of a flat fee — and that will be that. That’s how the larger bandwidth connections have been priced since their inception (well, since people started charging, anyway). More likely, the ISPs will tier their plans into inexpensive, limited-bandwidth plans for the people who just browse the web and read their email, and expensive, unlimited-bandwidth ones for the people who want to do lots of gaming and downloads. You know, like cell phone plans.

    In short, although the ISPs can make a quick buck or two in the next year or so, there’s no need to worry about long-term discrimination. It just won’t happen. Can’t happen. I think.
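The playout-buffer argument in this comment can be sketched as a toy simulation (the jitter distribution and sizes are invented): a buffer longer than the worst-case jitter absorbs it completely, at the price of added latency.

```python
import random

def dropouts_with_buffer(send_times, arrival_times, buffer_ms):
    """Play frames on a fixed schedule starting `buffer_ms` after the
    first frame was sent; a frame not yet arrived at its slot is lost."""
    start = send_times[0] + buffer_ms
    return sum(1 for sent, arrived in zip(send_times, arrival_times)
               if arrived > start + (sent - send_times[0]))

rng = random.Random(3)
sent = [i * 20.0 for i in range(500)]                  # one frame every 20 ms
arrived = [t + rng.uniform(0.0, 400.0) for t in sent]  # up to 400 ms of jitter

for buf in (50, 200, 500):
    print(f"{buf} ms buffer: {dropouts_with_buffer(sent, arrived, buf)} dropouts")
```

With a 500 ms buffer every frame arrives in time, matching the comment’s claim that a half-second buffer smooths over the jitter; the cost is that the whole conversation runs half a second behind.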

  5. Dan Krimm says


    Isn’t it possible that the discrimination logic will work in the opposite direction? That is, opt-in versus opt-out?

    If “premium” service is only available to those who pay to opt-in, they will have to provide the ISP a way to verify their packets in order to get premium service. Thus encrypting packets would only ensure that you could never get premium service, unless you contracted with the ISP to decrypt your packets along the way (perhaps at entry points to the premium pipes?).

    Are you sure that the differentiation would come in the form of artificial degradation as opposed to privileged premium throughput? How is this stuff being designed?

  6. Edward Kuns says


    P2P traffic is more demanding only by its sheer bulk, as others have already pointed out. P2P users account for a clear majority of network traffic on many networks, and for a small percentage of users.

    The problem is that ISPs have an incentive to discriminate against those who use the most bandwidth on a sustained basis (which is reasonable), but they also have an incentive to discriminate against external services where they provide a competing service (such as VoIP). When there are only one or two high-speed ISPs in an area — all of which are also VoIP providers — they have a significant incentive to disrupt external providers of VoIP.

    They seemingly have an even larger incentive to discriminate against the P2P users, due to the cost of serving those people, yet they seem to talk more about discriminating not against high-bandwidth users but against external service providers.

  7. How, exactly, is P2P traffic any more demanding on an ISP than other traffic, bandwidth aside? All the ISP sees (or is supposed to see, anyway) is packets flowing by with various destination addresses. So unless some addresses are somehow more expensive to route to than others, the only variable cost the traffic imposes on the ISP is bandwidth!

  8. I have already claimed extreme ignorance on this subject, so keep that in mind when I ask:

    I was reading about µTorrent 1.5 recently, and they mentioned Protocol Encryption (Message Stream Encryption). Is this the sort of encryption that prevents packet discrimination?


  9. Mitch Golden says

    One point that shouldn’t be missed here is that many, if not most, people who use VPNs for business are using them to access a system at work. Quite often they are using a remote desktop via Microsoft Terminal Services or VNC or some such. If the ISP put jitter into such a connection it would immediately be apparent and unpleasant to the user. Moreover, the data stream from the use of a remote desktop over VPN probably looks quite a bit like an encrypted VoIP stream.

  10. Interesting thoughts. Let’s talk about counter-measures for a moment. If I wanted to manage a user’s traffic and he ran it all through a VPN to a host outside my network, I would use the same techniques I do today. I would watch his instantaneous bandwidth consumption and his aggregate bandwidth consumption over time. I would monitor that via his source IP address. When he hits numbers that seem excessive I would simply throttle his traffic via his IP address. As others have said: why should I allow five percent of my users to degrade the quality of service I provide to the other 95 percent? Rest assured, I will not. I will instead terminate the user’s account and willingly pay his first month with Verizon.

    I agree with your thoughts in principle: the ISP has no alternative other than to impact all of the traffic from that individual. I also think it is pertinent to realize that this is the only fair way to treat customers. If you can’t discriminate by traffic type to ensure that latency-sensitive applications are favored and volume applications are pushed to the rear of the queue, then you have to discriminate by individual user.

    I also think it pertinent to mention that at some point most, if not all, packet streams will be encrypted. We always move from a position of less security to a position of more security. We do that as bad people figure out how to exploit the weaknesses in the fabric of our network.
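The per-user metering this commenter describes can be sketched as a token bucket keyed on source IP. The class name, rates, and caps below are hypothetical: instantaneous rate is policed by the bucket, aggregate use by a running byte count, and neither check needs to see inside the encrypted packets.

```python
from collections import defaultdict

class PerIpThrottle:
    """Sketch of the per-user policy above: meter each source IP with a
    token bucket (instantaneous rate) plus a running byte count
    (aggregate use); once either limit is hit, drop everything from
    that IP, regardless of what the encrypted packets contain."""
    def __init__(self, rate_bps, burst_bytes, monthly_cap_bytes):
        self.rate = rate_bps
        self.burst = burst_bytes
        self.cap = monthly_cap_bytes
        self.tokens = defaultdict(lambda: burst_bytes)
        self.total = defaultdict(int)
        self.last = defaultdict(float)

    def allow(self, src_ip, size_bytes, now):
        # refill tokens for the time elapsed since this IP's last packet
        elapsed = now - self.last[src_ip]
        self.last[src_ip] = now
        self.tokens[src_ip] = min(self.burst,
                                  self.tokens[src_ip] + elapsed * self.rate)
        if self.total[src_ip] >= self.cap:
            return False                    # aggregate cap exceeded
        if self.tokens[src_ip] < size_bytes:
            return False                    # instantaneous rate exceeded
        self.tokens[src_ip] -= size_bytes
        self.total[src_ip] += size_bytes
        return True

t = PerIpThrottle(rate_bps=125_000, burst_bytes=10_000, monthly_cap_bytes=10**9)
print(t.allow("10.0.0.5", 1500, now=0.0))   # True: within the burst
print(t.allow("10.0.0.5", 9000, now=0.0))   # False: burst exhausted
```

A real middlebox would age out idle entries and measure over rolling windows, but the shape of the policy is the same: it discriminates by user, not by traffic type.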

  11. P2P is a resource hog. CPU and backplane resources…not bandwidth.

    An ISP’s business model is built on patterns of use.

    The ISP can spend 1/2 a million on a 6500 or 7600 chassis fully loaded. That’s a 3.2GB/second backplane.

    The ISP spends another 1/2 a million on a CMTS or ADSL switch.

    The ISP spends another 1/4 million on OPERATING costs for that gear.

    When 5% of the users are consuming 90% of the CPU and backplane, then 95% of the users are inconvenienced. The ISP has a responsibility, to customers AND investors…not JUST customers…and building infrastructure to suit the 24x7x365 file traders REALLY ISN’T ECONOMICALLY FEASIBLE.

    Protecting LOCAL infrastructure should not be mistaken for or confused with Net Neutrality. It’s the Tier 1 carriers like Verizon and AT&T who are blathering about tiered services, the ISP is a customer of the Tier 1 carrier.

    Local infrastructure is often hundreds of miles from the tier 1 carrier interchanges.
    Local infrastructure is RARELY the 1/2 million dollar big honking router…more like a 2948 or 3500.
    and thousands of simultaneous incoming P2P download requests will QUICKLY overwhelm a 2948.

    Your arguments regarding P2P throttling and encrypting, while laudable, are not economically feasible for the average cable company. They still wind up with 5% of the local users consuming 95% of the local resources, meanwhile gamers, e-mailers and web surfers suffer as the ISP’s local office struggles to maintain reasonable service, but can’t justify a million-and-a-half one-time investment to upgrade LOCAL infrastructure…a customer base of 2000 can’t support that kind of investment and still turn a profit.

    The stock holders would HANG them for spending that kind of money….

  12. I doubt that a local ISP could effectively modify the terms of use to throttle encrypted traffic, because it’s fairly easy to encrypt things so that their statistics are similar to unencrypted traffic (and the ISP doesn’t have a lot of cycles to do this analysis). But they could be just as effective by maintaining a list of gateways/proxies/whateveryoucallthem to which routing should not be particularly good.

    (And how bad can jitter get? Out here in the sticks on a bad day, ping times to my former ISP have been running anywhere between 55 and 15000 milliseconds. There are plenty of apps other than VOIP that have trouble with that…)

  13. A VPN can be used only to keep the contents of packets out of the hands of a specific ISP or group of ISPs. But somewhere you need either a private destination or a gateway of some kind to decode the encrypted packets.

    The local ISP would see packets with encrypted contents and could throttle all traffic of that type. All it would have to do is modify the terms of use to require that all traffic for a certain class of accounts be unencrypted, and then charge a premium for accounts that allow encryption.

    The local ISP would probably still give really poor service, because after all, your email and browser work OK, so the service must be working.

  14. Tim,

    The point of the encryption scheme I described is not just to obscure the data being communicated, but also to hide from the ISP the true destination of outgoing packets (and the true source of incoming ones). By routing all packets through a single intermediary, the user takes away the ISP’s ability to discriminate based on the address with which the user is communicating.

  15. Ed, these posts are useful. But do you really think that discrimination by address is subtle? It seems to me the most basic and prevalent means, no? And it’s one that encryption doesn’t have much bearing on.

  16. It’s not that network equipment (whether Nortel, Cisco, etc) is designed to cause jitter. You’re not going to find a checkbox for ‘Add jitter to port x’.

    However, that doesn’t mean you can’t do nasty things anyway. You can use equipment that is oversubscribed and minimize queue usage. This can cause problems for UDP or RTSP traffic (such as voice/video).

    On many types of routers, you can set CIR/BIR (Committed Information Rate/Burst Information Rate) on any number of packet-inspection criteria. This is certainly something that can be used for good or for evil.

    In a nutshell, any system that allows prioritization can be used maliciously: just jiggle a few settings and your router/switch/app switch will be using all those fancy QoS features for things the manufacturer never intended.

  17. An excellent topic for discussion – thanks for expanding on it. I use VoIP and have no illusion that Comcast will be nice to me in the long run. (I just need that handy Google fibre to my doorstep, then I won’t have to worry.) *Ahem*

    I realize that this is a naïve question, but how would an ISP or telco go about introducing jitter into users’ connections? That’s not the kind of thing Cisco equipment is designed to do, one would think (or hope).

  18. Plenty of gamers would be upset by latency and jitter.

    But, anyway, yes, there could be two tiers of undiscriminated service:
    1) Cheap and nasty
    2) High quality and expensive

    That’s fine and dandy. Nothing wrong with ISPs deliberately degrading one service to put a premium on another – as long as they’re upfront about it to their customers.

    The real issue is why on earth ISPs want to piss about inspecting packets, as if some packets are inherently more valuable than others – as if they wished to charge on content/communication type rather than bandwidth or QoS.

    The UK postal service did once (perhaps still) offer different rates according to whether letters were sealed or not – irrespective of whether they actually read them.

    If ISPs et al start putting a premium on encrypted content, but let unencrypted, easily recognisable traffic enjoy cheaper, but tiered QoS, then steganography isn’t really much of a solution either.

    If this movement is driven by efficiency-based cost benefits then it is inevitable. The question is whether revenue gains through Pareto optimisation are outweighed by inefficiencies introduced by traffic shaping.

    Let’s hope ISPs aren’t a herd of asses chasing carrots or this will be an expensive, thermodynamic folly that everyone will pay for.