
Re: Implications of v6 on application level rate limiting...



>> And please please please make this aggregation policy case-based, or never aggregate above /64. There are a lot of ISPs out there who have different products with very different address plans which are all in the same /32 and can be quite close together, with a default policy of /56 for broadband residential, /48 for business and /64 for mobile.
>> So what seems to be a good and decent aggregation for DSL can all of a sudden create huge problems, as you unintentionally block or rate-limit 65,000 mobile users from that same ISP.
> 
> I fear the same question will arise for people running spam black lists.
> 
> The space is so large that the allocation policies can vary massively from one /32 to the next. At one extreme an LIR with a /32 need not even be a provider and may allocate blocks to separate providers, so aggregating at the /32 would be silly as the LIR has no control over the end users at all. In other cases millions of separate users may have a single device IP allocated on a /127 PPP link.
> 
> If it is any help, as an ISP that has been doing IPv6 on DSL for 7 years, we allocate a /48 to each distinct customer regardless of how many lines they have. We give them control over what blocks (down to /64) are allocated to each line or lines as they need.
> 
> Of course you could whois the requester's IPv6 address to see the block size allocated to the requesting end user. But the whois server providing that information might rate limit you!!!
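
(As an aside, and purely as a sketch of that whois approach: this assumes the plain `whois` command-line client is available and that the answer contains a RIPE-style `inet6num:` line; other registries label the field differently, so treat it as a hint only.)

  # Rough sketch: ask whois which registered block covers an IPv6 address.
  # Assumes the standard "whois" CLI and a RIPE-style "inet6num:" line in
  # the answer; other RIRs use different field names.
  import subprocess

  def allocated_block(addr):
      out = subprocess.run(["whois", addr], capture_output=True, text=True).stdout
      for line in out.splitlines():
          if line.lower().startswith("inet6num:"):
              return line.split(":", 1)[1].strip()
      return None

  print(allocated_block("2001:db8::1"))  # documentation prefix, used here as an example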

I made a suggestion on one of the RIPE lists a couple of months ago about how we might fix that; details are here:

http://www.ripe.net/ripe/maillists/archives/ipv6-wg/2009/msg00142.html

Now this was just an idea I had, and either people don't find it important enough or they think it's stupid; as I haven't seen much response to it, I fear the first is true. Or maybe I just missed it and everybody running blacklists has found a way to safely predict which blocks can be easily aggregated and which can't.

I had a discussion with some people about this, and one of the suggestions was that the IETF might be a more suitable place to try and fix this, as it has global reach and you don't have to go through the exercise five times, once for each individual registry. Somebody even suggested we might fix this by using DNS as a publication method.
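
To make that DNS idea a little more concrete (this is just my own sketch, nothing standardised: it assumes a made-up "_aggr" TXT record that the operator publishes under its ip6.arpa reverse zone, and it uses the dnspython library), a blocklist operator could simply probe a few common boundaries, most specific first:

  # Purely illustrative: look for a hypothetical "_aggr" TXT record that an
  # operator could publish under its ip6.arpa reverse zone to announce the
  # aggregation boundary plus a contact address. Uses dnspython.
  import ipaddress
  import dns.resolver   # pip install dnspython

  def aggregation_hint(addr):
      # reverse_pointer gives 32 nibble labels followed by "ip6.arpa"
      nibbles = ipaddress.IPv6Address(addr).reverse_pointer.split(".")
      # Try the /64, /56, /48 and /32 boundaries, most specific first.
      for prefixlen in (64, 56, 48, 32):
          zone = ".".join(nibbles[32 - prefixlen // 4:])
          try:
              answer = dns.resolver.resolve("_aggr." + zone, "TXT")
              return zone, answer[0].to_text()
          except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
              continue
      return None

  print(aggregation_hint("2001:db8:1234:5678::1"))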

I think the base requirement is to find a way in which operators can easily publish some hints on where aggregation boundaries are, possibly together with some contact information on who to notify about issues.

This system:

- has to be easy for operators to fill and update
- has to be even easier to use for people trying to block/limit IPv6 address blocks (see the sketch after this list)
- needs a well-documented API for automatic retrieval
- preferably uses a single method or entry point to get data for all RIRs
- needs either a distributed architecture or a caching mechanism to handle the load
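
As a rough idea of the consumer side (again just a sketch; the published prefix length stands in for whatever retrieval mechanism we end up with, be it DNS, whois or an HTTP API): once you know the boundary, the blocklist operator lists exactly one customer-sized prefix, no more and no less.

  # Sketch of the consumer side: collapse one offending address onto the
  # customer's whole allocation instead of listing a single address out of
  # the 2^80 a /48 holds. The prefix length is assumed to come from whatever
  # publication mechanism (DNS, whois, HTTP API) gets built.
  import ipaddress

  def block_for(abuse_addr, published_prefixlen):
      return ipaddress.IPv6Network((abuse_addr, published_prefixlen), strict=False)

  # If the operator publishes a /48 boundary for this block:
  print(block_for("2001:db8:1234:5678::4321", 48))   # 2001:db8:1234::/48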

Once this is in place it can benefit both sides: you don't unintentionally block a whole lot of users, and at the same time you prevent viruses/trojans/zombies from address-hopping to try and circumvent blocklists. I can also see some gain from an operational perspective: because the whole customer is shut down in one go instead of only a handful of his million billion addresses, it's much easier to identify the problem.

MarcoH