
Re: FYI: DNSOPS presentation



Hi,

* Philip Homburg

> In your letter dated Tue, 20 Apr 2010 09:49:03 +0200 you wrote:
>> For a large provider, 0.078% of all users could very well mean
>> millions of users world-wide.

After sending my previous message I did the math on this statement, and
I realise it was an exaggeration, so I'd like to moderate «millions» to
«hundreds of thousands».  (Not that it changes any of my points.)

> I don't know if it would fly, but why not white-list ISPs that have their
> web-servers, MXen, pop and imap servers all listed in DNS with IPv6
> addresses? (I wonder how many ISPs would qualify at the moment)

Not many, I think.  Such a requirement would probably cause Google's
current whitelist to shrink considerably - Comcast, for instance, would
no longer be eligible.  Personally, I will not deploy a whitelist at all
due to the overhead of maintaining it; instead I'll stay IPv4-only
until my customers feel the client loss is low enough to deploy
dual-stack (with global visibility).
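
The qualification check Philip suggests could easily be automated.  A
rough sketch in Python (the hostnames below are made-up placeholders,
and a real check would query AAAA records directly instead of relying
on getaddrinfo()):

    import socket

    def has_ipv6_dns(hostname):
        """Return True if the name resolves to at least one IPv6 address."""
        try:
            return bool(socket.getaddrinfo(hostname, None, socket.AF_INET6))
        except socket.gaierror:
            return False

    # Hypothetical service names for one ISP; substitute the real ones.
    services = ["www.example.net", "mx1.example.net",
                "pop.example.net", "imap.example.net"]
    qualifies = all(has_ipv6_dns(h) for h in services)
    print("eligible for whitelisting" if qualifies else "not eligible")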

> The thing that IMHO went wrong here is that the first thing everybody
> does is disable neighbor discovery, and therefore also neighbor
> unreachability detection.

Are you talking about ICMPv6 ND here?  If so, that is not end-to-end;
it only works on a single link, much like IPv4 ARP.  It is end-to-end
connectivity that is broken, so I fail to understand what you feel could
be done better here.

> Even though it is perfectly possible to find out that a connection
> doesn't work, in practice nobody does anything with that option.

I think Windows XP did some form of reachability testing and would
disable the 6to4 interface entirely if it failed.  They removed it in
Vista, but I've heard they're now considering putting it back in.  But
Windows doesn't cause many problems anyway, since it appears their RFC
3484 implementation already implements section 2.7 of
draft-arifumi-6man-rfc3484-revise-02.  (You can argue that the Windows
implementation is the reference implementation, as it was Microsoft who
authored RFC 3484 to begin with.)

> To some extent that is also due to lack of standards. For example, if
> source address selection had an option to skip source addresses if
> the route to the destination is down, then a host with a broken IPv6
> configuration would fall back to just IPv4.

Actually, this is part of RFC 3484 already:

>    Rule 1:  Avoid unusable destinations.
>    If DB is known to be unreachable or if Source(DB) is undefined, then
>    prefer DA.  Similarly, if DA is known to be unreachable or if
>    Source(DA) is undefined, then prefer DB.

Applications will also fall back to IPv4 if the initial IPv6 connection
attempt fails.  However, for latency-sensitive applications that make
use of many short-lived connections (HTTP is a good example), the
connect timeout that triggers the fallback is so long that preferring
defective IPv6 over working IPv4 will essentially render the
application useless.
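
One way an application can protect itself is to keep trying IPv6 first,
but cap each connection attempt with a short timeout before moving on
to the next address.  A simplified sketch (the two-second value is just
an illustrative guess, not a recommendation):

    import socket

    def connect_with_fallback(host, port, per_attempt_timeout=2.0):
        # Try addresses in getaddrinfo() order (IPv6 first on a typical
        # dual-stack host), but cap each attempt so a broken IPv6 path
        # does not stall the application for minutes.
        last_error = None
        for family, socktype, proto, _canon, sockaddr in socket.getaddrinfo(
                host, port, type=socket.SOCK_STREAM):
            s = socket.socket(family, socktype, proto)
            s.settimeout(per_attempt_timeout)
            try:
                s.connect(sockaddr)
                s.settimeout(None)  # back to blocking mode for normal use
                return s
            except OSError as e:
                last_error = e
                s.close()
        raise last_error or OSError("no usable addresses for " + host)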

By the way:  A curious effect of the current RFC 3484 algorithm is that
if you have both default IPv4 and IPv6 routes, RFC 1918-based IPv4
addresses, and only link-local IPv6 addresses, IPv6 will be preferred.
I'm not sure if this situation is commonly found in the wild though.

Best regards,
-- 
Tore Anderson
Redpill Linpro AS - http://www.redpill-linpro.com/
Tel: +47 21 54 41 27