Re: Host-based may be the way to go, but network controls are necessary
> From: "Craig A. Huegen" <email@example.com>
> A host-based, host-only multihoming solution does not give the network
> the visibility into alternate paths to a destination, and therefore
> cannot apply policy to it.
> The only point of control a network operator would have in that
> environment is some kind of policy mechanism on hosts. I don't see it
> scaling to hundreds of thousands of nodes unless the central policy
> point is a function of the network.
Your message gives me a good launch pad for a rant I've been meaning to send
in for some months now. My previous message this morning alluded to it, and
here it is (in all its glory :-).
What you're asking for is, from the user's point of view, a subtle change to
the goals of "multi-homing", from reliability to control of paths. From the
architectural/engineering point of view, however, it's an enormous change,
one which impacts the entire architecture of the network.
Look, the whole fundamental concept of the datagram network (as far as
traffic path selection goes), from Baran on through IPv6, was "you give your
packets to the network and it gets them there as best it can". *Everything*
is impacted by this decision, not just the routing - it extends down to the
internetwork-level packet headers, and the very form of the addresses.
E.g. at the time IPng was being discussed, there were two advanced routing
architectures (which could have given you the kind of capability you're
asking for) under way. Both prepared requirements documents, available as
RFC's 1668 and 1753 (the former is a bit sketchy, alas). The people involved
in both will tell you that IPv6 didn't provide what they needed. (Don't
bother looking at these documents - here in the age of photons, I think
they are both outdated.)
If you really want to be able to have meaningful control over the paths that
traffic takes, and not have traffic just take whatever path the routing
algorithm decides is currently "the right thing" (usually based on a mechanism
that a) involves some sort of metric, and b) doesn't loop), you need to
completely re-architect, and then re-engineer, the entire internetwork layer.
First, you need to decide what you want your users to be able to do. Then
you have to design a routing system that will provide the users (and, to the
routing system, the corporate MIS people *are* 'the users') the information
they need. Then you have to figure out how the decisions the users make get
into the network. Only *then* can you say what will need to be in the packets.
And trust me, the whole picture is not going to look *anything* like
"routing tables with longest match and destination addresses in the packets".
If all you really want is to be able to connect a fairly large number of
sites to two different ISP's, and have traffic still flow if one connection
is down, fine, that's one set of requirements, one that the existing
architecture can sort of handle, with a certain number of painful changes
(like multiple addresses).
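To make the "multiple addresses" point concrete, here's a toy sketch (mine, not anything from an actual stack - the function names and addresses are all made up, using the TEST-NET ranges) of what a purely host-based scheme amounts to: the host holds one provider-assigned address per upstream ISP and just tries each source address in turn until one works.

```python
# Toy sketch of host-only multihoming with multiple provider-assigned
# addresses: try each upstream's source address until one has a working
# path. No network visibility, no policy - just failover.

def reach(destination, provider_addresses, path_is_up):
    """Return the first provider address with a working path to
    `destination`, or None if every upstream is down.

    `path_is_up(src, dst)` stands in for a real connectivity test,
    e.g. a TCP connect attempt with the socket bound to `src`."""
    for src in provider_addresses:
        if path_is_up(src, destination):
            return src
    return None

# Simulated outage: ISP A's path is down, ISP B's is up.
addrs = ["192.0.2.10", "198.51.100.10"]   # one address per ISP
up = {"198.51.100.10"}
chosen = reach("203.0.113.1", addrs, lambda s, d: s in up)
```

Note what's *not* here: the network never sees the alternate paths, so the operator has nothing to hang policy on - which is exactly the complaint at the top of this message.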
But if what you really want is much better control of where your traffic is
going, you've just bitten down on the entire RRG agenda - and a lot more,
because a routing architecture that can do that is going to be radical
enough that it's going to have ramifications everywhere.
Sure, you can probably clip off a small corner of the problem and add some
hacks which do a lot of what you want to do in that small corner (e.g. pick
the best exit gateway from your corporate network for this traffic). But
that's all it will be - just a quick kludge that fixes some specific little
goal - and, moreover, Yet One More Ugly Accretion that slowly kills the
entire architecture by the Death of 1000 Ugly Kludges.
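Just to show how small that "small corner" is, here's a toy sketch (all prefixes and gateway names hypothetical) of the exit-gateway hack: ordinary longest-prefix match, with a policy table bolted on top that overrides the choice for selected prefixes. A policy entry wins over a routing entry for the same prefix - and that's the whole trick.

```python
# Toy sketch of the "pick the best exit gateway" kludge: longest-prefix
# match over the routing table, with a bolted-on policy table whose
# entries override the routed choice.
import ipaddress

ROUTES = {"0.0.0.0/0": "gw-isp-a", "203.0.113.0/24": "gw-isp-b"}
POLICY = {"203.0.113.128/25": "gw-isp-a"}  # override for one /25

def exit_gateway(dst):
    """Longest-prefix match over routes plus policy overrides;
    for the same prefix, the policy entry wins."""
    table = {**ROUTES, **POLICY}          # policy shadows routes
    addr = ipaddress.ip_address(dst)
    matches = [p for p in table if addr in ipaddress.ip_network(p)]
    best = max(matches, key=lambda p: ipaddress.ip_network(p).prefixlen)
    return table[best]
```

It "works", in the sense of the previous paragraph: it fixes one specific little goal, and it's still just routing tables with longest match and destination addresses in the packets - none of the re-architected control the rant above is about.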
And, of course, that's exactly what this group will do - because doing the
Right Thing is impossible.