Re: addition of TLV to locator ID or locator ID set
On Thu, 22 Sep 2005, Iljitsch van Beijnum wrote:
> Date: Thu, 22 Sep 2005 19:15:09 +0200
> From: Iljitsch van Beijnum <firstname.lastname@example.org>
> To: Jason Schiller <email@example.com>
> Cc: shim6-wg <firstname.lastname@example.org>
> Subject: Re: addition of TLV to locator ID or locator ID set
> On 22-sep-2005, at 18:14, Jason Schiller (email@example.com) wrote:
> > I think the biggest problem is doing something analogous to current
> > IPv4
> > BGP "best" path.
> I still don't know what you mean here. These days, 85% of the
> internet is reachable over either two or three hops. So "best" is
> largely meaningless here.
I'm not sure I agree that shortest AS path is mostly useless because most
of the Internet is only a few ASes away. I think people generally still
think it's better to transit two ASes instead of three.
> Also note that in BGP we only get to use the "best" path, while the
> shim allows using different paths in quick succession or at the same time.
BGP has many mechanisms to alter the loading of links. By default you get
shortest AS path inbound. You can get primary/backup across different
upstream ISPs using local pref, you can get primary/backup with a single
upstream using MED, or you can load share by splitting your announcements
across your links. You can have complex cases that combine these.
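To make the tie-breaking order concrete, here is a much-simplified sketch of the BGP best-path comparison being discussed (highest local pref, then shortest AS path, then lowest MED). Real BGP has more steps (origin, eBGP vs. iBGP, router ID, etc.); the path data below is invented for illustration.

```python
# Simplified model of three of the BGP best-path tie-breakers:
# highest local-pref wins, then shortest AS path, then lowest MED.

def best_path(paths):
    """Pick the most preferred path from a list of candidate paths."""
    return min(
        paths,
        # negate local_pref so that min() prefers the *highest* value;
        # AS-path length and MED are both "lower is better"
        key=lambda p: (-p["local_pref"], len(p["as_path"]), p["med"]),
    )

paths = [
    {"name": "via ISP1", "local_pref": 100, "as_path": [65001, 65003], "med": 0},
    {"name": "via ISP2", "local_pref": 100, "as_path": [65002, 65003], "med": 10},
]

# Both paths tie on local-pref and AS-path length, so the lower MED wins.
print(best_path(paths)["name"])  # → via ISP1
```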
You could imagine a customer with links to two ISPs, say ISP1 and ISP2,
where best path is used inbound and outbound between the customer
and both upstream ISPs. Also imagine the connection to ISP1 consists of a
pair of primary links and a pair of backup links while the links to ISP2
consist of a pair of primary links.
In this case the customer could announce the same IP block across all
links. The primary bundle to ISP1 would have a MED of 0. The secondary
bundle to ISP1 would have a MED of 10. The announcement to ISP2 would
have a MED of 0.
Sources closer to ISP1 will send traffic to the customer via
ISP1. Sources closer to ISP2 will send traffic to ISP2.
If all the links between the customer and ISP1 fail, traffic will roll over
to ISP2. Likewise if both links between ISP2 and the customer fail,
traffic will roll over to ISP1.
ISP1 will load traffic using round robin across the two primary
links. ISP1 will only use the secondary links if both primary links
fail. If both primary links to ISP1 fail, ISP1 will round robin across
the two secondary links.
ISP2 will round robin across its two links.
In the other direction, the customer would learn full routes across all
links. The primary link from ISP1 would have MED set to 0 (or left
default), the secondary link from ISP1 would have a MED set to 10. The
announcement from ISP2 would have a MED of 0 (default).
Outbound, destinations closer to ISP1 will be delivered across ISP1,
destinations closer to ISP2 will be delivered via ISP2. Equidistant
destinations will use shortest exit: where the source is closer to the
link to ISP1 they will use ISP1, and where the source is closer to the
link to ISP2 they will use ISP2.
If all of the links to one of the ISPs go down, the traffic will roll
over to the other ISP.
Traffic sent to ISP1 will round robin across the primary links. The
secondary links to ISP1 will only be used if both primary links to ISP1
fail, in which case traffic will be round robined across them.
Traffic sent to ISP2 will round robin across both links to ISP2.
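The inbound behavior described above can be modeled as a toy sketch: ISP1 sees four announcements (MED 0 on the primary pair, MED 10 on the backup pair) and round-robins only across the lowest-MED links that are still up. The link names are invented; the MED values are the ones from the example.

```python
# Toy model of MED-driven primary/backup link selection with round robin.
from itertools import cycle

def usable_links(links):
    """Return the up links carrying the lowest MED: the MED 0 primaries
    when any are up, falling back to the MED 10 backups otherwise."""
    up = [l for l in links if l["up"]]
    if not up:
        return []
    best_med = min(l["med"] for l in up)
    return [l for l in up if l["med"] == best_med]

isp1_links = [
    {"name": "primary-1", "med": 0,  "up": True},
    {"name": "primary-2", "med": 0,  "up": True},
    {"name": "backup-1",  "med": 10, "up": True},
    {"name": "backup-2",  "med": 10, "up": True},
]

rr = cycle(usable_links(isp1_links))
# Round robin across the two primaries while they are up:
print([next(rr)["name"] for _ in range(4)])
# → ['primary-1', 'primary-2', 'primary-1', 'primary-2']

for l in isp1_links[:2]:   # both primary links fail...
    l["up"] = False
rr = cycle(usable_links(isp1_links))
print([next(rr)["name"] for _ in range(2)])
# → ['backup-1', 'backup-2']   ...traffic rolls over to the backup pair
```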
How do you recreate this type of traffic delivery with shim6 (this is
just one example)?
> > At a glance this seems to have all of the
> > same problems that path failure detection has, with some additional
> > problems thrown in because you may have to test multiple paths to
> > make a
> comparison instead of only one.
> Testing a large number of paths is exactly the situation we're
> desperately trying to avoid. _Especially_ when there is no failure in
> the first place. That's also the reason why the current assumption is
> that we don't start rewriting addresses until there is a failure.
> However, if there is a good case for doing this, we can reconsider.
> But in this case, it becomes even more important that we don't add
> overhead to packets.
I believe shim6 needs to support something analogous to the current IPv4
traffic engineering. Many content providers have expressed that they
currently offer no IPv6 content because they require multihoming and
traffic engineering.
> An interesting option would be for a large service to publish
> addresses for intermediate boxes in the DNS. When these intermediate
> boxes then receive a TCP SYN, they first try to set up shim state
> toward the _real_ addresses for the service (at which time
> sophisticated load balancing decisions can be made) and then the
> initiator repeats the SYN.
We can certainly move the shim function from the host to some
network-level device: router, traffic engineering server, etc.
The problem is that the current traffic engineering in IPv4 is done at the
network level based on lots of routing state. Shim6 is between two end
hosts with no routing state. Shim6 is not site multihoming, it is host
multihoming.
How do you add network-wide traffic engineering preferences to the
host-to-host shim6 solution?
It seems to me there are three approaches:
1. Somehow get the source end system to sort the locators in the correct
order by giving it additional information and having it do the forwarding.
2. Let the end systems choose whatever locator they like and allow the
routers to record locator set information. Allow routers to recognize a
destination address as one out of the locator set, and replace it with a
different destination address if there is a different locator which is
better (or at least forward towards a different address if a different
locator is better).
3. Let the routers reach into the locator set exchange and add additional
information or modify the locator set exchange in some way.
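Approach 1 above could be sketched roughly as follows: the host sorts the peer's locator set using preference values pushed down from the site's network. The field names (`pref`, `weight`) and the policy-distribution mechanism are hypothetical, not taken from any shim6 specification; the point is only that a simple sort recovers a primary/backup ordering.

```python
# Sketch of approach 1: the host orders its peer-locator set according
# to site-supplied preferences (hypothetical policy format).

def order_locators(locators, site_policy):
    """Sort locators so the site's preferred exits come first:
    lower pref value = more preferred, ties broken by higher weight."""
    def key(loc):
        # unknown prefixes sort last (pref 255, weight 0)
        pol = site_policy.get(loc["prefix"], {"pref": 255, "weight": 0})
        return (pol["pref"], -pol["weight"])
    return sorted(locators, key=key)

locators = [
    {"addr": "2001:db8:2::1", "prefix": "2001:db8:2::/48"},  # via ISP2
    {"addr": "2001:db8:1::1", "prefix": "2001:db8:1::/48"},  # via ISP1
]
site_policy = {
    "2001:db8:1::/48": {"pref": 0,  "weight": 10},  # primary: ISP1
    "2001:db8:2::/48": {"pref": 10, "weight": 10},  # backup: ISP2
}

print([l["addr"] for l in order_locators(locators, site_policy)])
# → ['2001:db8:1::1', '2001:db8:2::1']   ISP1's locator sorts first
```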
Mangling packets on the fly in routers seems a bit scary. I think that
this sort of solution was introduced early on and has been rejected. I
believe the same is true for moving the shim functionality to the routers;
providing partial "on-demand" routing tables to end hosts was also
considered and rejected.
> (PS. I'm glad I left UUNET before they started handing out @mci.com
> email addresses...)
The problem is one of trade-offs: if you reduce the routing state and
want equal functionality, then you trade routing state for
forwarding-plane detection. That's not a good or bad thing, it's a fact
of life. If the decision is to not solve the problem in the routing
state, then you have to solve it in the forwarding plane.