
[RRG] Push mapping for ITRDs and QSCs



I am replying to Scott Brim's message 713 (Thoughts on the
RRG/Routing Space Problem) regarding push ITRs and anycast terminology.

Hi Scott,

You wrote, quoting me, quoting you:

>>> While core routing/forwarding is untouched by granularity,
>>> there will be more prefix mappings in ITRs.  There are
>>> several things that mitigate that.  First, if there is any
>>> "pull" at all in the mapping system, an ITR only has mappings
>>> it uses, so small sites that only connect to a few places can
>>> have small ITRs.  Second, an ITR need not cache everything if
>>> there is a mapping node nearby that will do the caching for
>>> it.  Third ... well, I just woke up with melatonin in my
>>> brain, so it isn't quite working 100% yet, but there are
>>> mitigation techniques in all the various schemes.
>>
>> Another mitigation technique is a full database (push) ITR
>> reducing the cost of handling every mapping update in the world
>> by updating its RIB with only those updates which are required
>> by the current traffic.
>
> So the full database is pushed to start with, and incremental
> updates follow based on observed traffic?  To start with, we
> should assume at least hundreds of millions of prefix mappings.
> Second, regarding the incremental updates, how is the incremental
> update triggered?

While I will contemplate more complex arrangements in the future,
such as push of some MABs' mapping and pull of others, for now, Ivip
involves a simple push of all mapping changes to all full database
ITRDs and QSDs (Query Servers).  This is a lot simpler than having
the mapping distribution system split up the data and fuss around
sending different things to different devices.

For IPv4 Ivip, a complete item of mapping information is only 12
bytes: micronet start and length, and the ITR's address.  Bandwidth
is cheap, even to a few hundred thousand or a million ITRDs and QSDs
- we might as well send the whole lot.  It will be a few months
before I can provide more concrete proposals on doing this securely
and robustly.  It would be a continual stream of updates, with each
Root Update Authorisation Server (RUAS), which is authoritative for
one or probably many MABs, sending out a complete set of updates
every second or so.  So each ITRD or QSD would ideally get the sum
of all RUASs' updates a second or two later.  By then, it would be
a pretty smooth stream of packets.
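For concreteness, here is a sketch of how a 12-byte IPv4 mapping
item (micronet start, length, ITR address) might be encoded.  The
field layout, byte order and 32-bit length field are my assumptions
for illustration - nothing here is a specified wire format:

```python
import socket
import struct

def pack_update(micronet_start, length, itr_address):
    """Encode one IPv4 mapping item: start address, length (in IPv4
    addresses) and the tunnel endpoint address - 12 bytes in total."""
    return (socket.inet_aton(micronet_start)   # 4 bytes: micronet start
            + struct.pack("!I", length)        # 4 bytes: micronet length
            + socket.inet_aton(itr_address))   # 4 bytes: ITR address

update = pack_update("203.0.113.0", 16, "198.51.100.1")
assert len(update) == 12
```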


> Does the ITR ask for an update when it notices
> it is sending packets to a prefix for which it has a stale
> mapping entry?  If so, how is this different from an LRU cache?

No.  An ITRD gets the full update stream.  So does a QSD.

Caching ITRs - ITRCs and ITFHs (Ingress Tunnel Function in Hosts) -
make queries to a local QSD, perhaps via one or more levels of
caching query server (QSC).  "Local" means ideally in the same
network, or at least in a network which is close and well connected.
People wouldn't allow unauthorised access to their query servers,
so a network with no QSD of its own would need to arrange access to
the QSDs of some nearby network.  There should, of course, be
alternative arrangements in case one QSD is unreachable.
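A caching ITRC's query path might look like this toy sketch, where
a dictionary stands in for the local QSD and the names are mine:

```python
from functools import lru_cache

# Toy stand-in for the local QSD's database (example addresses).
QSD_DB = {"203.0.113.0": "198.51.100.1"}
queries_sent = 0

@lru_cache(maxsize=10_000)
def query_mapping(micronet_start):
    """An ITRC's cached lookup: only cache misses reach the QSD."""
    global queries_sent
    queries_sent += 1                  # count real queries to the QSD
    return QSD_DB.get(micronet_start)

query_mapping("203.0.113.0")
query_mapping("203.0.113.0")           # second lookup served from cache
assert queries_sent == 1
```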

By the time Ivip (or whatever does something like this) is operating
with 10 million micronets, bandwidth will be cheaper than it is
today.  Ignoring mobility for a moment, if there are 10M micronets,
each changing their mapping once a month, we can easily calculate
the raw data rate of IPv4 updates: about 3.86 updates a second.
This is a trickle of data on average: roughly 46 bytes a second,
not counting protocol overhead.
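The arithmetic checks out directly (numbers from the text above;
the 30-day month is my assumption):

```python
micronets = 10_000_000
changes_per_month = 1                 # one mapping change per micronet
seconds_per_month = 30 * 24 * 3600    # ~2.59 million seconds

updates_per_second = micronets * changes_per_month / seconds_per_month
bytes_per_second = updates_per_second * 12   # 12 bytes per IPv4 item

print(f"{updates_per_second:.2f} updates/s, {bytes_per_second:.1f} bytes/s")
# -> 3.86 updates/s, 46.3 bytes/s
```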

By the time we get to 250 million micronets, bandwidth will be
cheaper still.

Also, this flow of data fans out through some redundant tree-like
system, not unlike multicast.  So if there are 50,000 ITRDs and QSDs
in Australia, it is not like they each have to get their own feed
over the fibres which cross the Pacific or lead north to Asia.
Assuming all the RUASes were outside Australia, only a handful of
redundant streams would enter Australia and these would be
"replicated" to serve all these ITRDs.

Many multihomed networks wouldn't need to change their mapping from
one month to the next, unless they were doing traffic engineering,
and most of
the small ones wouldn't be fussed with that, I think.

Using Ivip to support mobility would introduce many more changes.
Assuming the mobile device was a laptop or cellphone, there would be
a mapping change every time the device came to prefer one access
network (such as an ISP which operates one or more adjacent WiFi
hotspots) over the one it was using (such as an ISP which provides
Net access via a 3G cellphone network).  So there is not necessarily
a mapping change every time a new 3G base station or WiFi access
point is used.

I think there needs to be some kind of charge for those who change
their mapping every few minutes, since, with a million ITRDs and
ITRCs, each such change generates 12 megabytes (not counting
protocol overhead) of traffic to them all, collectively.
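That figure follows directly from the numbers above (assumptions as
in the text):

```python
receivers = 1_000_000      # full-database ITRDs plus querying ITRCs
item_size = 12             # bytes per IPv4 mapping item, no overhead

bytes_per_change = receivers * item_size
assert bytes_per_change == 12_000_000   # 12 megabytes per mapping change
```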

There's plenty of scope for dealing with data rates like this in the
modern world.  I think LISP and the other proposals are mistaken in
assuming that push involves impractically large traffic volumes.

What I wrote about not updating the FIB for every update which
arrives in an ITRD is, firstly, to save computing resources;
secondly, to reduce RAM requirements in the RIB; and thirdly, to
avoid halting the FIB's forwarding operations every time an update
comes in - which might be a problem with some FIB technologies.
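A toy sketch of this lazy-FIB idea, as I understand it (the data
structures and names are mine, not from any draft):

```python
rib = {}   # full mapping table, updated by the push feed
fib = {}   # fast forwarding path, holds only mappings in active use

def apply_update(micronet, tunnel_addr):
    """Apply a pushed update: always to the RIB, but touch the FIB
    only if this micronet is already carrying traffic."""
    rib[micronet] = tunnel_addr
    if micronet in fib:
        fib[micronet] = tunnel_addr    # refresh only active entries

def forward(micronet):
    """First packet to a micronet pulls its mapping from RIB to FIB."""
    if micronet not in fib:
        fib[micronet] = rib[micronet]
    return fib[micronet]

apply_update("203.0.113.0", "198.51.100.1")
assert forward("203.0.113.0") == "198.51.100.1"
apply_update("203.0.113.0", "198.51.100.2")    # active entry refreshed
assert forward("203.0.113.0") == "198.51.100.2"
```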


> Does some upstream router notice, and tell the mapping system to
> push an update?  If so, why bother when the ITR itself knows best
> what it needs?

Ivip as currently described enables people to place ITRDs, ITRCs and
ITFHs wherever they like (not behind NAT) along with QSDs and QSCs
to support them.  I won't repeat what I wrote in the ID:

  http://tools.ietf.org/html/draft-whittle-ivip-arch-00

pages 15, 16 and 38 to 42, and, for examples of ITR deployment,
pages 71 to 76.


>> A MAB is advertised by many ITRs - I think "anycast" is an
>> appropriate term for this:
>
> The appropriate term is "routing" :-).

LISP Proxy Tunnel Routers are described in:

  http://www.1-4-5.net/~dmm/draft-lewis-lisp-interworking-00.txt

as:

  ... the same address might be announced by multiple PTRs in
  order to share the traffic by using the IP Anycast technique.
                                             -------

> Common use of anycast overwhelmingly refers to routing and
> forwarding to an endpoint.

I discussed anycast terminology in the recent thread "LISP and IP
Interworking - Anycast PTRs == Ivip".  My conclusion in

  http://psg.com/lists/rrg/2007/msg00683.html

was:

   I think what we are doing in Ivip and LISP-PTRs is close enough
   to "anycast" as it is generally understood, and distinct enough
   from everything else, that it is best to adapt the currently
   rather loose and informal definition of "anycast" to include what
   we are doing.

Please see that message for references to RFCs and a BCP.


>> But that doesn't require any new distinction beyond the fact
>> that there would be some list or database in the ITR-ETR system
>> listing all the MABs.
>
> This is not required either.

I think ITRs need to make a clear distinction between two kinds of
destination addresses in traffic packets: those they ignore (for
the purposes of tunneling to an ETR), and those in ranges where the
packet can only be delivered to the destination host by some ITR
using mapping data to tunnel it to the correct ETR.

Maybe not every ITR handles every MAB, but assuming there is a
single global map&encaps / ITR-ETR system, this system will have at
its core a list of which areas of the address space are MABs.
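A minimal sketch of that distinction, using Python's ipaddress
module (the MAB list here is purely illustrative - real ITRs would
learn it from the mapping distribution system):

```python
import ipaddress

# Hypothetical list of Mapped Address Blocks (MABs).
MABS = [ipaddress.ip_network("203.0.113.0/24"),
        ipaddress.ip_network("198.51.100.0/24")]

def needs_encapsulation(dst):
    """True if dst falls in a MAB, so the packet must be tunnelled
    to the correct ETR; False means the ITR ignores the packet."""
    addr = ipaddress.ip_address(dst)
    return any(addr in net for net in MABS)

assert needs_encapsulation("203.0.113.5")        # inside a MAB
assert not needs_encapsulation("192.0.2.1")      # ordinary address
```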

 - Robin

--
to unsubscribe send a message to rrg-request@psg.com with the
word 'unsubscribe' in a single line as the message text body.
archive: <http://psg.com/lists/rrg/> & ftp://psg.com/pub/lists/rrg