
[RRG] Single Host Granularity (SHG) with full database ITRs & Query Servers



Hi Xiaohu,

In "Re: [RRG] draft-fuller-lisp-alt-01.txt" you wrote:

> Hi Robin,
>
> As Tony said, the mapping system should be flexible so that it
> can support host granularity. If that statement is true in the
> future network, do you believe the default mapper is still
> suitable to support such a huge mapping database?
>
> Best wishes, Xiaohu Xu

In short, yes - with caching ITRs supported by local full-database
Query Servers and full-database ITRs.  A longer version follows.

  Best wishes

    - Robin


All the map-encap proposals (LISP-ALT, LISP-NERD, APT and Ivip) are
perfectly capable of slicing and dicing address space down to "host
granularity", meaning 1 IPv4 address or a /64 for IPv6.

However, the "host" in this terminology is misleading.  We are
discussing IP addresses, and a single IPv4 address is capable of
supporting arbitrary numbers of computers behind NAT.  A /64, of
course, is capable of supporting 2^64 hosts, each with its own
stable public address.

An entire business can run from a single IPv4 address, with its
mail server, web server, VPN tunnel endpoints etc. on the public
address and all the office machines behind NAT.

So SHG doesn't mean the EID or micronet is only for one device.

I think the questions are:

1 - Should we expect there to be a high demand for such small
    EIDs/micronets?

2 - Should we design the new architecture on the basis of optimising
    performance for larger micronets (shorter EID prefixes) at the
    expense of the performance if the system is ever used widely for
    "single host granularity" EIDs/micronets?

I think the answer to 1 is that we certainly should expect it.  Even
if we didn't, I would want to see extremely strong arguments to
support the introduction of a new architecture which wasn't well
suited to "Single Host Granularity" (SHG).  For instance:

A - Why this architecture is not expected to be used for SHG.

    This argument could take the form that the new architecture is
    transitional and a later architecture will handle SHG - but
    we want the new architecture to last as long as possible.

    Another line of reasoning might counter the position that
    SHG will be widely needed (for instance because usage remains
    stuck in IPv4 for the next few decades) by arguing that IPv6
    adoption, without end-users needing direct access to IPv4, is
    certainly going to happen "soon".  I don't accept such
    arguments, but I am sure they will continue to be made, as
    they have been for the last decade.

    Perhaps it could be argued that for some other reason, SHG won't
    be widely desired or needed for IPv4 or IPv6.

AND

B - That a very high level of benefit to the cost or performance of
    the new architecture is only available if the design decisions
    boost performance for larger micronets (shorter EID prefixes)
    in ways which unavoidably reduce the performance for SHG.


I think some people are uncomfortable with SHG because they feel it
is improper for single hosts, including cell-phones (increasingly
multifunction email, web, music and game devices), to cause a ripple
of activity in a million or more full database ITRs across the globe
just because the device moves from one ISP's radio network to that
of another.

Likewise, I think there is a concern to keep a lid on the number of
EIDs, to ensure the new architecture isn't swamped.

However, demand will be what it will be, and purposefully designing
an architecture which doesn't fit something like this will only
result in a mess.

None of us know how many end-users (however defined) will want their
networks, computers, devices, IPv6 light-switches etc. to use the
new kind of address space the map-encap scheme provides.

We are designing an architecture to cope with the largest possible
number of such end-users, and to support their desires and needs to
the greatest possible degree.  These desires and needs include:

*  Portability of the address space between ISPs. (The only
   way to avoid the pain, cost and disruption of renumbering -
   although one would think that renumbering from one IPv4
   address to another should be pretty easy.)

*  Multihoming (2 or more ETRs).

*  Traffic engineering for incoming traffic to come via
   a choice of ETRs, including for load sharing.  (Outgoing TE
   doesn't concern the map-encap scheme).

*  Ideally, real-time mobility: switching to another ETR or set of
   ETRs with no prior notice, and ideally no delay - in practice
   probably a few seconds - when the device, network etc. moves
   to another ISP's network.  This might mean using one ETR, then
   using both for a while, and later ditching the first to use
   only the second.  Or it could mean using one, then turning up
   on another ISP's network and using another ETR instead.
   (Sketched below.)
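
As a purely hypothetical sketch (in Python) of what such a mobility
event reduces to in a map-encap scheme - a mapping change, nothing
more.  Representing the mapping as a list of ETRs is an assumption
for illustration, not Ivip's actual single-ETR IPv4 format:

  # Hypothetical sketch: real-time mobility is just a mapping change.
  # The ETR-list representation is illustrative, not Ivip's format.

  mapping = {"192.0.2.1": ["198.51.100.7"]}   # micronet -> ETR(s)

  # Device turns up behind a second ISP: add the new ETR, keep the old.
  mapping["192.0.2.1"].append("203.0.113.9")

  # Later, ditch the first ETR and use only the second.
  mapping["192.0.2.1"].remove("198.51.100.7")

  print(mapping)   # {'192.0.2.1': ['203.0.113.9']}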

The new architecture needs to be as clean, open and extensible as
possible - because it will no doubt be called upon to do things we
can't imagine at present.

There's no extra cost or complexity for any of the current schemes
in supporting micronets/EIDs of a single IPv4 address or /64.

I think the fuss over "Single Host Granularity" stems from fear of
opening the floodgates to billions of individual people, at home and
with their cell-phones - leading to the 10^10 estimates for the
long-term number of EIDs/micronets.

I think we need an architecture which scales to very large numbers
of EIDs/micronets - as many as we can achieve, this side of a few to
10 billion.  The question of whether they are single IPv4 addresses
or /64 IPv6 prefixes isn't important, except that it is impossible
to fit more than about 2 or 3 billion EIDs/micronets in IPv4.


Then there is concern about how a "Default Mapper" (a full database
ITR and Query Server, according to the APT definition of the term,
where I think it was first used) is going to cope with mapping for
10^10 EIDs/micronets.  Here are two lines of argument about why it
will be OK.


Firstly, we have to make it OK.  A pure pull system (ALT alone) is
never going to work.  The new kind of address space would suck and
not be widely adopted.  That leaves pure push or push to some ITRs
and Query Servers, with caching ITRs doing the rest of the work.

Pure push (NERD - with every ITR having the full database) will
involve such high costs for ITRs and the mapping traffic they
require that ITRs will be few in number, and will have to carry high
loads - with more fuss getting the traffic packets to those ITRs.

I am convinced that the only way forward is to allow for a flexible
mix of caching ITRs (Ivip's ITRCs, and ITFHs in hosts), supported by
local full database Query Servers (QSDs in Ivip, Default Mappers in
APT), together with full database ITRs (ITRDs in Ivip, Default
Mappers in APT), placed wherever the end-user network operators and
ISPs think is best.
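
As a rough illustration of this hybrid - with purely hypothetical
names and structures, not taken from any of the actual proposals - a
caching ITR's lookup path might look like this in Python:

  # Hypothetical sketch of the hybrid model: a caching ITR (ITRC)
  # answers from its cache when it can, and otherwise queries a
  # nearby full database Query Server (QSD / Default Mapper).

  class QueryServer:
      """Holds the full mapping database (a QSD / Default Mapper)."""
      def __init__(self, full_database):
          self.db = full_database          # micronet -> ETR address

      def lookup(self, eid):
          return self.db.get(eid)

  class CachingITR:
      """Caches mappings, falling back to its local Query Server."""
      def __init__(self, query_server):
          self.qs = query_server
          self.cache = {}

      def map_eid(self, eid):
          if eid not in self.cache:
              # Cache miss: one query to the local QSD, not a
              # global flood of messages.
              self.cache[eid] = self.qs.lookup(eid)
          return self.cache[eid]

  qsd = QueryServer({"192.0.2.1": "203.0.113.9"})
  itr = CachingITR(qsd)
  print(itr.map_eid("192.0.2.1"))   # one QSD query, then cached
  print(itr.map_eid("192.0.2.1"))   # served from the cache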

Making the new architecture OK, over the next decade or two, with
very large numbers of EIDs/micronets will involve both good
engineering and a lot of expenditure - to make a great map-encap
scheme which provides address space with the low costs and little or
no loss of performance required for widespread adoption.

I think it will also involve some financial arrangement, firstly to
deter end-users from making mapping changes and secondly to ensure
they pay some or all of the cost of sending these changes out to a
million or more ITRDs and QSDs.

It is early days and I don't have a detailed plan for this.
However, TLDs and the many second level domains which need to be run
in a government regulated way (.com.au etc.) already involve
end-user payment for adding a domain.  If some such users wanted to
change the IP addresses of their nameservers every few days, or
every few minutes, the DNS registry industry would soon develop a
charging system to deter such actions and to collect the revenue to
pay for implementing these changes.

So to the extent that large numbers (millions and billions) of
separate mapping entries, and changes to them, are a burden on the
map-encap scheme, there needs to be a system by which the end-users
of these EIDs/micronets pay.

This need not be very expensive.

If there are a million ITRDs and QSDs, each of which must receive
the mapping update, and the update itself consists of only 12 bytes
(plus protocol overhead) - as it does with IPv4 Ivip - then this
involves delivering about 12 megabytes of data globally.  It
involves each ITRD and QSD storing about 12 bytes and doing some
processing of that data.

Ivip uses 12 bytes: the IPv4 micronet's start address, its length
and the IP address of its ETR.  IPv6 would be up to four times more
information.  LISP and APT involve more complex mapping information.
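
As a sketch of this arithmetic in Python - assuming, purely for
illustration, that the three fields are packed as 32-bit integers
(the real Ivip wire format may differ):

  # A minimal sketch of the 12-byte IPv4 mapping update described
  # above: start address, length and ETR address, each assumed here
  # to be a 32-bit field.

  import socket
  import struct

  def pack_update(start_ip, length, etr_ip):
      start = struct.unpack("!I", socket.inet_aton(start_ip))[0]
      etr = struct.unpack("!I", socket.inet_aton(etr_ip))[0]
      return struct.pack("!III", start, length, etr)

  update = pack_update("192.0.2.0", 8, "203.0.113.9")
  assert len(update) == 12          # 12 bytes per update

  # Pushing one update to a million ITRDs and QSDs:
  recipients = 10**6
  print(len(update) * recipients / 10**6, "MB globally")   # 12.0 MB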

I am thinking that the end-user's cost of changing the mapping will
be on the order of the expense of a short phone call today - say 10
to 50 cents.  That should pay, or help pay, for the pushing of the
data.


The second reason why it will be OK is that by the time the mapping
database grows past a few hundred million, it will be a lot more OK
to have such large (by 2008 standards) amounts of data being pushed
around the Net than it is today.

12 MB of data carriage on the Internet is not particularly expensive
now.  Costs within the network (not counting DSL costs, ISP
help-desk support for retail customers etc.) are generally well
below a cent a megabyte.  So in terms of data transmission, the cost
is low.  It will be much cheaper by the time the scheme is handling
hundreds of millions of mappings.  A well designed push system is
optimised for its job - it is not like changing a prefix in BGP and
waiting for all the routers to adjust to this change via a ripple of
chattering inter-peer messages.

Storage is an issue - assuming it is in RAM.  On a hard drive, it is
not such a problem.  12 bytes times 10^10 is 120 gigabytes - so each
ITR can easily store the database on disk with no significant
incremental cost.

RAM will probably continue to get cheaper.  In 1980 it was about
$10^4 per megabyte.  28 years later it is about $10^-1 a megabyte.
CPU power is cheap and getting cheaper too.

If there are a million full database ITRs and Query Servers, then
every new micronet in Ivip requires a global addition of about 12
megabytes of RAM.  This is about 50 cents at current retail prices.
So it might cost an end-user a few dollars to establish a new
micronet (or to split a current one into two), and then it might
cost 20 cents or so to change the mapping.  On this basis, the push
scheme and some of the cost of purchasing and running the ITRDs and
QSDs could be paid for, on a profitable basis, by the micronets'
end-users.
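
As back-of-envelope arithmetic for these figures - the prices are
the rough 2008 estimates from above, not authoritative:

  # Storage and RAM arithmetic for the figures above.  The RAM price
  # is an assumption in the spirit of "about $10^-1 a megabyte".

  ENTRY_BYTES = 12          # one IPv4 mapping entry
  NODES       = 10**6       # full database ITRDs and QSDs worldwide
  MICRONETS   = 10**10      # long-term upper estimate

  # Full database on disk at each node:
  print(ENTRY_BYTES * MICRONETS / 10**9, "GB per node")   # 120.0 GB

  # RAM added globally by one new micronet:
  added_mb = ENTRY_BYTES * NODES / 10**6
  print(added_mb, "MB of RAM worldwide")                  # 12.0 MB

  # At a few cents to ten cents per megabyte of RAM, that is
  # very roughly half a dollar to a dollar per new micronet.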

 - Robin

