
Re: [RRG] Tunnel fragmentation/reassembly for RRG map-and-encaps architectures



    > From: Tony Li <tli@cisco.com>

    > A mapping cache is a fine and tractable object IF AND ONLY IF the
    > working set at that location is tractable.

Definitely a key architectural observation, although I'd rephrase it more
explicitly (this is no doubt what you meant with your abbreviated version)
as:

 "A mapping cache is a fine and tractable object IF AND ONLY IF the working
  set at that location is enough smaller than the full map that the savings
  are significantly larger than the overhead (complexity, etc) of having a
  cache."

So there are two questions. The first is, 'is this the case here?' The second
is, 'if not, is it practical to keep the full map?' Because if the answers
are 'no' and 'no', it's back to the drawing board....
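
To make the first question concrete, here's a toy back-of-envelope check of
the condition above; this is a minimal sketch, and every number in it is an
invented assumption, not a measurement:

    # Back-of-envelope check of the cache-tractability condition quoted
    # above. Every number here is an invented assumption, for illustration.

    FULL_MAP_ENTRIES = 10_000_000    # assumed size of the full mapping table
    WORKING_SET_ENTRIES = 50_000     # assumed working set at this ITR
    BYTES_PER_ENTRY = 64             # assumed memory cost per mapping entry

    full_map_cost = FULL_MAP_ENTRIES * BYTES_PER_ENTRY
    cache_cost = WORKING_SET_ENTRIES * BYTES_PER_ENTRY
    savings = full_map_cost - cache_cost

    # Crude model of the overhead of having a cache at all: fixed machinery
    # (miss handling, timers, eviction) plus a per-entry surcharge.
    overhead = 1_000_000 + WORKING_SET_ENTRIES * 16

    print(f"savings:  {savings:,} bytes")
    print(f"overhead: {overhead:,} bytes")
    print("cache wins" if savings > overhead else "keep the full map")

With those made-up numbers the savings dwarf the overhead; the interesting
cases are the locations where WORKING_SET_ENTRIES creeps toward
FULL_MAP_ENTRIES.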


So what about the first question? I think for a lot of ITRs (and maybe
almost all), the working set really will be enough smaller. (Although this is
just a guess, and could be way wrong. For instance, spam and port-scanning
seem to be from everywhere, to everywhere, and that will blow the caches.
Although since those are all incoming traffic, if we design the system so
that incoming requests carry the reverse mapping as an optimization, we'd be
OK in terms of load on the resolution system.)
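
To illustrate the 'carry the reverse mapping' optimization, here's a minimal
sketch of a tunnel endpoint that gleans the sender's mapping from incoming
encapsulated traffic; the names and the exact header contents are purely
hypothetical:

    # Sketch of gleaning the reverse mapping from incoming traffic, so
    # replies need no lookup in the resolution system. All names and
    # header fields are hypothetical, not from any actual spec.

    mapping_cache = {}    # identifier (EID) -> locator (RLOC)

    def decapsulate(outer_src_rloc, inner_src_eid, payload):
        # Incoming packet: opportunistically learn the sender's mapping
        # from the encapsulation header, instead of querying for it later.
        mapping_cache.setdefault(inner_src_eid, outer_src_rloc)
        return payload

    def encapsulate(dst_eid, payload):
        rloc = mapping_cache.get(dst_eid)
        if rloc is None:
            rloc = resolve(dst_eid)    # slow path: ask the mapping system
            mapping_cache[dst_eid] = rloc
        return (rloc, payload)

    def resolve(eid):
        # Placeholder for the query to the global resolution system.
        raise NotImplementedError

The point being that the incoming spam/scan traffic pre-loads the very
entries any replies need, so it never shows up as load on the resolution
system.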

Anyway, for one thing, 'normal' users at smaller sites, even when you
integrate across a number of users, are communicating with only a small
fraction of the endpoints in the Internet.

For another (and I think this is a more important factor), as the Internet
grows and the Web gains a lot more content in other languages, you'll see a
natural splitting of the user population into communities of interest;
English-speakers aren't going to be looking at a lot of sites in Mandarin,
and so on.

One thing that could throw a monkey wrench into this is if we see a lot of
cross-group hosting; i.e. web servers which contain content in many
languages, so that even if the user communities are disjoint, they may be
interacting with a single shared substrate of servers.

So except for the Googles of the world, maybe caching will work?


The obvious counter-point is going to be 'well, why doesn't caching work in
the routers, then?' I think that's because (especially in core routers) the
traffic aggregation as you go up the transmission hierarchy (think of it as
the same as going from surface roads up to high-speed limited-access
highways) naturally brings together a much bigger set of sources and
destinations. E.g. in the US there's a lot of 'through' traffic (cf. all the
razz-ma-tazz about eavesdropping on transit traffic from a non-US source to
a non-US destination).
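
Here's a toy simulation of that aggregation effect (all parameters invented):
a fixed-size LRU cache serving one community of destinations, versus the same
cache serving the mixed traffic of many communities:

    # Toy simulation: an LRU cache's miss rate climbs as traffic from
    # more communities is mixed together. All parameters are invented.
    import random
    from collections import OrderedDict

    def miss_rate(num_communities, dests_per_community, cache_size,
                  packets=100_000):
        cache = OrderedDict()    # destination -> None, in LRU order
        misses = 0
        for _ in range(packets):
            # Pick a community, then a destination within it, skewed
            # toward popular destinations (heavy-tailed, Pareto-ish).
            c = random.randrange(num_communities)
            rank = min(int(random.paretovariate(1.2)) - 1,
                       dests_per_community - 1)
            d = c * dests_per_community + rank
            if d in cache:
                cache.move_to_end(d)
            else:
                misses += 1
                cache[d] = None
                if len(cache) > cache_size:
                    cache.popitem(last=False)    # evict least recently used
        return misses / packets

    print("edge-ish, 1 community:   ", miss_rate(1, 10_000, 1_000))
    print("core-ish, 50 communities:", miss_rate(50, 10_000, 1_000))

Same cache size, same popularity curve within each community, but the mixed
('core') traffic misses far more often, and every miss is a lookup.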

Now, maybe I'm wrong, and caching doesn't work in edge routers either (which
are in the same places ITRs will be), in which case maybe we do have a
problem. Anyone have any insight?


    > a) I'm in favor of caches in some places and b) I'm against them in
    > others.

More detail? Remember, there are no 'core ITRs'...


    >>> You let your particular working set characteristics in your particular
    >>> location select which particular approach you use at that particular
    >>> location. All of the choices need to be integrated so that they result
    >>> in one clean mechanism.

    >> We have already made that conclusion.

    > No, what I saw was a total mess of mechanisms, with no coherency.

Is coherency in the eye of the beholder? Systems with radically different
operating points are going to look radically different.

As long as each operates well, they don't interfere with one another, and
there is a good algorithm/whatever for deciding which one to use in a given
situation, is it a problem if they are very dissimilar?
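
For what it's worth, the 'algorithm/whatever' might not need to be fancy; a
hypothetical threshold rule, with invented numbers, just to show the shape
of it:

    # Hypothetical rule for picking a mechanism at one location, driven
    # by locally observed behavior. Both thresholds are invented.
    def choose_mechanism(working_set_size, full_map_size, miss_rate):
        # If the working set is a big fraction of the full map, or misses
        # are frequent enough to hurt, the cache isn't buying anything:
        # just hold the whole map.
        if working_set_size > 0.5 * full_map_size or miss_rate > 0.05:
            return "full-map"
        return "cache"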

	Noel
