Re: geo short vs long term? [Re: Geo pros and cons]
On Thursday, Apr 3, 2003, at 22:09 Europe/Amsterdam, Tony Li wrote:
- The Internet is continually growing at an exponential rate.
Most people seem to peg the growth rate at 100% per year
currently. The exact number is not an issue.
- In the past, we've estimated that 10% of all sites would
multi-home. Let's assume a constant rate of 10% of the
world is an exception to the default aggregation rules
that we pick.
- From the above two, we can reason that our exception rate
is going to continue to grow exponentially. Note that
the rate of absolute growth is more of an issue than the relative rate.
- Moore's law for memory suggests that memory sizes will
double about every two years. However, memory speeds will
not keep up.
So far so good.
Am I missing something here? I always thought looking up routes scales
as O(log n). If memory size isn't a problem, you could even use a
2^45-element array and make route lookups O(1).
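A toy sketch of that distinction (not a real longest-prefix-match engine; the prefixes, next-hop names, and the 12-bit block size are made up for illustration): binary search over a sorted prefix list costs O(log n) per lookup, while indexing a flat table by the top address bits is O(1), at the price of memory.

```python
import bisect

# Hypothetical routing table: (prefix_start, next_hop), sorted by start.
routes = sorted([(0x2000, "hop-A"), (0x3000, "hop-B"), (0x4000, "hop-C")])
starts = [r[0] for r in routes]

def lookup_logn(addr):
    """Find the last route whose start <= addr: O(log n) per lookup."""
    i = bisect.bisect_right(starts, addr) - 1
    return routes[i][1] if i >= 0 else None

# O(1) alternative: one next-hop slot per address block, indexed directly
# by the top bits (a 2^45-entry array, as in the text, is the same idea
# writ large: constant time, huge memory).
table = {}  # stand-in for a flat array, keyed by the top bits
for start, hop in routes:
    table[start >> 12] = hop

def lookup_o1(addr):
    return table.get(addr >> 12)

print(lookup_logn(0x3456))  # hop-B
print(lookup_o1(0x3456))    # hop-B
```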
- Packet lookups are a function of memory bandwidth, so to
sustain Internet bandwidth growth of 100% per year, we need
to also increase memory bandwidth by about 100% per year.
Using bigger, slower memories is not a realistic option.
Not for packet forwarding. The problem is processing the routing
updates: the number of updates scales linearly with the number of
routes, but each update requires a route lookup and possibly some data
structure manipulation as a result, so the total work scales as
O(n log n), which is not good.
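A quick back-of-the-envelope check of that growth (the 100,000-route starting point and yearly doubling are assumptions, not figures from the thread): if updates scale linearly with n routes and each costs an O(log n) lookup, the total update workload grows as n log n, slightly faster than the table itself.

```python
import math

for years in range(0, 7, 2):
    n = 100_000 * 2 ** years   # table size, assuming 100% growth per year
    work = n * math.log2(n)    # relative update-processing workload
    print(f"year {years}: routes={n:>9,} work={work:,.0f}")
```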
- Thus, the routing table really needs to be constrained to
grow at about Moore's law for memory.
Unless we can make sure the growth curves flatten out before this
happens. If we assume 10% of the population will multihome, there is an
automatic limit of around 1G multihomers by 2050. However, one person
may want to multihome several times by then...
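The arithmetic behind that limit, spelled out (the 10-billion world population figure for 2050 is my assumption, roughly consistent with common projections, not stated in the text):

```python
population_2050 = 10_000_000_000   # assumed rough projection for 2050
multihoming_share = 0.10           # the 10% figure from the thread
multihomers = int(population_2050 * multihoming_share)
print(f"{multihomers:,}")          # about 1G multihomers
```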
- If the exceptions are growing at about 100% per year, and
the memories are growing at about 100% every TWO years, then
regardless of the starting point, the exceptions will overtake
the available memory.
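The quoted point can be checked numerically (the head-start ratios below are arbitrary illustrations): with exceptions doubling yearly and memory doubling every two years, any head start memory has is eventually erased, and each 1000x of extra head start only buys about 20 more years.

```python
def years_until_overtake(memory_headstart):
    """memory_headstart: how many times larger memory is than the
    exception table at year 0 (a hypothetical starting ratio)."""
    exceptions, memory = 1.0, float(memory_headstart)
    year = 0
    while exceptions <= memory:
        year += 1
        exceptions *= 2.0        # 100% growth per year
        memory *= 2.0 ** 0.5     # 100% growth per TWO years
    return year

for head in (10, 1_000, 1_000_000):
    print(head, years_until_overtake(head))
```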
Or we introduce another variable to bend the curve downwards. By no
longer keeping a copy of the full global routing table in every
individual router, but distributing it over a large number of routers,
we can make sure we don't run out of memory (or CPU power) in the near
future.
- Therefore, we must find some mechanism that prevents the
exceptions from growing at 100% per year. In short, the
number of longer prefixes that are injected into routing
cannot be a constant fraction of the number of sites that exist.
Agree. But if we can make multihomers connect to their ISPs and make
ISPs interconnect within regions, multihoming no longer causes
exceptions. We don't even need all multihomers and all ISPs to conform
to this, just enough to keep us on the good side of Moore's law.
- Since everyone and their brother will want an exception
for anything that they want to do that is outside of the
norm, the norm MUST support almost every possible situation.
Multihoming, in particular, must not cause exceptions.
Even a constant percentage of multihomers must not cause
exceptions.
It's very nice to have cheap long-distance links and multihome to
distant ISPs, but what does that buy you if you aren't routable?
Economics is about making rational decisions, not about forcing huge
costs in one area to obtain slight savings in others.
- For reasons that I've already explained, the economics
of links in a geo system cause many sites to be exceptions.
Besides, if links from Jersey City to Palo Alto are so much cheaper
than across the Hudson, why not multihome to two ISPs in California and
be geo-aggregatable, rather than connect to Palo Alto and NYC and break
aggregation in the process?
- Therefore, geo addressing leads to a system that will not
scale for the long term.
The real reason geo won't scale in the long run is that at some point
the number of multihomers in a city becomes too large, and I don't
believe it is possible to do reasonable geographical aggregation within
a single city.
But let's not compare apples and oranges. I agree that geo aggregation
won't solve the long term problem. However, it does offer short term
relief and intermediate term disaster relief. If we can make every IPv6
enabled host use multiple addresses for every application within two
years, that would be much better, but I don't see this happening fast
enough that we can do without a short term solution.