Tony Hain wrote:
> The fundamental goal of this draft is just misguided.
As I read it, the goal (fundamental or otherwise) of the draft is to
relax the requirement on end-site assignment size, from a fixed /48 to
"whatever works best for meeting customer needs, within whatever
existing policy framework is used."
If this is not how you read the fundamental goal, then please, "send text".
What do you believe the fundamental goal of the draft is?
> There is no math justifying the assertion that a /56 actually solves the
> collective set of goals. In particular, the scare tactic reference
> [ROUTE-SCALING] has no substantiating documentation, and even cursory
> thought about the problem would expose the point that moving bits from local
> to global routing will only make any scaling concerns worse.
I'm not sure I understand what you mean by "moving bits from local to
global routing."
Do you mean moving bits from inside end-site space to inside PA-aggregated
space?
The latter is not global; only the PA instances are presumed global.
(That is essentially the definition of PA.)
> [...] at the mic today about fragmentation due to having to get additional space
> are an artifact of RIR policy, and no matter what size the IAB/IETF
> recommends, the RIR policies will insist on small periodic blocks for
> 'efficiency' (read that as power to say no), which will result in
> fragmentation over time.
The RIR implementations of current policy reserve substantial blocks for
each PA assignment, specifically so that subsequent assignments can be
aggregated together.
I don't see how that can be characterized as fragmentation.
LIRs deaggregating things that *can* be aggregated is not the same as
fragmentation.
Fragmentation is assigning blocks that are not adjacent, and for which
aggregation is not possible.
> There are 281,474,976,710,656 /48's (minus special
> use prefixes), so claims that we will run out are absurd.
I don't think anyone has ever claimed we'll run out of /48's.
I believe the real concern is the rate at which PA blocks will be
exhausted and require new PA allocations when /48's, and only /48's, are
assigned, as compared to mixes of /48's and other lengths, including
/56's.
By my math, where Unicast space is 2000::/3, /48's within a /3 number at
2^45, or 35,184,372,088,832.
If the PA blocks are /32's, that means 2^29 /32's, or 536,870,912.
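As a sanity check, those two counts follow directly from the prefix
lengths; a minimal sketch in Python (the prefix lengths are the ones
stated above):

```python
# Prefix arithmetic for unicast space 2000::/3.
UNICAST = 3    # unicast space is a /3
SITE = 48      # one /48 per end site
PA = 32        # one /32 per PA block

sites_per_unicast = 2 ** (SITE - UNICAST)  # /48's within the /3: 2^45
pa_blocks = 2 ** (PA - UNICAST)            # /32's within the /3: 2^29

print(sites_per_unicast)  # 35184372088832
print(pa_blocks)          # 536870912
```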
The real concern is how many PA blocks are in the DFZ in any given year.
If we presume each new hardware generation supports a substantial
increase in TCAM capacity, say a 2^3 increase, and a generation is 3-5
years, then we can break down the upper limit on prefixes (i.e., PA
blocks) by year, starting with present limits (ballpark).
So, roughly speaking, let's presume every 5 years, we can get a 10-fold
increase in TCAM space.
And TCAM space today (give or take a factor of 2) is about 0.5M entries.
So, theoretically, FIBs could accommodate the full range of PA blocks in
15 years or so. (Optimistic view).
At the low end, let's presume a 2-fold increase in TCAM every 3 years.
That results in 30 years (pessimistic view).
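The 15- and 30-year figures fall out of solving for the number of years
of compound growth needed to cover all 2^29 PA blocks; a sketch, using
the ~0.5M starting point and the two growth rates assumed above:

```python
import math

# Years until FIB/TCAM capacity covers every possible /32 PA block,
# starting from roughly 0.5M entries today (figures from the text).
TCAM_TODAY = 0.5e6
PA_BLOCKS = 2 ** 29

growth_needed = PA_BLOCKS / TCAM_TODAY                # ~1074x

years_optimistic = 5 * math.log10(growth_needed)      # 10x every 5 years
years_pessimistic = 3 * math.log2(growth_needed)      # 2x every 3 years

print(round(years_optimistic))   # 15
print(round(years_pessimistic))  # 30
```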
Flip it around, and let's look at capacity in 10 years, when arguably
most sites that have IPv4 will also have IPv6.
Optimistic view, 50M PA blocks, pessimistic view, 4M-5M PA blocks.
At /48 per site, that's about 3.2 x 10^12 sites optimistic, or 3.2 x
10^11 sites pessimistic.
Half that to handle dual-homing, so pessimistic shows 160G sites,
optimistic 1600G sites.
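The 10-year projection above can be sketched the same way; the growth
rates, starting capacity, and dual-homing halving are the assumptions
stated in the text:

```python
TCAM_TODAY = 0.5e6
SITES_PER_PA = 2 ** 16        # /48's per /32, at 100% utilization

# PA-block capacity 10 years out, under each growth assumption.
pa_optimistic = TCAM_TODAY * 10 ** (10 / 5)    # ~50M blocks
pa_pessimistic = TCAM_TODAY * 2 ** (10 / 3)    # ~5M blocks

# End-site counts, halved to allow for dual-homing.
sites_optimistic = pa_optimistic * SITES_PER_PA / 2    # ~1600G sites
sites_pessimistic = pa_pessimistic * SITES_PER_PA / 2  # ~160G sites
```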
But, that is *only* if each PA block is 100% utilized. What does the HD
ratio give for number of /48's per /32?
My calculation applies the HD ratio of 0.86 to the 24 bits of /56
assignments within a /32, then divides by 2^8 to convert to
/48-equivalents: 2^(24*.86)/2^8 = 2^(12.64), or about 6400.
And *that* is about 10% of the previous values, i.e. pessimistic of 16G
sites, optimistic of 160G sites.
The pessimistic view shows an estimated capacity of about 2 sites per
person (based on roughly 8B people in 2018).
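Putting the HD-ratio discount together with the pessimistic 10-year view
gives the "2 sites per person" estimate; a sketch under the same
assumptions (0.86 HD ratio, ~5M PA blocks, ~8B people):

```python
HD_RATIO = 0.86

# /56 assignments occupy the 24 bits between /32 and /56; the HD ratio
# discounts that, and dividing by 2^8 converts to /48-equivalents.
per_pa = 2 ** (24 * HD_RATIO) / 2 ** 8   # 2^12.64, about 6400

# Pessimistic 10-year view: ~5M PA blocks, halved for dual-homing,
# against roughly 8 billion people.
sites = 5e6 * per_pa / 2
sites_per_person = sites / 8e9           # about 2
```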
And this is within the "factor of 2" give-or-take I noted at the outset.
That alone is reason to reconsider the implications of restricting
end-site policy to /48 only.
The issue isn't waste; the issue is density. To scale adequately in the
10-30 year time frame, we simply need to ensure that the density of
end-sites per PA block is at least an order of magnitude larger. Having
some, but not all, of the assignments be something smaller, without
insisting on either a single uniform size or exactly one alternative
prefix length, is sufficient to meet that need.
And *that* is exactly what 3177bis is proposing.
> What it means is that people need to use real math justifications to back
> up their claims, not just vague notions of a problem that will never exist.
Can you be specific about which problem you believe will never exist?
I'm always happy to use real math, and in particular for comparing
assumptions against conclusions.
I'm even happy to use real math to quantify problems, for purposes of
refining problem statements,
or for determining scaling factors or choices of scale (linear,
exponential, logarithmic, or other) when
comparing parameters of any problem space.