
RE: why i should like pibs



Actually, I did some pings and found that my 10ms RTT was unrealistic; 100ms
is more like it. At a 100ms RTT, the SNMP numbers for the configuration
example I described get much worse:

SNMP=((10000*8*498)/1540000)+((2*10000)/10)=2026 Seconds (over half an hour!)
COPS=((10000*8*(498/10))/1540000)=2.59 Seconds

Now we're talking roughly 800x faster...
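
If you want to check the arithmetic, here is a quick back-of-the-envelope
Python script for the model behind those numbers (the per-entry sizes, the T1
rate, and the two-round-trips-per-SET term come from my example quoted below;
the 10x OID-overhead reduction for COPS-PR is an assumption, not a
measurement):

T1_BPS = 1540000                 # link rate used in the equations (bits/sec)
ENTRIES = 10000                  # DiffServ filter+meter+action entries
SNMP_BYTES_PER_ENTRY = 498       # per-entry payload including OID overhead
COPS_BYTES_PER_ENTRY = 498 / 10  # assume ~10x less OID overhead in COPS-PR

def snmp_time(rtt_s):
    # serialized SETs: transmission time plus 2 round trips per entry
    transmission = ENTRIES * 8 * SNMP_BYTES_PER_ENTRY / T1_BPS
    round_trips = 2 * ENTRIES * rtt_s
    return transmission + round_trips

def cops_time():
    # one bulk install over TCP: transmission time only
    return ENTRIES * 8 * COPS_BYTES_PER_ENTRY / T1_BPS

print(f"SNMP @  10ms RTT: {snmp_time(0.010):7.0f} s")   # ~226 s
print(f"SNMP @ 100ms RTT: {snmp_time(0.100):7.0f} s")   # ~2026 s
print(f"COPS-PR:          {cops_time():7.2f} s")        # ~2.59 s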

> -----Original Message-----
> From: Juergen Schoenwaelder [mailto:schoenw@ibr.cs.tu-bs.de]
> Sent: Tuesday, March 19, 2002 2:40 AM
>
> Dave> Change 10000 DiffServ Filter+meter+action
> Dave> entries through a T1 line with a 10msec RTT for 48byte packets:
> Dave> SNMP=((10000*8*498)/1540000)+((2*10000)/100)=226 Seconds.
> Dave> COPS=((10000*8*(498/10))/1540000)=2.59 Seconds.  Is 100x
> Dave> improvement sufficiently better? And the multiple goes up the
> Dave> more data you xfer. ... adding bandwidth doesn't help, it's
> Dave> that dang RTT.
> 
> Even with SNMP over UDP, you can stream set requests as long as they
> are independent of each other (which I guess is true if you populate a
> DiffServ filter table). Since you did not take TCP acks into account,
> I would say the second term in the SNMP equation is 0 if you are smart
> enough. With SNMP over TCP, the difference also boils down to the
> reduced OID overhead in COPS-PR, which is probably not that much of an
> issue on the T1 link. Anyway, I agree with other folks that it is
> pointless to redo all the discussions of the past so I better stop
> this.
> 
[Dave] TCP ACKs piggyback on messages going the other way; TCP is quite
efficient that way. Actually, I think I was far too kind in my calculations.
Just the other day an operator told me that SNMP implementations can take
hours to update his big BGP tables. Clearly, with SNMP, results will vary,
while with COPS-PR it's the TCP stack that matters, and that makes it
a no-brainer for implementations. Remember also that COPS-PR presents a
transactional model to the user, while SNMP simply provides a get/set
interface, so it's not just an implementation issue; it's a presentation
issue as well.
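
To make the "presentation issue" concrete, here is a rough sketch of what the
two models look like from the caller's side (pep.install() and agent.set()
are made-up stand-ins for illustration, not real library calls):

def provision_with_cops_pr(pep, rows):
    # one transaction: ship every PRI in a single Decision and get one
    # solicited Report back -- the install commits or fails as a whole
    report = pep.install(rows)
    return report.success

def provision_with_snmp(agent, rows):
    # get/set interface: one SET (or varbind bundle) per row, and the
    # manager has to decide for itself what a "transaction" means --
    # including what to do when a SET fails partway through
    ok = True
    for row in rows:
        ok = agent.set(row.oids, row.values) and ok
    return ok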

[Dave] The SNMP example gets far worse when you have to consider the
multiple-manager problem. Now you have to hold each of the 30000 RowStatus
variables (three rows per entry) until the transaction completes, and reset
them afterward. You will also have to keep checking that some other manager
hasn't come along and munged your configuration. This quickly becomes an
unmanageable situation, and I cannot write an equation long enough to
adequately express that problem.
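
I can at least sketch the kind of bookkeeping involved (the RowStatus values
are the standard ones from RFC 2579; agent.set/agent.get and the row objects
are hypothetical stand-ins for illustration):

CREATE_AND_WAIT, ACTIVE, DESTROY = 5, 1, 6   # RowStatus codes (RFC 2579)

def install_rows(agent, rows):           # e.g. 30000 rows for 10000 entries
    created = []                         # rows we must clean up on failure
    try:
        for row in rows:
            agent.set(row.status_oid, CREATE_AND_WAIT)
            agent.set(row.oids, row.values)
            created.append(row)
        for row in created:              # "commit": activate everything
            agent.set(row.status_oid, ACTIVE)
    except Exception:
        for row in created:              # manual rollback, row by row
            agent.set(row.status_oid, DESTROY)
        raise
    # even then the manager has to re-read the rows to be reasonably sure
    # another manager did not munge them in the meantime
    return all(agent.get(row.oids) == row.values for row in rows)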

> From my perspective, the major technical contribution of COPS-PR over
> SNMP is:
> 
> a) TCP transport for large transactions when the network is up and
>    running.
> 
> b) Reduced OID overhead and fewer degrees of freedom to achieve the same
>    thing.
> 
> c) Slightly improved data definition language - but nothing that gives
>    us truly reusable and extensible data structures.
> 
[Dave] I disagree. You certainly do get full reusability and extensible data
structures. The reusability is both syntactic and semantic.

> d) State sharing and one manager assumption.
> 
[Dave] One manager owns each set of instance data (so there is no danger of
configuration corruption), and the COPS-PR protocol supports 64000 managers
per device.

> e) Some failover support built into the protocol.
> 
> Items a) and b) are technically easy to support in SNMP. The real
> problem here is the "SNMP community process", which has been in blocking
> mode for several years now for various non-technical reasons.
> 
[Dave] By the time you integrate all the COPS-PR features and advancements
into SNMP, you will simply have reinvented COPS-PR, five years too late. But
perhaps the best reason of all for COPS-PR and PIBs is that they don't come
with the "SNMP community process" you just described.

> Item c) contains some minor enhancements over SMIv2 but if people want
> a really improved data definition language, then the SMIng WG is the
> place to go (sure, I am biased on this ;-).
> 
[Dave] Yes, go to the SMIng WG; that's where the good stuff is being done ;-)