RE: Static LSP Configuration
Sudheer,
I am taking the liberty of attaching an e-mail (from the MPLS WG list) by Neil Harrison
of BT on the proposal. He has some excellent comments/perspective (among
others) on the 2x BW issue (see his point 1). I share these viewpoints.
Failure detection/reporting avoidance is only one of the advantages of the
proposal (and the one highlighted in the context of the discussion on this thread).
There are many other advantages that we can discuss at length.
ramesh.
> > From: Harrison,N,Neil,IKO1 R
> > Sent: 24 April 2002 13:20
> > To: IETF MPLS WG (E-mail)
> > Subject: comments on nagarajan-ccamp-mpls-packet-protection-00.txt
> >
> > I noted George S asked for comments on
> > nagarajan-ccamp-mpls-packet-protection-00.txt to be taken on the list in
> > his notes of the MPLS WG mtg.....so here are some 1st pass comments:
> >
> > 1    I noted a criticism in the mtg notes that it uses 2x the BW of one
> > LSP....well, yes, it is 1+1, so I'd say that's obvious. However, I think
> > we have to place any such criticism of BW efficiency in context. For
> > example, if I compare it with LDP as a server layer to (say) rfc2547
> > VPNs, then because there is no relationship between a pkt's up-state QoS
> > forwarding treatment and a pkt's survivability requirements (vis-à-vis
> > same or different DS-coded pkts of *any* VPN), operators are forced to
> > over-engineer such networks and *hope* (because there is no assurance)
> > that traffic survives under failures....a factor of 2x over-engineering
> > on some DS classes is not uncommon.
> >
> > Hence, the point I want to make here is that using BW wisely to
> > reduce/remove a complexity/problem is one thing (which I support), but
> > asking an operator to throw BW at a problem that should not really exist
> > (because the application/problem in question has only been partially
> > defined, eg just the connectivity bit of VPNs) is something else. So any
> > criticism of BW efficiency only makes sense against the context of the
> > application/problem it is addressing, IMO.
> >
> > 2    I noted in section 6.2 you address the practical issue of carrying
> > the sequence number (if implemented in such a way), and suggest that it
> > sits directly below the shim header (1st 4 octets of payload). This is
> > possible. 2 things spring to mind here:
> >
> > - IMO you really need some way to be sure you have correct A-B
> > connectivity....and this includes defects where perhaps another LSP, LSP
> > X say, gets merged unintentionally into the LSP Y between A-B. In this
> > case you would get some unexpected results looking for a sequence number
> > in such mismerged pkts. I would therefore recommend you run a periodic
> > data-plane LSP CV OAM flow on each of the LSPs to verify correct
> > connectivity, since these contain unique LSP source identifiers and will
> > detect all defects, not just simple breaks (and if following Y.1711, it
> > will also invoke the correct consequent actions on failure).
> >
> > - You will need to configure the LSP sink points to expect LSPs
> > containing sequence numbers. This can be done manually of course, but
> > you may want to consider some auto signalling config...and I later noted
> > that you have considered this aspect in section 6.4 wrt RSVP, say. A
> > further way to achieve this signalling function could be by the use of
> > special OAM pkts....and it's something being considered by those who are
> > working on MPLS OAM in Y.1711 for automatic CV activation/deactivation.
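The layout Neil describes in his point 2 (a 4-octet sequence number sitting directly below the MPLS shim header, i.e. as the first 4 octets of the payload) can be sketched as a small parser. This is an illustrative sketch only: the label-entry format follows RFC 3032, but the function name and the exact sequence-number placement are assumptions taken from the draft discussion, not a defined wire format.

```python
import struct

def parse_mpls_seq(frame: bytes):
    """Walk the MPLS label stack to the bottom-of-stack entry, then read the
    4-octet sequence number assumed to sit at the top of the payload.
    Each 32-bit label entry (per RFC 3032): 20-bit label, 3-bit EXP,
    1-bit S (bottom of stack), 8-bit TTL."""
    offset = 0
    labels = []
    while True:
        entry, = struct.unpack_from(">I", frame, offset)
        labels.append(entry >> 12)      # top 20 bits: label value
        offset += 4
        if entry & 0x100:               # S bit set: bottom of stack reached
            break
    # Assumed placement: sequence number = 1st 4 octets of payload
    seq, = struct.unpack_from(">I", frame, offset)
    return labels, seq

# Example: one label entry (label 100, S=1, TTL=64), then sequence number 7
frame = struct.pack(">I", (100 << 12) | 0x100 | 64) + struct.pack(">I", 7) + b"data"
labels, seq = parse_mpls_seq(frame)
```

Note this also illustrates Neil's mismerge concern: a packet from an unrelated LSP X merged into LSP Y would parse "successfully" here and yield an arbitrary value as its sequence number, which is why a separate CV OAM flow with unique LSP source identifiers is needed to verify connectivity.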
> >
> > 3    Given that the scheme you are proposing seems to fit where one has
> > a critical LSP connectivity application (like a
> > control/management-channel, say), then I think you ought to be running
> > some solid OAM fault detection/handling function on it in any
> > case....I gave one example above wrt mismerging and seeing arbitrary
> > sequence number effects. Moreover, I noted later you gave an algorithm
> > for processing the sequence number implementation case at the egress
> > that is associated with a sliding window. I would suggest that this
> > needs coupling with defect detection mechanisms in order to make the
> > correct processing decisions.....one obvious cut-off point is when you
> > would consider one of the LSPs to have become 'unavailable' (this is
> > also defined in Y.1711).
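The coupling Neil asks for in point 3 (egress sliding-window duplicate elimination, gated by per-LSP defect state) can be sketched as follows. This is a minimal illustration, not the draft's actual algorithm: the class name, window size, and the boolean availability flag driven by OAM defect detection are all assumptions for the sketch.

```python
class OnePlusOneSelector:
    """Egress-side selector for 1+1 packet protection.

    Delivers the first copy of each sequence number arriving on either LSP
    and drops the late duplicate, using a sliding acceptance window. An LSP
    declared 'unavailable' by the defect-detection layer (e.g. loss of its
    periodic CV OAM flow, per Y.1711) is excluded from selection."""

    def __init__(self, window: int = 128):
        self.window = window
        self.highest = -1                     # highest sequence number delivered
        self.seen = set()                     # seq numbers delivered inside window
        self.available = {0: True, 1: True}   # defect state per LSP (0 and 1)

    def set_available(self, lsp: int, ok: bool):
        """Driven by OAM defect detection, not by the selector itself."""
        self.available[lsp] = ok

    def accept(self, lsp: int, seq: int) -> bool:
        """Return True if this copy should be delivered, False if dropped."""
        if not self.available[lsp]:
            return False                      # ignore a failed/mismerged LSP
        if seq <= self.highest - self.window:
            return False                      # stale: fell outside the window
        if seq in self.seen:
            return False                      # duplicate from the other LSP
        self.seen.add(seq)
        if seq > self.highest:
            self.highest = seq
            # prune entries that slid out of the acceptance window
            self.seen = {s for s in self.seen if s > self.highest - self.window}
        return True

sel = OnePlusOneSelector()
sel.accept(0, 1)            # first copy: delivered
sel.accept(1, 1)            # late duplicate on the other LSP: dropped
sel.set_available(0, False) # defect detection marks LSP 0 unavailable
sel.accept(0, 2)            # dropped: LSP 0 excluded from selection
sel.accept(1, 2)            # delivered from the surviving LSP
```

Without the `available` gate, arbitrary sequence values from a mismerged LSP could poison `highest` and stall the window, which is exactly the failure mode Neil's defect-detection coupling is meant to prevent.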
> >
> > 4    Final comment....simple ideas are often the best, and this idea is
> > certainly simple in principle. I think it could find excellent uses for
> > critical applications.
> >
> > regards, Neil
> >
>
----------------------------------------------------------------------------
Room 3M-335, Bell Labs
101 Crawfords Corner Road
Holmdel, NJ 07733
Tel: 732-949-2761
Fax: 732-834-5906
e-mail: rameshn@lucent.com
----------------------------------------------------------------------------
> ----------
> From: Sudheer Dharanikota[SMTP:sudheer@nayna.com]
> Sent: Monday, May 06, 2002 6:41 PM
> To: Nagarajan, Ramesh (Ramesh)
> Cc: Shahram Davari; 'curtis@fictitious.org'; Gary Tam; Ajay Simha; Ron
> Bonica; Brijesh Kumar; mpls@UU.NET; CCAMP WG
> Subject: Re: Static LSP Configuration
>
> Hi Ramesh:
>
>
>
> "Nagarajan, Ramesh (Ramesh)" wrote:
>
> > Sudheer, Shahram,
> >
> > I agree. *Fast* failure detection and reporting are challenging issues
> > even when we consider physical layer failures (fiber cuts, etc.) only.
> > If you throw in soft failures of many kinds, it becomes even more
> > challenging. This was one of the motivations, among many others, in
> > developing a restoration scheme/proposal (link below for convenience)
> > which provides broad coverage for many failures without requiring
> > failure detection and reporting. This will work across the recent L2/L1
> > technologies that you refer to, which lack either proper failure
> > detection and/or reporting.
> >
> > ramesh.
> >
> >
> > http://www.ietf.org/internet-drafts/draft-nagarajan-ccamp-mpls-packet-protection-00.txt
> >
> >
>
> Although I agree that 1+1 mechanisms may solve (to be precise, avoid)
> such problems, they are very resource intensive. Also, most
> data applications do not need the recovery times that you can support
> with 1+1. In my opinion, the challenge is to support less resource-intensive
> but reasonably fast restoration times for data applications by reusing
> the underlying technologies.
>
> Regards,
>
> sudheer
>
> >
> >
> >
> > > ----------
> > > From: Sudheer Dharanikota[SMTP:sudheer@nayna.com]
> > > Sent: Monday, May 06, 2002 5:27 PM
> > > To: Shahram Davari
> > > Cc: 'curtis@fictitious.org'; Gary Tam; Ajay Simha; Ron Bonica;
> Brijesh
> > > Kumar; mpls@UU.NET; CCAMP WG
> > > Subject: Re: Static LSP Configuration
> > >
> > >
> > >
> > > Shahram Davari wrote:
> > >
> > > > Curtis,
> > > >
> > > > > -----Original Message-----
> > > > > From: Curtis Villamizar [mailto:curtis@workhorse.fictitious.org]
> > > > > Sent: Monday, May 06, 2002 4:31 PM
> > > > > To: Sudheer Dharanikota
> > > > > Cc: Gary Tam; Ajay Simha; curtis@fictitious.org; Ron Bonica; Brijesh
> > > > > Kumar; 'Curtis Villamizar '; mpls@UU.NET; CCAMP WG
> > > > > Subject: Re: Static LSP Configuration
> > > > >
> > > > >
> > > > >
> > > > > In message <3CD6C43C.94BACCEE@nayna.com>, Sudheer Dharanikota
> writes:
> > > > > >
> > > > > > Transport networks are *built* to provide 50 msec recovery
> > > > > > times (e.g., rings, span, 1+1, etc.). Hence failures are
> > > > > > scoped to be within a domain (typically a ring or a span, or in
> > > > > > the worst case end-to-end [1+1 case]). Excluding failure
> > > > > > detection time in this 50 msec (taking from SONET knowledge),
> > > > > > failure reporting and recovery should take 50 msec. Obviously
> > > > > > this cannot be done with OSS intervention. One can use SONET's
> > > > > > inband signaling mechanisms for failure reporting or use
> > > > > > *directed notification* mechanisms such as the ones proposed
> > > > > > for signaling protocols. Note that if we assume a ring topology,
> > > > > > then the messages are only one hop away (with the assumption
> > > > > > that only the DCSs are the intelligent devices). So your worry
> > > > > > of a 5-10 hop path is not valid, in my opinion.
> > > > > >
> > > > > > Regards,
> > > > > >
> > > > > > sudheer
> > > > >
> > > > >
> > > > > The discussion was about MPLS, but an underlying assumption seemed
> > > > > to enter into the thread with that last email: that either APS was
> > > > > running under MPLS, or L2 was the application and APS was running
> > > > > over MPLS provided over multiple end-to-end L2 tunnels.
> > > > >
> > > > > I don't know of anyone doing SONET APS over the L2 tunnels of an
> > > > > MPLS backbone.
> > > > >
> > > > > I do know of providers who want to eliminate the APS on the SONET
> > > > > links under their MPLS network, cutting some costs significantly
> > > > > (but with all things considered, not in half).
> > > >
> > > > What mechanism will they use to detect the failure then?
> > >
> > > With most of the *recent* L2/L1 technologies, failure detection is not
> > > the only problem. Without APS-like mechanisms, (sub-second) failure
> > > reporting is also a major problem. In my opinion, for the sake of the
> > > underlying technologies which do not have APS-like reporting
> > > mechanisms, we need to optimize our signaling protocols and propose
> > > sensible control plane topologies.
> > >
> > > Regards,
> > >
> > > sudheer
> > >
> > > >
> > > >
> > > > -Shahram
> > > >
> > > > > MPLS restoration needs to
> > > > > work quite well to do this and still meet SLAs, and carrying L2
> > > > > service is even more demanding.
> > > > >
> > > > > Curtis
> > > > >
> > >
> > >
>