
Re: Some comments on Hierarchy and Restoration draft



On Wed, 12 Sep 2001, Rob Coltun wrote:

> All - here are some comments from Steve Plote of Looking Glass Networks.
> See comments in <<>> below. I also gave some comments on organization - has
> a new version been sent yet? (have I missed it)?

didja now? I don't see them in the archive
      http://ops.ietf.org/lists/tewg-dt/tewg-dt.2001/

can you resend yours?  Here's my feedback on Steve's stuff - tell him
thanks.

>
>    5.2.1   1:1 Path Protection with Pre-Established Capacity
>
>      In this protection mode, the head end of a working connection
>      establishes a protection connection to the destination.  In normal

Insert sentence: (see comments below)

       There should be the ability to maintain relative priorities
       between working and protect LSPs, as well as between different
       classes of protect LSPs.

>      operation, traffic is only sent on the working connection, though
>      the ability to signal that traffic will be sent on both connections
>      (1+1 Path for signaling purposes) would be valuable in non-packet
>      networks.  Some distinction between working and protection
>      connections is likely, either through explicit objects, or
>      preferably through implicit methods such as general classes or
>      priorities.  Head ends need the ability to create connections that
>      are as failure disjoint as possible from each other.  This would
>      require SRG information that can be generally assigned to either
>      nodes or links and propagated through the control or management
>      plane.  In this mechanism, capacity in the protection connection is
>      pre-established, however it can be used to carry preemptable extra
>      traffic.  Protect capacity is first come first served.  When protect

       ^^^--- traffic in non-packet networks.

>      capacity is called into service during restoration, there should be
>      the ability to promote the protection connection to working status
>      (for non-revertive mode operation) with some form of make-before-
>      break capability.
>    <<Protect capacity should not be assigned on a first come, first served basis, but
>    based upon the service priority groupings, i.e., Mission Critical traffic gets first
>    restoration priority.  Every other service type waits.  Then the 2nd priority traffic
>    gets restored, etc.  Obviously this does not apply for a 1+1 protection implementation.>>

I 100% agree, and you can see a hint of that by referring to the use of
generic classes of LSPs to back up other LSPs.  In my view, one might
do something clever like....

   Priority=2  great_stuff working LSPs
   Priority=3  good_stuff working LSPs
   Priority=4  great_stuff backup LSPs
   Priority=5  good_stuff backup LSPs

I'm sure that's a simple farmer's view.  But yes, I agree that
protection LSPs should not be first come, first served.  I suggested a
sentence above.
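
To make that concrete, here's a toy sketch (Python, purely
illustrative - the class names and priority values are made up) of
restoring in class order rather than letting protect capacity go to
whoever grabs it first:

    # Toy model: lower number = more important.  Working LSPs sit ahead of
    # backups, and backups of the premium class ahead of backups of the
    # standard class.
    PRIORITY = {
        ("great_stuff", "working"): 2,
        ("good_stuff",  "working"): 3,
        ("great_stuff", "backup"):  4,
        ("good_stuff",  "backup"):  5,
    }

    def restoration_order(lsps):
        """Order LSPs for restoration by class/role, not first come, first served."""
        return sorted(lsps, key=lambda lsp: PRIORITY[(lsp["svc_class"], lsp["role"])])

    lsps = [
        {"name": "B1", "svc_class": "good_stuff",  "role": "backup"},
        {"name": "B2", "svc_class": "great_stuff", "role": "backup"},
    ]
    print([l["name"] for l in restoration_order(lsps)])   # ['B2', 'B1']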

>
>    5.2.3   Local Restoration
>
>      Due to the time impact of signal propagation, path-based approaches
>      may not be able to meet the service requirements desired in some
>      networks.   The solution to this is to restore connectivity in
>      immediate proximity to the fault.  At a minimum, this approach
>      should be able to protect against connectivity-type SRGs, though
>      protecting against node-based SRGs might be worthwhile.  After local
>      restoration is in place, it is likely that head end systems would
>      later perform some path-level re-grooming.  Head end systems must
>      have some control as to whether their connections are candidates for
>      or excluded from local restoration.
>    << Easy way to make sure connections are excluded from local restoration is
>    to make them best effort and pre-emptible.  That way they only get restored
>    if there is bandwidth available. >>

It's likely that this will be handled with an object in the signalling.
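
For what it's worth, the kind of thing I have in mind is a per-LSP
flag carried in the signalling, along the lines of the "local
protection desired" bit in the RSVP-TE SESSION_ATTRIBUTE flags.  A
quick sketch (illustrative only, treat the bit value as an
assumption):

    # A head end marks whether an LSP is a candidate for local restoration
    # via a flag bit signalled with the LSP (cf. the "local protection
    # desired" bit, 0x01, in the RSVP-TE SESSION_ATTRIBUTE flags).
    LOCAL_PROTECTION_DESIRED = 0x01

    def wants_local_restoration(session_attr_flags):
        # A midpoint only repairs around a fault if the head end asked for it.
        return bool(session_attr_flags & LOCAL_PROTECTION_DESIRED)

    print(wants_local_restoration(0x01))  # True  - candidate for local repair
    print(wants_local_restoration(0x00))  # False - excluded, left to the head end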

>
>    5.2.4   Path Restoration
>
>      In this approach, connections that are impacted by a fault are
>      rerouted by the originating network element upon notification of
>      connection failure.  This approach does not involve any new
>      mechanisms.  It merely is a mention of another common approach to
>      protecting against faults in a network.
>    << Source-based routing is most efficient for network resources, but typically has longer
>    restoration times.>>
>

Yep, just mentioned it as a freebie.

>    5.3 Applications Supported
>
>      With service continuity under failure as a goal, a network is
>      "survivable" if, in the face of a network failure, connectivity is
>      interrupted for a brief period and then restored before the network
>      failure ends.  The length of this interrupted period is dependent on
>      the application supported.  Here are some typical applications that
>      need to be considered:
>
>      - Best-effort data: restoration of network connectivity by rerouting
>        at the IP layer would be sufficient
>      - Premium data service: need to meet TCP or application protocol
>        timer requirements
>      - Voice: call cutoff is in the range of 140 msec to 2 sec
>      - Other real-time service (e.g., streaming, fax)
>      - Mission-critical applications

>    << Mission Critical applications should only be supported with 1+1 or 1:1
>    and local restoration schemes.  Voice has been presumed to
>    restore within 50ms.>>

See comments on the tewg list about "50ms" - it's a number with little
justification; that's why the voice range above is more reasonable (as
is common in industry).  On mission critical, I never quite liked
that.  Stock quotes, though potentially real time (with other issues
such as delivery to all parties at the same time, etc.), are mission
critical, but so is emailing critical info.  Usually when people refer
to mission critical, I think they are talking about remote database
synchronization or access.  Anyway, classifying applications and
specifying their requirements is a rat's nest.  Suffice it to say (as
we do in 5.4) that we have 3 speeds - mediocre, fast, and darn-fast -
and that does the trick for 99.44% of the applications.

>
>    5.4 Timing Bounds for Service Restoration
>

>      1:1 path protection with pre-established capacity:   100-500 ms
>      1:1 path protection with pre-planned capacity:       100-750 ms
>      Local restoration:                                   50 ms
>      Path restoration:                                    1-5 seconds

BTW - 1+1 path is < 50 ms, but no need to mention it.  1:1 pre-planned
would be lucky to hit 100 ms; more like 300-400 ms, not to mention the
high potential for call blocking due to stale information....
As this is really a tweak on Path Restoration, should we roll it into
that section?
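
If it helps, here's the same set of bounds as a little lookup one can
select against (numbers lifted from 5.4 with my quibble about 1:1
pre-planned folded in; ballpark only, not a spec):

    # Rough restoration-time budgets in milliseconds (worst case), per 5.4.
    BOUNDS_MS = {
        "1+1 path":                  50,
        "local restoration":         50,
        "1:1 path, pre-established": 500,
        "1:1 path, pre-planned":     750,   # and 100 ms at the low end looks optimistic
        "path restoration":          5000,
    }

    def candidates(budget_ms):
        """Mechanisms whose worst case fits inside a service's restoration budget."""
        return [m for m, worst in BOUNDS_MS.items() if worst <= budget_ms]

    print(candidates(500))    # voice-ish budgets
    print(candidates(5000))   # premium data riding TCP/application timers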

>
>      To ensure that the service requirements for different applications
>      can be met within the above timing bounds, restoration priority is
>      used to determine the order in which connections are restored (to
>      minimize service restoration time as well as to gain access to
>      available spare capacity).  For example, mission critical
>      applications may require high restoration priority.  Preemption
>      priority should only be used in the event that all connections
>      cannot be restored, in which case connections with lower preemption
>      priority should be released.  Depending on a service provider's
>      strategy in provisioning network resources for backup, preemption
>      may or may not be needed in the network.
>
>    <<Your 1:1 path protection times are assuming that there is no local restoration;
>    either at a ring level or a 1+1 or 1:1 link level.  These typically require signaling back
>    to the source node to begin the restoration process.>>

Yes, as we talked about on some of the calls, there is the possibility
of layering these approaches.  Perhaps that should be specified
explicitly?  A more likely scenario (I think) is that local protection
would be used together with path restoration (or
re-computation/signalling, with no need for frenzy).
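
On restoration priority vs. preemption priority, here's a crude sketch
of the order of operations I read into 5.4 (field names and numbers
are invented, and it skips the check that a preempted connection is
actually less important than the one being restored):

    def restore_after_fault(failed, existing, spare_capacity):
        """Restore failed connections in restoration-priority order; only when
        spare capacity runs out, release existing connections starting from
        the lowest preemption priority (lower number = more important)."""
        existing = sorted(existing, key=lambda c: c["preempt_prio"], reverse=True)
        restored, released = [], []
        for conn in sorted(failed, key=lambda c: c["restore_prio"]):
            while spare_capacity < conn["bw"] and existing:
                victim = existing.pop(0)          # least important victim first
                spare_capacity += victim["bw"]
                released.append(victim["name"])
            if spare_capacity >= conn["bw"]:
                spare_capacity -= conn["bw"]
                restored.append(conn["name"])
        return restored, released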

>
>    5.5 Coordination Among Layers
>
>      A common design goal for multi-layered networks is to provide the
>      desired level of service in the most cost-effective manner.  The use
>      of multilayer survivability might allow the optimization of spare
>      resources through the improvement of resource utilization by sharing
>      spare capacity across different layers, though further
>      investigations are needed.  Coordination during service restoration
>      among different network layers (e.g. IP, SDH/SONET, optical layer)
>      might necessitate development of vertical hierarchy.  The benefits
>      of providing survivability mechanisms at multiple layers, and the
>      optimization of the overall approach, must be weighed with the
>      associated cost and service impacts.
>
>      A default coordination mechanism for inter-layer interaction could
>      be the use of nested timers and current SDH/SONET fault monitoring,
>      as has been done traditionally for backward compatibility.  Thus,
>      when lower-layer restoration happens in a longer time period than
>      higher-layer restoration, a hold-off timer is utilized to avoid
>      contention between the different single-layer recovery schemes.  In
>      other words, multilayer interaction is addressed by having
>      successively higher multiplexing levels operate at restoration time
>      scale greater than the next lowest layer.  Currently, if SDH/SONET
>      protection switching is used, MPLS recovery timers must wait until
>      SDH/SONET has had time to switch.
>    << Yes we agree with the need for nested timers to enact restoration schemes for
>    each layer of the hierarchy.  Do you have any proposed timer delays?  SONET/SDH can
>    be assumed to only restore at the local level, not the network level.>>

No indication is made that there is anything more than some hold-off
timer adjustments to allow one layer to protect, if it can, or
alternatively to clamp down a bit where the layer is not expected to
protect (e.g. linear).  Most high-end router and ATM gear allows for
this.
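
To illustrate the nested-timer idea (the hold-off values below are
placeholders, not a recommendation):

    import time

    # Each layer waits out the layer below before reacting, so SDH/SONET APS
    # gets first crack, then MPLS recovery, then IP rerouting.
    HOLD_OFF_MS = {"sdh_sonet": 0, "mpls": 100, "ip_rerouting": 500}   # placeholders

    def on_fault(layer, fault_cleared):
        """Trigger this layer's recovery only if the fault is still present
        after its hold-off, i.e. the lower layer did not repair it in time."""
        time.sleep(HOLD_OFF_MS[layer] / 1000.0)
        if fault_cleared():
            print(layer, "- lower layer already restored, standing down")
        else:
            print(layer, "- lower layer did not recover, starting own restoration")

    # e.g. on_fault("mpls", fault_cleared=lambda: False)  # SONET failed to switch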

>
>
>      It was felt that the current approach to coordination of
>      survivability approaches currently did not have significant
>      operational shortfalls.  These approaches include protecting traffic
>      solely at one layer (e.g. at the IP layer over linear WDM, or at the
>      SDH/SONET layer).  Where survivability mechanisms might be deployed
>      at several layers, such as when a routed network rides a SDH/SONET
>      protected network, it was felt that current coordination approaches
>      were sufficient in many cases.  One exception is the hold-off of
>      MPLS recovery until the completion of SDH/SONET protection switching
>      as described above.  This limits the recovery time of fast MPLS
>      restoration.  Also, note that failures within a layer can be guarded
>      against by techniques either in that layer or at a higher layer, but
>      not in reverse.  Thus, the optical layer cannot guard against
>      failures in the IP layer, such as router system failures or line
>      card failures.
>
>    << The physical layer does protect against line card failures.  It
>      should be able to switch/restore prior to layer 3 restoration schemes.>>
>

True, with APS one can say that the optical layer has protected against
linecard failure.  I think the focus here is on long-distance
protection, not intra-CO (or span).  But point taken.  As routers also
fail (it's true!), APS is usually done across IP entities.  This means
that the physical layer does protect prior to a switch/restore at
layer 3; unfortunately this is followed by a massive hiccup in layer 3
routing, as the topology has changed.  Hmmm... maybe remove those two
sentences?  ("Also, ...", "Thus, ...")



>    6.1 Historical Context
>
>      One reason for horizontal hierarchy is functionality (e.g., metro
>      versus backbone).  Geographic islands reduce the need for
>      interoperability and make administration and operations less
>      complex.  Using a simpler, more interoperable, survivability scheme
>      at metro/backbone boundaries is natural for many provider network
>      architectures.  In transmission networks, creating geographic
>      islands of different vendor equipment has been done for a long time
>      because multi-vendor interoperability has been difficult to achieve.
>      Traditionally, providers have to coordinate the equipment on either
>      end of a "connection," and making this interoperable reduces
>      complexity.  A provider should be able to concatenate survivability
>      mechanisms in order to provide a "protected link" to the next higher
>      level.  Think of SDH/SONET rings connecting to TDM DXCs with 1+1
>      line-layer protection between the ADM and the DXC port.  The TDM
>      connection, e.g., a DS3 is protected, but usually all equipment on
>      each SDH/SONET ring is from a single vendor.  The DXC cross
>      connections are controlled by the provider and the ports are
>      physically protected resulting in a highly available design.  Thus,
>      concatenation of survivability approaches can be used to cascade
>      across horizontal hierarchy.  While not perfect, it is workable in
>      the near- to mid-term until multi-vendor interoperability is
>      achieved.
>
>      While the problems associated with multi-vendor interoperability may
>      necessitate horizontal hierarchy as a practical matter (at least
>      this has been the case in TDM networks), there may be no technical
>      reason for it.  Members of the team with more experience on IP
>      networks felt there should be no need for this in core networks, or
>      even most access networks.
>
>      Some of the largest service provider networks currently run a single
>      area/level IGP.  Some service providers, as well as many large
>      enterprise networks, run multi-area OSPF to gain increases in
>      scalability.  Often, this was from an original design, so it is
>      difficult to say if the network truly required the hierarchy to
>      reach its current size.
>    << This allows local/regional restoration schemes as opposed to network level restoration
>    schemes.  More likely to converge but less likely to be bandwidth efficient.  Then there
>    are the boundary gateway nodes that have to deal with the local restoration protocol and
>    some level of hierarchical restoration protocol to enable network level restoration.>>

In the hypothetical case, yes.  PNNI allowed this, but I'm not aware of
anyone who used hierarchical PNNI in a regional manner.  IS-IS - same
deal - no one does it; if anything, the new L1/L2 stuff will be used in
ways that mimic OSPF.  And OSPF usually does not split large
geographical "regions" into areas; it is usually done on a CO basis.
In enterprises this may be different, but some of the practices I've
heard about with OSPF in very large enterprises are likely not
something to endorse, let alone cater to.

Anyway, we've left things in the hierarchy section rather vague,
except to point out which applications came up when people said they
needed "some of that hierarchy".


>
>      Some proposals on improved mechanisms to address network hierarchy
>      have been suggested [6, 7, 8].  This document aims to provide the
>      concrete requirements so that these and other proposals can first
>      aim to meet some limited objectives.
>
>    6.2 Applications for Horizontal Hierarchy
>
>      A primary driver for intra-domain horizontal hierarchy is signaling
>      scalability in the context of edge-to-edge VPNs, potentially across
>      traffic-engineered data networks.  There are a number of different
>      approaches to VPNs and they are currently being addressed by
>      different emerging protocols: RFC 2547bis BGP/MPLS VPNs, provider-
>      provisioned VPNs based upon MPLS tunnels (e.g., virtual routers),
>      Pseudo Wire Emulation Edge-to-Edge (PWE3), etc.  These may or may not
>      need explicit signaling from edge to edge, but it is a common
>      perception that in order to meet SLAs, some form of edge-to-edge
>      signaling is required.
>    << This expresses what I stated above in 6.1.  One common signalling layer would
>    reduce the complexity of the restoration task if not handled by local schemes.>>
>
>      For signaling scalability, there are probably two types of network
>      scenarios to consider:
>
>      - Large SP networks with flat routing domains where edge-to-edge
>        (MPLS) signaling as implemented today would probably not scale.
>      - Networks which would like to signal edge-to-edge, and might even
>        scale in a limited application. However, they are hierarchically
>        routed (e.g. OSPF areas) and current implementations, and
>        potentially standards prevent signaling across areas.  This
>        requires the development of signaling standards that support
>        dynamic establishment and potentially restoration of LSPs across a
>        2-level IGP hierarchy.
>
>
>
>      Scalability is concerned with the O(N^2) properties of edge-to-edge
>      signaling.  For a large network, maintaining a "connection" between
>      every edge is simply not scalable.  Even if establishing and
>      maintaining connections is feasible, there might be an impact on
>      core survivability mechanisms which would cause restoration times to
>      grow with N^2, which would be undesirable.  While some value of N
>      may be inevitable, approaches to reduce N (e.g. to pull in from the
>      edge to aggregation points) might be of value.
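
Just to put a number on the N^2 point (illustrative arithmetic only):

    # A full mesh of edge-to-edge LSPs: each of N edges signals to every
    # other edge, i.e. roughly N*(N-1) unidirectional "connections" to set
    # up, maintain, and potentially restore at once.
    for n in (50, 200, 1000):
        print(n, "edges ->", n * (n - 1), "LSPs")
    # 50 edges -> 2450 LSPs
    # 200 edges -> 39800 LSPs
    # 1000 edges -> 999000 LSPs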
>
>      For routing scalability, especially in data applications, a major
>      concern is the amount of processing/state that is required in the
>      variety of network elements.  If some nodes might not be able to
>      communicate and process the state of every other node, it might be
>      preferable to limit the information.  There is one school of thought
>      that says that the amount of information contained by a horizontal
>      barrier should be significant, and that the impacts this might have on
>      optimality in route selection and the ability to provide global
>      survivability are accepted tradeoffs.
>
>    6.3 Horizontal Hierarchy Requirements
>
>      Mechanisms are required to allow for edge-to-edge signaling of
>      connections through a network.  The types of network scenarios
>      include large networks with a large number of edge devices and flat
>      interior routing, as well as medium to large networks which
>      currently have hierarchical interior routing such as multi-area OSPF
>      or multi-level IS-IS.  The primary context of this is edge-to-edge
>      signaling which is thought to be required to assure the SLAs for the
>      layer 2 and layer 3 VPNs that are being carried across the network.
>      Another possible context would be edge-to-edge signaling in TDM
>      SDH/SONET networks, where metro and core networks again might either
>      be in a flat or hierarchical interior routing domain.
>
>    << This capability of SDH/SONET networks providing edge-to-edge signaling will
>    not happen with the OSI 7 layer stack model elements.  This may only be feasible
>    for the IP over DCC managed elements; if we want to perform the signaling in-band.
>    There are all kinds of out of band schemes to provide this edge-to-edge connectivity.
>    The problem is having an out of band network to support this in a very large network.
>    This gets back to the DCC interoperability being the key for TDM networks to work at
>    the signaling level.  We have always assumed that MPLS will be the way to get layer2/3
>    VPN data networks to signal/interoperate.>>

Yes, the sentence was meant to refer to IP-controlled TDM equipment
(as it migrates from the slides to the providers), not legacy TDM gear
(unless, of course, they slap in some IP, if possible).

>
>    7. Survivability and Hierarchy
>
>      When horizontal hierarchy exists in a network layer, a question
>      arises as to how survivability can be provided along a connection
>      which crosses hierarchical boundaries.
>
>      In designing protocols to meet the requirements of hierarchy, an
>      approach to consider is that boundaries are either clean, or are of
>      minimal value.  However, the concept of network elements that
>      participate on both sides of a boundary might be a consideration
>      (e.g. OSPF ABRs).  That would allow for devices on either side to
>      take an intra-area approach within their region of knowledge, and
>      for the ABR to do this in both areas, and splice the two protected
>      connections together at a common point (granted it is a common point
>      of failure now).  If the limitations of this approach start to
>      appear in operational settings, then perhaps it would be time to
>      start thinking about route-servers and signaling propagated
>      directives.  However, one initial approach might be to signal
>      through a common border router, and to consider the service as
>      protected as it consists of a concatenated set of connections which
>      are each protected within their area.  Another approach might be to
>      have a least common denominator mechanism at the boundary, e.g., 1+1
>      port protection.  There should also be some standardized means for a
>      survivability scheme on one side of such a boundary to communicate
>      with the scheme on the other side regarding the success or failure
>      of the service restoration action.  For example, if a part of a
>      "connection" is down on one side of such a boundary, there is no
>      need for the other side to recover from failures.
>
>    << I have worked on these issues in a single vendor network with different Hierarchies
>    of signaling between Boundary Gateway Nodes and Interior Nodes (BGN-BGN and BGN-IN) for
>    optical equipment.  The concept of running a different mesh restoration scheme at the
>    BGN level based upon 1+1 was a costly but effective method to provide end to end
>    restoration times that were in the 100s of milliseconds.  If there was a failure between BGNs
>    it was protected 1+1; if the failure was within a boundary, the local nodes attempted to
>    restore a path back out to the BGN and to the rest of the network.  There are lots of options
>    here; just none universal.  I guess that is why we are trying to standardize a process.>>


It's good to hear where this sort of "simplistic" approach has worked.
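
One way to sanity-check the "splice at a common border node" approach
is a back-of-the-envelope series-availability product; the figures
below are invented, the point being that the unprotected splice point
tends to dominate:

    # End-to-end availability of two independently protected area segments
    # spliced at a single border router (all numbers illustrative).
    def end_to_end_availability(segments):
        a = 1.0
        for _name, avail in segments:
            a *= avail            # series: every piece must be up
        return a

    segments = [
        ("area-1 protected segment", 0.99999),
        ("border node (common point of failure)", 0.9999),
        ("area-2 protected segment", 0.99999),
    ]
    print(round(end_to_end_availability(segments), 6))   # ~0.99988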

>
>
>      In summary, at this time, approaches that allow concatenation of
>      survivability schemes across hierarchical boundaries should prove
>      sufficient.
>
>
>    8. Security Considerations
>
>      Security is not considered in this initial version.
>
>
>    9. References
>
>
>      1  Bradner, S., "The Internet Standards Process -- Revision 3", BCP
>         9, RFC 2026, October 1996.
>
>      2  Bradner, S., "Key words for use in RFCs to Indicate Requirement
>         Levels", BCP 14, RFC 2119, March 1997
>
>      3  V. Sharma, B. Crane, K. Owens, C. Huang, F. Hellstrand, J. Weil,
>         L. Andersson, B. Jamoussi, B. Cain, S. Civanlar, and A. Chiu,
>         "Framework for MPLS-based Recovery," Internet-Draft, Work in
>         Progress, March 2001.
>
>      4  D.O. Awduche, A. Chiu, A. Elwalid, I. Widjaja, and X. Xiao, "A
>         Framework for Internet Traffic Engineering," Internet-Draft, Work
>         in Progress, May 2001.
>
>      5  N. Harrison, et al, "Requirements for OAM in MPLS Networks,"
>         Internet-Draft, Work in Progress, May 2001.
>
>      6  K. Kompella and Y. Rekhter, "Multi-area MPLS Traffic
>         Engineering," Internet-Draft, Work in Progress, March 2001.
>
>      7  G. Ash, et al, "Requirements for Multi-Area TE," Internet-Draft,
>         Work in Progress, March 2001.
>
>      8  A. Iwata, N. Fujita, G.R. Ash, and A. Farrel, "Crankback Routing
>         Extensions for MPLS Signaling," Internet-Draft, Work in Progress,
>         July 2001.
>
>
>    10.  Acknowledgments
>
>
>      A lot of the direction taken in this document, and by the team, was
>      steered by the insightful questions provided by Bala Rajagopalan,
>      Greg Bernstein, Yangguang Xu, and Avri Doria.  The set of questions
>      is attached as Appendix A in this document.
>
>
>    11. Author's Addresses
>
>      Wai Sum Lai
>      AT&T
>      200 Laurel Avenue
>      Middletown, NJ 07748, USA
>      Tel: +1 732-420-3712
>      wlai@att.com
>
>      Dave McDysan
>      WorldCom
>      22001 Loudoun County Pkwy
>      Ashburn, VA 20147, USA
>      dave.mcdysan@wcom.com
>
>      Jim Boyle
>      jimpb@nc.rr.com

       Jim Boyle
       Protocol Driven Networks
       Tel: 919.852.5160
       jboyle@pdnets.com

>
>      Malin Carlzon
>      malin@sunet.se
>
>      Rob Coltun
>      rcoltun@redback.com

which words did this guy put in? :)

>
>      Tim Griffin
>      AT&T
>      180 Park Avenue
>      Florham Park, NJ 07932, USA
>      Tel: +1 973-360-7238
>      griffin@research.att.com
>
>      Ed Kern
>      Cogent Communications
>      3413 Metzerott Rd
>      College Park, MD 20740, USA
>      Tel: +1 703-852-0522
>      ejk@tech.org

Ed - got anything better here?

>
>      Tom Reddington
>      Lucent Technologies
>      67 Whippany Rd
>      Whippany, NJ 07981, USA
>      Tel: +1 973-386-7291
>      treddington@bell-labs.com
>
>
>    Appendix A: Questions used to help develop requirements
>
>
>      A. Definitions
>
>      1. In determining the specific requirements, the design team should
>      precisely define  the concepts "survivability", "restoration",
>      "protection", "protection switching", "recovery", "re-routing" etc.
>      and their relations. This would enable the requirements doc to
>      describe precisely which of these will be addressed.
>      In the following, the term "restoration" is used to indicate the
>      broad set of policies and mechanisms used to ensure survivability.
>
>      B. Network types and protection modes
>
>      1. What is the scope of the requirements with regard to the types of
>      networks covered? Specifically, are the following in scope:
>
>      Restoration of connections in mesh optical networks (opaque or
>      transparent)
>      Restoration of connections in hybrid mesh-ring networks
>      Restoration of LSPs in MPLS networks (composed of LSRs overlaid on a
>      transport network, e.g., optical)
>      Any other types of networks?
>      Is commonality of approach, or optimization of approach more
>      important?
>
>      2.  What are the requirements with regard to the protection modes to
>      be supported in each network type covered? (Examples of protection
>      modes include 1+1, M:N, shared mesh, UPSR, BLSR, newly defined modes
>      such as P-cycles, etc.)
>
>      3.  What are the requirements on local span (i.e., link by link)
>      protection and end-to-end protection, and the interaction between
>      them?  E.g.: what should be the granularity of connections for each
>      type (single connection, bundle of connections, etc).
>
>      C. Hierarchy
>
>      1. Vertical (between two network layers):
>          What are the requirements for the interaction between
>      restoration procedures across two network layers, when these
>      features are offered in both layers?  (Example, MPLS network
>      realized over pt-to-pt optical connections.) Under such a case,
>
>          (a) Are there any criteria to choose which layer should provide
>      protection?
>
>          (b) If both layers provide survivability features, what are the
>      requirements to coordinate these mechanisms?
>
>          (c) How is the lack of cross-layer coordination functionality
>      currently hampering operations?
>
>
>          (d) Would the benefits be worth additional complexity associated
>      with routing isolation (e.g. VPN, areas), security, address
>      isolation and policy / authentication processes?
>
>      2. Horizontal (between two areas or administrative subdivisions
>      within the same network layer):
>
>          (a) What are the criteria that trigger the creation of protocol
>      or administrative boundaries pertaining to restoration? (e.g.,
>      scalability?  multi-vendor interoperability? what are the practical
>      issues?)  multi-provider? Should multi-vendor necessitate
>      hierarchical separation?
>
>          When such boundaries are defined:
>
>          (b) What are the requirements on how protection/restoration is
>      performed end-to-end across such boundaries?
>
>          (c) If different restoration mechanisms are implemented on two
>      sides of a boundary, what are the requirements on their interaction?
>
>         What is the primary driver of horizontal hierarchy? (select one)
>          - functionality (e.g. metro -v- backbone)
>          - routing scalability
>          - signaling scalability
>          - current network architecture, trying to layer on TE ontop of
>            already hierarchical network architecture
>          - routing and signalling
>
>         For signalling scalability, is it
>          - manageability
>          - processing/state of network
>          - edge-to-edge N^2 type issue
>
>          For routing scalability, is it
>          - processing/state of network
>          - are you flat and want to go hierarchical
>          - or already hierarchical?
>          - data or TDM application?
>
>      D. Policy
>
>      1. What are the requirements for policy support during
>      protection/restoration,
>          e.g., restoration priority, preemption, etc.
>
>      E. Signaling Mechanisms
>
>      1. What are the requirements on the signaling transport mechanism
>      (e.g., in-band over sonet/sdh overhead bytes, out-of-band over an IP
>      network, etc.) used to communicate restoration protocol
>         messages between network elements. What are the bandwidth and
>      other requirements on the signaling channels?
>
>      2. What are the requirements on fault detection/localization
>      mechanisms (which is the prelude to performing restoration
>      procedures)  in the case of opaque and transparent optical networks?
>      What are the requirements in the case of MPLS restoration?
>
>      3. What are the requirements on signaling protocols to be used in
>      restoration procedures (e.g., high priority processing, security,
>      etc).
>
>      4. Are there any requirements on the operation of restoration
>      protocols?
>
>      F. Quantitative
>
>      1. What are the quantitative requirements (e.g., latency) for
>      completing restoration under different protection modes (for both
>      local and end-to-end protection)?
>
>      G. Management
>
>      1. What information should be measured/maintained by the control
>      plane at each network element pertaining to restoration events?
>
>      2. What are the requirements for the correlation between control
>      plane and data plane failures from the restoration point of view?
>
>
>    Full Copyright Statement
>
>      "Copyright (C) The Internet Society (date). All Rights Reserved.
>      This document and translations of it may be copied and furnished to
>      others, and derivative works that comment on or otherwise explain it
>      or assist in its implementation may be prepared, copied, published
>      and distributed, in whole or in part, without restriction of any
>      kind, provided that the above copyright notice and this paragraph
>      are included on all such copies and derivative works. However, this
>      document itself may not be modified in any way, such as by removing
>      the copyright notice or references to the Internet Society or other
>      Internet organizations, except as needed for the purpose of
>      developing Internet standards in which case the procedures for
>      copyrights defined in the Internet Standards process must be
>      followed, or as required to translate it into languages other than
>      English.
>
>      The limited permissions granted above are perpetual and will not be
>      revoked by the Internet Society or its successors or assigns.
>
>      This document and the information contained herein is provided on an
>      "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING
>      TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING
>      BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION
>      HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF
>      MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.