
comments on <draft-durham-nim-reqs-00.txt>




Here are some comments on <draft-durham-nim-reqs-00.txt>. In general,
I believe it is more important to build upwards than downwards. That
is, I believe that, for example, we need to evolve the current SMI to
make it more powerful and easier to use, and to make MIB definitions
easier to understand and implement.

Creating a clean-room, technology-unspecific data modeling language
out of the blue is IMHO not too likely to help us solve any real-world
problems. Similar ideas in the past did not really succeed in
improving the world I am living in.

In my view, you can only succeed if you understand the technology
constraints and you have proper mappings in place. So in order to
better support modeling, try to lift the power of the technology-
specific modeling languages we have lots of experience with, and try
to align things wherever possible so that mappings are simplified as
much as possible. And of course: limit the number of new modeling
languages you introduce, since every new language may add to the
overall problem. But I am not going to elaborate on this further,
since the purpose of this email is to air my comments on the NIM
requirements document and not to bore you with my personal views.

- The abstract makes the following important statement:

    Irrespective of the 
    number of technologies that are deployed to manage networking 
    devices, the industry would benefit from a more consistent 
    specification of data structures across these technologies and the 
    IETF would benefit from a reduction in the costs associated with 
    defining these data structures. 

  Let us look at the IETF side. When does the cost reduction in the
  IETF happen? To make NIM a success, the IETF first needs to invest
  human resources in the development of NIM itself. The IETF then
  needs to invest resources to develop mappings to derive technology-
  specific data models. The IETF also needs to educate people in the
  various working groups so that they can define NIM modules (as they
  do MIB modules now). Further, the IETF needs to gain experience and
  it must develop quality review rules in order to achieve consistent
  and high-quality NIM modules. All this requires a non-trivial amount
  of smart human resources and it will take some time to complete.
  Once this is all in place, the IETF may indeed benefit from some
  cost reduction. However, it is not clear to me when the break-even
  point will be reached. What do you estimate? 5 years from now? 10
  years from now?

  What bothers me a bit more is that the calculation above assumes
  that NIM will be successful and finally replace all the technology-
  specific data modeling work in the IETF. If, for some reason, this
  assumption turns out not to be true and NIM is not going to replace
  everything, then the IETF has just added another modeling language
  it needs to take care of, without any direct gain. In other words:
  work on NIM should IMHO only be started within the IETF if there is
  a valid reason to assume that NIM will succeed and that the IETF can
  replace all the data modeling work being done today within the IETF
  WGs with NIM modeling in the future.

  Note also that all other prior attempts to define high-level
  information models (e.g. GDMO) did not generally succeed. Further,
  the areas where they are successful are usually bounded, and if you
  look at the inheritance relationships, things tend to be flat.
  (Which can be seen as a feature, since it allows work items to be
  decoupled. Too many interdependencies will lead to a nightmare when
  it comes to advancement on the IETF standards track.)

  Even CIM did not really help me (as an end user) to better manage
  my PC. So what is really needed is a sound business case for why
  information modeling work really pays off, especially in an
  environment as dynamic as the IETF. Without a sound business case
  backed up by real data from other successful projects, I personally
  have doubts that you can create such a nice clean-room design tool
  in our increasingly quick-and-dirty Internet.

- The abstract contains the following statement:
  
    The 
    purpose of an information model is to represent data in a manner 
    that is independent of technology and implementation

  I think this is somewhat self-contradictory. NIM itself will be a
  technology with some features included and other features excluded.
  Thus, NIM will itself be technology-dependent. In fact, you cannot
  write something down in a purely technology-independent way if you
  want to be able to process it with computer programs.

- The first paragraph in the introduction says:

    In the traditional IETF 
    operations and management model, the working group developing the 
    technology has also defined the data elements (usually in the form 
    of MIBs) necessary to monitor and manage the technology. While this 
    has been very successful, it has also led to a divergence of 
    definitions and structures. In many cases, similar management 
    elements that are to be applied to a specific technology have been 
    completely reinvented in different working groups because of a lack 
    of awareness. 

  I think the only option you have is to do the modeling work within
  the working groups that develop a technology, since these are the
  technology experts. Are you proposing not to do the information
  modeling in the working group which develops a technology? If not,
  how does NIM prevent people from reinventing wheels in different
  WGs? I think the only viable answer is to have process rules (such
  as the IESG MIB review process) which help to keep the number of
  reinvented wheels within limits, and a way to define more standard
  constructs when you need them. This will be true for whatever
  modeling language you use, whether that is SMI or NIM.

    In other cases, duplication has occurred because of a 
    subtle variation in the definition of the element or because the 
    relationship between the data element and other elements is new or 
    different. 

  Again, this is something we cannot avoid. Technology evolves pretty
  fast and these subtle variations and new or different relationships
  are our everyday business. Can NIM help to solve this problem? I
  guess not. (See also my comment below regarding the risks of over-
  specification.)

    Divergence of management definitions and structures has 
    resulted in an environment where programming effort is spent in the 
    repetitive correlation, translation and data organization associated 
    with each data management technology, instead of in doing actual 
    management. 

  Does NIM help? Will NIM be the kickoff where people really start
  focusing on solving management tasks by inventing smart algorithms?
  Or is it just the beginning of the next round of iteration where we
  spend a significant amount of our time talking about a vision of an
  integrated data driven management system which we will never be able
  to build?

  To summarize this point: The first paragraph lists problems that I
  take seriously. However, it does not say if and how NIM can solve
  the underlying questions. So either explain how NIM is an answer to
  the questions or remove the whole paragraph. As it stands now, a not
  so careful reader might be misled into believing that NIM is indeed
  the solution to the underlying problems - which I think is not true.

- The second paragraph talks about three major benefits, but only
  lists two. What is the third one?

- The fifth paragraph says:

    Just as MIB-I and MIB-II 
    successfully standardized well-known constructs in the SNMP data 
    model, this document will define the requirements for a more broadly 
    applicable information model.

  I do not understand this. In fact, we learned that MIB-II had to be
  broken into smaller pieces in order to evolve the definitions over
  time. So you can actually use MIB-II as an argument why a common
  global model does not work in practice.

- Motivation, 1st problem:

    1. Problem: With the exception of the Policy Framework's Core 
    Information Model [PFCIM], no high-level network configuration 
    information model exists in the IETF that is implementation-
    independent. Implementation independence yields a model that simply 
    represents network constructs and data, relationships between 
    constructs, and data constraints without regard to a particular data 
    model, protocol, or repository.  
    
  Why is this a problem? Perhaps it just shows that there was no need
  so far to have high-level network configuration information models? ;-)

- Motivation, 2nd problem:

    2. Problem: The Distributed Management Task Force's (DMTF's) CIM 
    (Common Information Model) [CIM] is not the optimal choice for a 
    network information model. CIM provides no mechanism for defining 
    class/model level constraints or for performing secure operations.

  Secure operations? This really sounds like protocol and technology,
  and I wonder how this can be a problem in a technology-independent
  information model...
 
    CIM also has dictated a naming scheme that cannot be mapped into all 
    implementations. 

  Will NIM solve that problem? Will NIM have a naming scheme that maps
  easily into all implementations? (Perhaps CIM is just too technology
  independent? ;-)

- Motivation, 3rd problem:

    3. Problem: Current modeling representations are insufficient due to 
    a lack of available tools, lack of a textual representation for ease 
    of data exchange, or the expressiveness of the representations.

  Are you talking about CIM here or about modeling representations in
  general?

    In 
    addition, for some or all of these representations, there is no 
    constraint language, no suitable class naming conventions, no 
    support for attribute level bindings, no ability to effectively 
    create implementation-specific models, no ability to model 
    associations, etc.  Unfortunately, some are also dependent on 
    proprietary or high-level graphical tools. It will be up to the 
    working group to determine what tool set would be most appropriate 
    in meeting the set of specific requirements listed below. 

  I think lack of available tools is really a poor reason. If a
  modeling language is useful and well defined, then I expect that
  people will write tools for it. A lack of tools either indicates
  that (a) there is no real interest or (b) that the specifications
  are simply too vague to implement something or (c) that people
  prefer to keep the modeling technology itself proprietary rather
  than open.

- Requirement 1:

  What are GUIDs? And why is this a real issue? I mean, even the MIB
  folks managed to deal with this. I am not saying that the scheme
  which was somewhat inherited from ASN.1 is perfect and without
  flaws. But it is simple and works as far as I can tell without
  a need to introduce new mechanisms such as GUIDs.

- Requirement 2:

  Not sure I understand this. Instance naming is very, very important
  and you cannot simply say we will try to avoid it wherever possible.
  I argue that a NIM WG needs to work hard on this issue - otherwise
  the whole thing fails. Without good and _automated_ translation into
  technology-specific data models, the whole NIM idea becomes mostly
  useless due to the cost and potential errors of manual translation.
  I think that a good translation requires instance naming schemes to
  be in place.
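
  To make the point more concrete, here is a rough Java sketch of the
  kind of automated mapping I have in mind (the names and the index
  layout are purely hypothetical and not taken from the draft):

    import java.util.List;

    public class NamingExample {
        // An information-model instance is identified by an ordered list
        // of key attribute values. Only if the model says which
        // attributes form the name, and in which order, can a tool
        // derive the technology-specific instance identifier automatically.
        static String toSnmpIndex(List<Integer> keys) {
            StringBuilder sb = new StringBuilder();
            for (Integer k : keys) {
                sb.append('.').append(k);
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            // A hypothetical interface instance with ifIndex 7.
            System.out.println("ifDescr" + toSnmpIndex(List.of(7)));
        }
    }

  Without an instance naming scheme in the information model, this
  kind of translation has to be invented by hand for every mapping.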

- Requirement 3:

  I fully agree with this requirement.

- Requirement 4:

  Not sure you will ever reach the goal of making something like this
  "completely unambiguous".

  Furthermore, I can only warn about over-specification. Constraints
  can be used (with the best intentions) to make definitions so
  inflexible that every little change in the underlying technology
  will break the management interface. I think the MIB folks have had
  some interesting experience with over-specification. Making the
  right trade-off decision here is sometimes really more of an art.
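
  To illustrate the danger, consider the following Java fragment (the
  example is entirely mine and not taken from the draft):

    public class OverSpecification {
        // Over-specified: the constraint enumerates exactly the link
        // speeds known today, so the next technology generation breaks
        // the model and thus the management interface.
        enum LinkSpeed { MBPS_10, MBPS_100 }

        // Looser: a plain value with a simple range constraint survives
        // the next generation of link speeds without a model change.
        static boolean validSpeed(long bitsPerSecond) {
            return bitsPerSecond > 0;
        }

        public static void main(String[] args) {
            System.out.println(validSpeed(1_000_000_000L)); // gigabit fits
        }
    }

  Neither extreme is right in general; that is exactly the trade-off I
  am talking about.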

- Requirement 5: 
    
  I agree that associations are important.

- Requirement 6:

  I think that the importance of inheritance is often overestimated.
  In Java, you have inheritance and interfaces, where a class can
  implement a specific interface without being derived from a parent
  class which defines this interface. I think that such a mechanism is
  much more important than inheritance. Even the GDMO object
  inheritance trees I have seen were pretty flat.
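
  To illustrate what I mean, here is a small hypothetical Java sketch
  (the class and method names are mine, not from any draft or MIB):

    // A capability expressed as an interface: any class can declare
    // that it supports resetting its counters, without being forced
    // into a particular inheritance tree.
    interface Resettable {
        void resetCounters();
    }

    // Two otherwise unrelated classes implement the same interface.
    class EthernetInterface implements Resettable {
        private long inOctets;
        public void resetCounters() { inOctets = 0; }
    }

    class IpsecTunnel implements Resettable {
        private long packetsProtected;
        public void resetCounters() { packetsProtected = 0; }
    }

    public class InterfaceExample {
        public static void main(String[] args) {
            // Both objects can be handled uniformly via the interface,
            // even though they share no common parent class.
            Resettable[] things = { new EthernetInterface(), new IpsecTunnel() };
            for (Resettable r : things) {
                r.resetCounters();
            }
        }
    }

  The common capability is captured without building a deep
  inheritance tree.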

- Requirement 7:

  I am not sure I understand what you mean by "user rights" here. Why
  should an information model have to deal with user rights?

- Requirement 8:

  Why is it important to support arrays of base types? If we need
  arrays, why restrict them to base types only?

- Requirement 9:

  I did not understand this.

- Requirement 10:

  I do not really understand why the NIM should dictate how some
  technology ships data around. I find it strange to require in the NIM
  that certain combinations of classes should be treated as a block.

- Requirement 11:

  I agree that one needs methods. That is also the reason for the NMRG
  work on adding operations to SMIng. I personally think that one must
  model constructors/destructors in order for this to be useful in
  practice.
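
  To show the kind of thing I have in mind, a tiny hypothetical Java
  sketch (names are mine and purely illustrative):

    public class OperationsExample {
        // Modeling operations explicitly: creation and deletion of
        // instances are part of the model itself, not an afterthought
        // of some protocol mapping.
        interface TunnelManager {
            int createTunnel(String remoteEndpoint);   // "constructor"
            void destroyTunnel(int tunnelId);          // "destructor"
        }
    }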

- Requirement 12:

  Not sure if subscriptions belong in the information model. I think
  the information model should only express the kinds of data.

- Requirement 13:

  I would state it even more strongly: it is necessary to develop
  automated translations into data models that guarantee compliance
  with the underlying information model.

- Requirement 14:

  Not sure how this will work in practice. My experience with agent
  capabilities is that nobody in the commercial world is really
  interested in clearly documenting what the limitations are.

- Requirement 15:

  I do agree that we need compound data types. I do not really agree
  with the example - but that is a minor point.
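
  Just to make clear what I mean by a compound data type, a trivial
  Java sketch (my own example, not the one from the draft):

    public class CompoundExample {
        // A compound type groups attributes that only make sense
        // together, e.g. an address and its prefix length.
        static final class IpPrefix {
            final String address;
            final int prefixLength;
            IpPrefix(String address, int prefixLength) {
                this.address = address;
                this.prefixLength = prefixLength;
            }
            public String toString() { return address + "/" + prefixLength; }
        }

        public static void main(String[] args) {
            System.out.println(new IpPrefix("192.0.2.0", 24));
        }
    }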

- Requirement 16:

  I am not sure I understand the requirement.

- Requirement 17:

  Very important.

- Glossary

  I am not really convinced by some of the explanations:

  A data type is only a mechanism for identifying the binary
  representation of atomic attributes?

  A NIM class is synonymous with an attribute??

  I still argue that it is impossible to define an information model
  which is not perverted by the shortcomings of a specific
  implementation. Once you agree on a language for NIM, you have your
  own set of "perverted by the shortcomings".


/js

-- 
Juergen Schoenwaelder      Technical University Braunschweig
<schoenw@ibr.cs.tu-bs.de>  Dept. Operating Systems & Computer Networks
Phone: +49 531 391 3289    Bueltenweg 74/75, 38106 Braunschweig, Germany
Fax:   +49 531 391 5936    <URL:http://www.ibr.cs.tu-bs.de/~schoenw/>