
RE: NIM requirements/conventions (was Re: Methods in the NIM requirements)


Comments inline:

> > >   1) you still didn't tell me why an
> > >      information model needs to be
> > >      concerned with APIs. Isn't that
> > >      a problem in building the
> > >      data model mapping?
> >
> > I am trying to figure out where we are miscommunicating. In the
> > requirements spec, the text says:
> >
> > "This document only describes the requirements for the definition of
> > common attributes, mechanisms and conventions for their organization
> > into reusable data structures, and mechanisms for representing the
> > attribute's relationships to other attributes and structures.
> So my first point of disagreement is that this statement is once again
> focusing on attributes (e.g., "representing the attribute's
> relationships..."). An information model is much richer than that, and
> must focus on the modeling of objects. This is not limited to
> attributes, and in fact attributes may not be the primary focus of
> objects being modeled.
When I copied the quote, I noticed this as well :)

> > The overall goal is to allow these attributes to be equally
> > applicable to protocols, programmatic interfaces, and repositories."
> This I completely disagree with. This statement merges the development
> of an information model with a mapping to one or more data models.
> These are fundamentally different efforts, and combining them into one
> is not a good idea.
>
> In other words, if the group is to focus on an information model, then
> it should focus on the information model independent of any mapping
> issues. Protocols and data structures are a function of which data
> store is chosen for the mapping, and APIs are a means to store and
> retrieve repository-specific information (e.g., the LDAP C APIs don't
> work very well against an RDBMS ;-) ).
I think we are crossing wires on two specific issues.

1. In many of your messages you refer to repositories. I think repositories
are interesting. However, for the IETF, the interfaces to embedded systems
are much more interesting than the interfaces to the repository. I can use
either a repository or a direct interface to the system to manage the system
as I described in my previous message on paradigms. But, to manage a system,
a repository may or may not be needed depending on the paradigm. A
repository is one potential mapping. But, as you say, a repository and its
interface are but one application, and LDAP is but one mapping.

2. For me, the primary value of the modeling activity is to define a
consistent set of management interfaces to embedded systems. This is what
the COPS folks, the SNMPCONF folks and even the traditional SNMP folks need
right now. What I need to model are all things I wish to expose. Now, there
is this fuzzy line between the terms behavior and interface, because when I
change the configuration of a system, I am affecting the behavior. However,
if I wanted to model the system behavior, I might include interfaces that
are not pure management interfaces. For example, the interface between
routing and RSVP for detecting route flaps is an interface but not a
management interface. In a pure system model, I would include this
interface. In a management model, I would ignore it because none of my
management systems make use of it.
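The distinction can be sketched in code. This is a purely hypothetical class (invented names, not from any IETF or DMTF model) separating the interfaces a management model would expose from an internal interface a pure system model would also capture:

```python
# Hypothetical sketch -- class and method names are invented for
# illustration, not taken from any standard model.

class Router:
    # --- management interfaces: exposed in a management model ---------
    def get_if_oper_status(self, ifindex: int) -> str:
        """Readable by a manager (an SNMP-style get)."""
        return "up"

    def set_if_admin_status(self, ifindex: int, status: str) -> None:
        """Writable by a manager; setting it changes system behavior."""
        self._admin = status

    # --- internal interface: belongs in a system model, but a
    # --- management model would omit it, since no management system
    # --- makes use of it.
    def _notify_rsvp_of_route_flap(self, prefix: str) -> None:
        """Routing -> RSVP coupling for route-flap detection."""
        pass
```

The point is only where the model draws its boundary: the first two interfaces are part of what is exposed for management; the third is real system behavior that a management model deliberately ignores.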

> > My view here is that the model is not describing the system per se.
> > Rather, it is describing the interfaces for interacting with
> > (determining the status and changing the operational characteristics
> > of) the system.
> Again, I completely disagree. The model should represent the system,
> and mappings should describe how to manipulate the objects
> representing the system and bind the model to application-specific
> uses.
See my comments above.

> > Now, there are a number of ways to interact with a system. You are
> > correct in asserting that the process of converting an information
> > model to a programmatic interface (API) is a mapping. The fact is
> > that methods map very nicely to programmatic interfaces. The issue
> > is not with the mapping. Rather, the issue is with the range of
> > paradigms that we would like to support with this model. If we bias
> > the model strongly towards deployment through APIs, we would be
> > inclined to use methods pervasively because programmatic interfaces
> > are defined in terms of function calls. If we were to bias the model
> > strongly towards repository-oriented applications, we would strongly
> > favor an attribute model. If we considered the management interfaces
> > provided through protocols such as SNMP, we would again be inclined
> > to lean towards an attribute-centric model.
> We disagree here because the information model should use whatever
> tools it needs to properly describe a managed object and its
> interaction with the managed environment. This is why you can't make
> statements about when to use methods - it's really a function of what
> you are modeling.
What I said was that this bias tends to be the inclination of the modelers.
If you consider the DMTF networks and policy models, both are very attribute
centric. I would be hard pressed to name any methods in either model (except
the derived ones). By coincidence, the majority of modelers in both
activities had a decidedly repository-oriented set of implementations in
mind. In contrast, the DMTF systems and devices models have many methods in
them and, coincidentally, are strongly influenced by folks who are defining
programmatic interfaces to their systems. Now, this is not necessarily
intentional. It may well be an unconscious bias.
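The bias can be made concrete with a toy sketch. These two classes are hypothetical (not from CIM or any IETF model); they show the same "reboot" capability modeled method-centrically, as an API designer would, and attribute-centrically, as a repository- or SNMP-minded modeler would:

```python
# Hypothetical illustration -- not from CIM, the DMTF, or any IETF model.
# The same "reboot" capability, modeled two ways.

class DeviceMethodCentric:
    """Method-centric style: natural for programmatic interfaces,
    where operations are function calls."""
    def reboot(self, force: bool = False) -> bool:
        # The operation, its parameter, and its result are explicit in
        # the signature; mapping to an API is nearly mechanical.
        return True

class DeviceAttributeCentric:
    """Attribute-centric style: natural for a repository or SNMP,
    where the only verbs are get/set on named attributes."""
    def __init__(self):
        self.admin_status = "up"   # writable attribute

    def set(self, name: str, value: str) -> None:
        # Writing a trigger value *implies* the behavior; the semantics
        # live in prose beside the model, not in a signature.
        setattr(self, name, value)
        if name == "admin_status" and value == "reboot":
            # side effect the attribute write is documented to cause
            self.admin_status = "up"
```

Neither style is wrong; the sketch just shows how the modeler's target paradigm pulls the model toward methods or toward attributes.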

This anecdotal experience suggests that the proper modeling strategy has
more to do with who is doing the modeling (as opposed to there being one
true way). The debates over the best way to model a specific feature and
the diversity of opinions on how good a given model is reinforce this
perspective.

Lastly, if you can't specify when to use methods, you not only open yourself
up to inconsistencies in the model but also to inconsistencies in the
mappings. I happen to agree with Juergen that we must have mapping
guidelines. In addition, I don't think you can have mapping guidelines
without modeling guidelines.

> Saying that you aren't going to use methods because it makes the model
> map more easily to a protocol or data store will ultimately ruin the
> model, because it will ensure that the model is not generally
> applicable. It is the job of the mapping to map to limitations or
> extra features of a data store and its protocol(s).

To me this is not a question of the value of methods or the complexity of
the mappings. In the past, I have relied on methods frequently. The real
question is the level of granularity with which an interface is specified.
If it is specified as a system with very granular descriptions of what
should happen, in what order, what all the potential results are, etc., you
are far closer to defining an implementation than an interface. The question
is: How black is the box (to what degree should an interface model specify
an implementation)? This has always been a problem with standardizing
management interfaces. There is an inherent tradeoff between making the
management system more explicit and forcing a specific implementation
(stifling diversity and innovation). I could safely argue that the value of
methods is in the explicit specification of an interface, thereby resulting
in a more precise implementation of the method (irrespective of the mapping
of the interface to a given protocol or API). With the more explicit
interface definition come the inherent mapping issues. I described one such
mapping issue with my description of inconsistent transactional semantics in
different management paradigms.
> > Historically, we have tried to be implementation neutral in CIM.
> > However, time after time specific tradeoffs have been made in the
> > model to avoid problems with specific mappings. I can't count the
> > number of times that someone wanted to make a change to CIM because
> > it was problematic for LDAP. Don't get me wrong. I agree that we
> > should be sensitive to mapping issues. In fact, what the
> > requirements doc is saying is that in NIM we should be sensitive to
> > multiple mappings.
> And so far we have kept CIM implementation neutral. This is, in fact,
> why the LDAP Mapping WG was formed - to split off as a separate
> process the mapping of CIM information into a directory.
>
> Bottom line is that the model should not be biased towards any one
> specific protocol or repository.

I provided the quote from the document to make this very point. Here, you
seem to be agreeing with its intent. We may be closer than I thought.
> > >   2) I do understand push and pull. I
> > >      don't understand what they have
> > >      to do with the info model.
> >
> > LDAP, COPS, and SNMP all have various strengths and weaknesses. Some
> > weaknesses are a result of the protocol and can be addressed with
> > protocol enhancements. Other weaknesses are a result of the paradigm
> > under which the protocols operate. My earlier reference to
> > transactional semantics is a specific example of weaknesses that, at
> > least in part, are a result of the paradigm. In some cases,
> > contradictions between the info model and the weakness of the
> > paradigm prevent a viable mapping.
> I don't understand this at all. Transactional semantics isn't in the
> information model because the ability to support transactional and
> referential integrity is repository and protocol-specific. This is
> the job of a mapping. If a repository, like a directory, can't
> support these features, then it is the job of the mapping to support
> it in some other way (e.g., by using middleware in conjunction with
> the directory to support the desired semantics).
This comment contradicts earlier statements you made. There are
clearly elements that you want to model that are repository and
protocol-specific. Events and User Access rights are both in the
requirements doc. I believe you have endorsed including event notification
in the information model. Yet, in your last comment you argue that
transactional semantics are repository and protocol-specific and therefore
should not be in the model. Your previous statement also argues for a
strategy that restricts the model to those features that are commonly
available in a cross-section of protocols (something you argue against later
in your last message).

I also get the distinct impression that Andrea disagrees with you. One of
Andrea's arguments for methods seems to be to encapsulate a behavior into a
transaction (the reference to "go"):

"<Andrea> I do not understand the argument of having ONLY attributes in a
class definition and then hoping that some behavior happens.  Is the goal to
define an "object's" attributes and a "do something" bit?  IE, when I set
the attributes and then hit "go", the right stuff happens?  This seems a
formula for disaster and proprietary/unpredictable behavior."

If transactional semantics are not part of the model, the benefits of
methods are greatly diminished. On the other hand, if they are part of the
model, then my earlier example (and other examples I have not raised yet)
raises some serious mapping issues that may undermine the overall benefits
of the information model.
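A toy sketch of the transactional-semantics problem (all names invented for illustration). A method can make the transaction boundary explicit; a sequence of independent attribute writes, as in an SNMP- or LDAP-style mapping, cannot, so a partial failure leaves the object half-configured:

```python
# Hypothetical sketch of inconsistent transactional semantics across
# paradigms. Class and attribute names are invented, not from a standard.

class Interface:
    def __init__(self):
        self.address = "0.0.0.0"
        self.mask = "0.0.0.0"
        self.admin_status = "down"

    # Method-style: the transaction boundary is explicit in the model.
    # Either all three values are applied, or none are.
    def reconfigure(self, address, mask, admin_status):
        old = (self.address, self.mask, self.admin_status)
        try:
            self.address, self.mask, self.admin_status = (
                address, mask, admin_status)
        except Exception:
            # roll back to the previous consistent state
            self.address, self.mask, self.admin_status = old
            raise

# Attribute-style: three independent sets, as a per-attribute protocol
# would issue them. If the second write fails, the object is left
# half-configured -- the model said nothing about atomicity, so the
# mapping has nothing to enforce.
def attribute_style_update(iface, address, mask, admin_status):
    iface.address = address          # write #1
    iface.mask = mask                # write #2 may fail independently
    iface.admin_status = admin_status  # write #3
```

This is exactly the gap the "go" attribute tries to paper over: without transactional semantics in the model, the atomicity of the attribute-style update depends entirely on the mapping.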

> > So we have to ask ourselves a question. Do we assume that the
> > protocols (and paradigms) will be altered to fit the ideal info
> > model, or do we adjust the info model to optimize the mappings to
> > the broadest cross-section of paradigms and
> > protocols/repositories/programmatic interfaces? If we claim the
> > former, then the additional requirement I am suggesting is
> > uninteresting, because we will assume a specific paradigm for the
> > model and change the protocols or restrict the set of protocols to
> > those that can conform to the paradigm defined in the model. If we
> > agree on the latter, then we need some guidelines for modeling that
> > take the issues associated with the various paradigms into
> > consideration.
> We do neither.
>
> The first approach is not doable because one can't instantiate an
> information model. The information model represents objects and their
> inter-relationships, not protocols and paradigms.
>
> The second approach is just as bad because:
>   1) protocols, repositories and APIs define mappings, not info models
>   2) by having a mapping drive an information model, the type of data
>      and the relationships between different objects will be dictated
>      not by the system that you are modeling, but by the protocol or
>      repository that you are using. (I see no relationship between
>      APIs and the info model).
>
> What we should do is agree on what it is that we want to model, and
> model it. We can then investigate a set of mappings to specific data
> models. But the mappings come after the info model is done, not
> before.
I either completely disagree or completely misunderstand what you are
saying. There are many examples of portions of existing models that are
impossible to map. I have cited a number of them. The fact is that different
protocols have different capabilities. If they all had the same
capabilities, we wouldn't have so many. Hence, the features and limitations
of a protocol determine the subset of viable mappings in the model that are
available. If there is an event/notification mechanism in the model and the
model is mapped to LDAP, then that portion of the model is unusable as long
as LDAP does not support event notification. Again, you have two choices:

1. limit the model to those mechanisms that are generally available and
easily (algorithmically) mapped, and expand the model as the protocols
evolve; or
2. construct a model that is unconcerned with specific mappings,
recognizing that portions of the model will be unusable in certain
instances until the protocol or mapping has evolved.
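The second choice can be sketched mechanically. In this hypothetical sketch (the capability lists are illustrative simplifications, not authoritative protocol descriptions), each mapping simply drops the model features its protocol cannot carry:

```python
# Hypothetical sketch: which portions of a model survive a mapping,
# given a protocol's capabilities. The capability sets below are
# illustrative simplifications, not authoritative protocol facts.

PROTOCOL_CAPS = {
    "SNMP": {"get", "set", "notification"},
    "LDAP": {"get", "set"},  # no event notification, per the argument above
    "COPS": {"get", "set", "notification"},
}

# Model features and the protocol capability each one requires.
MODEL_FEATURES = {
    "attribute read":  "get",
    "attribute write": "set",
    "event delivery":  "notification",
}

def viable_subset(protocol: str) -> set:
    """Return the model features usable through the given protocol."""
    caps = PROTOCOL_CAPS[protocol]
    return {feat for feat, needed in MODEL_FEATURES.items() if needed in caps}
```

Under these assumptions, "event delivery" simply drops out of the LDAP mapping until the protocol (or the mapping, via middleware) evolves, while the attribute read/write portions of the model remain usable everywhere.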