We at Itential tout modeling all the time. We use it in our integrations with the Cisco Network Services Orchestrator (NSO) when working with its YANG models and Device Templates. We often present it as the answer to the limited scalability of a scripting approach to automation, but we rarely talk about the reality of where the industry stands right now when it comes to modeling.
A recent blog post I read discusses this in a thought-provoking way that deserves some consideration.
Everybody seems to agree that modeling is the future. But you will find varying degrees of agreement on whether it is useful right now, and even more variance on just how we should model.
In the referenced blog, Tom Nolle puts forth two extremes of modeling:
- A “light touch” approach that connects functionally complete components into a provisioned service in much the same way we connect things today, with the owner of each element “owning” the implementation of that component…basically a continuation of the old school OSS/BSS/TMF point of view.
- And, a “complete composition” approach where the service is modeled as a single entity. This is the foundation of an “intent-based” approach to service management, by abstracting the granular details of the underlying resources and focusing on the service itself…not the resources supporting it.
I like to talk about reality versus hype when it comes to things like this. In a perfect world, we all move to “complete composition” and build intent-based services where an order becomes, “I need a secure 1 Gbps connection from my Houston office to my Atlanta office,” and that intent is translated into the technical bits by the modeling that was created. We don’t live in a perfect world, but that doesn’t mean we need to try to force-fit network management into the old school OSS way of thinking either. The reality is somewhere between the two, of course.
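To make the intent-to-configuration idea concrete, here is a minimal sketch of what that translation step might look like. All of the names here (`ConnectivityIntent`, `translate`, the parameter fields) are illustrative assumptions, not any vendor’s actual API: the point is only that the customer states the *what* and the model expands it into the *how*.

```python
from dataclasses import dataclass

# Hypothetical intent object -- field names are illustrative, not a real schema.
@dataclass
class ConnectivityIntent:
    site_a: str
    site_b: str
    bandwidth_mbps: int
    secure: bool

def translate(intent: ConnectivityIntent) -> dict:
    """Expand a service-level intent into the technical parameters a
    lower layer (an orchestrator, for example) would actually provision."""
    return {
        "endpoints": [intent.site_a, intent.site_b],
        "cir_mbps": intent.bandwidth_mbps,  # committed information rate
        "encryption": "ipsec" if intent.secure else "none",
    }

# The order from the text: a secure 1 Gbps Houston-to-Atlanta connection.
order = ConnectivityIntent("Houston", "Atlanta", 1000, secure=True)
print(translate(order))
```

The customer never names a router, a circuit, or a tunnel type; those details are derived by the model, which is exactly the abstraction “complete composition” is after.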
The bottom line is that we need to move away from a resource management approach. The old school way of provisioning and monitoring individual devices, and then piecing together the customers and services impacted by an event involving any one (or group) of those devices, has resulted in an almost unmanageable mess in our current OSS systems. It is expensive, it is hard to maintain, and the approach has failed.
We must move to a service-based management approach where an event impacting a service causes a dynamic change in that service to better manage its lifecycle. This could mean rerouting of that service around problem resources, or even moving virtual components from one datacenter to another. All of this is possible with the “complete composition” approach.
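A tiny sketch of that service-based reaction, under stated assumptions: the `Service` class, its `path` of resources, and the `on_resource_failure` handler are all hypothetical names invented for illustration. The idea is simply that the event is handled at the service level, which reroutes itself, rather than at the level of the individual failed box.

```python
# Hypothetical service model reacting to a resource event -- illustrative only.
class Service:
    def __init__(self, name, path):
        self.name = name
        self.path = list(path)  # ordered list of resources carrying the service

    def on_resource_failure(self, failed, alternates):
        """Lifecycle management: reroute the service around a failed resource
        by swapping in a healthy alternate, if one is known."""
        if failed in self.path and failed in alternates:
            self.path[self.path.index(failed)] = alternates[failed]
        return self.path

svc = Service("hou-atl-1g", ["pe-hou", "core-dal", "pe-atl"])
svc.on_resource_failure("core-dal", {"core-dal": "core-mem"})
print(svc.path)  # ['pe-hou', 'core-mem', 'pe-atl']
```

In a real system the alternate would come from path computation rather than a lookup table, and the same hook could just as easily relocate a virtual component to another datacenter; the sketch only shows where that logic lives.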
This all comes back to the last point of Nolle’s blog post. How does this get done? In the old school approach, each vendor created their own way of doing things. This is largely why the service provider networks and tools involved grew out of control. You needed specific tools to manage specific vendors, and multi-vendor networks were (and are) a nightmare to manage. Even today, many providers still stick to single vendors for specific segments of their network when possible.
The trap is this: how do we implement a standard way of modeling things that works for any vendor and any service, without locking the approach down to the point where we limit our flexibility? If history is any indicator, a vendor won’t be the one to do it, because they all have their own bottom line in mind (as they should) and tend to implement things that “protect their base” instead of things that become widely adopted across the industry. There are industry working groups and standards bodies looking at this, and hopefully they drive vendors to adopt a more standard approach. The next few years should be exciting in this space.