The 4 Core Principles of a Successful Network Automation Strategy
“A man who does not plan long ahead will find trouble at his door.” – Confucius
As the network evolves into a more dynamic environment due to the introduction of new concepts such as virtualization and software-defined everything, it is becoming imperative that operators automate as much as possible. Whether it be software-defined WAN, data center, or networking in general, it is clear that the age of network programmability is upon us. So, what does this mean for network automation? How does it evolve to support new technologies while still addressing integration into existing legacy ecosystems? Network automation can provide a high return on investment if a proper strategy is put in place, but how?
A successful strategy requires using the right tool for the right job rather than force-fitting incomplete solutions into achieving outcomes they were never built to deliver. The approach should also involve identifying the right skill sets for the job, along with the right infrastructure and tooling to support them. Let’s explore the four core principles of a sound automation strategy and how they help avoid ineffective automation efforts.
Stop swivel chairing, start integrating
Most network operators begin their automation journey focused on very specific activities, such as the manual execution of CLI commands on network devices. Operations teams start by copying and pasting commands from a file, eventually move to scripts written in languages like Perl, Python, or Bash, and claim automation success. The problem with this approach is that it overlooks the end-to-end process, which entails much more than typing CLI commands on a device. Depending on the type of request, other functions are involved, such as assigning IP addresses, reserving and assigning physical and logical ports, completing change management processes, and performing security-related actions. These additional tasks, part of a larger procedure, require several team members to log into multiple disparate systems in order to complete the incoming request.
This highlights the first core principle of an effective automation strategy – seamless integration. Each of the activities mentioned above typically takes place within a system that exposes its capabilities via APIs consumable by external technologies. Such programmable interfaces allow for automation of the entire process, not just the manual or scripted entry of CLI commands. For the average network maintenance activity, command execution via CLI represents only around 10-20% of the overall effort, so it quickly becomes obvious why ignoring the remaining 80-90% leads to an unacceptable return on the automation effort. Integration is the solution, but it is only one of the foundational elements required for a successful automation strategy.
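To make the integration point concrete, here is a minimal sketch of what an end-to-end, API-driven flow looks like compared to pushing CLI commands alone. All system names, endpoints, and payloads are hypothetical; real implementations would call an ITSM, an IPAM, and a device API (RESTCONF/NETCONF or a library such as Netmiko), but the calls are simulated here so the example is self-contained.

```python
# Illustrative sketch: orchestrating the full change process via APIs,
# not just device CLI. Every function below simulates a call to a
# hypothetical external system.

def open_change_request(description):
    # Would POST to a change-management (ITSM) API in practice.
    return {"id": "CHG-1001", "status": "approved"}

def allocate_ip(subnet):
    # Would call an IPAM system's API to reserve the next free address.
    return "10.20.30.5"

def push_device_config(device, commands):
    # Would push config via a device API or automation library.
    return {"device": device, "applied": commands, "ok": True}

def notify(channel, message):
    # Would post to chat or email; simulated here.
    return f"[{channel}] {message}"

def provision_interface(device, subnet):
    """End-to-end flow: change ticket -> IPAM -> device -> notification."""
    change = open_change_request(f"Provision interface on {device}")
    ip = allocate_ip(subnet)
    result = push_device_config(device, [f"ip address {ip}/24"])
    notice = notify("netops", f"{change['id']} complete on {device}")
    return {"change": change["id"], "ip": ip,
            "ok": result["ok"], "notice": notice}

print(provision_interface("edge-router-1", "10.20.30.0/24"))
```

Note that the device-facing step is a single function among four: automating only `push_device_config` would leave most of the process manual, which is exactly the 10-20% problem described above.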
Understand the sources of truth
The other side of the equation is the data itself. Traditionally, network management tools have required copies of data to be housed within their own databases. The vendors providing these tools adopted this architecture in an effort to implant themselves as a necessary component of the network ecosystem, making it harder for operators to adopt and implement alternatives. As operators use several different network management tools to drive configuration changes, data gets duplicated across multiple systems without any synchronization between them, forcing network engineers to rely on multiple competing sources of truth.
This highlights core principle number two – an ultimate source of truth. Every piece of data should have a single, reliable, and unquestioned source. In practice, data will remain distributed across multiple systems, with each system owning the data it is best suited to manage. The ideal automation platform should use its integrations with these distributed data sources to provide a federated set of data for use in automation activities.
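The federation idea can be sketched in a few lines: each system remains authoritative for its own fields, and the platform composes a unified record on demand rather than copying and synchronizing data. The system names, devices, and fields below are all hypothetical stand-ins for real IPAM, inventory, and monitoring backends.

```python
# Illustrative sketch of data federation: no duplication, no sync jobs.
# Each (simulated) system owns specific attributes of a device record.

IPAM = {"edge-router-1": {"mgmt_ip": "10.0.0.1"}}
INVENTORY = {"edge-router-1": {"model": "XR-9000", "site": "NYC-1"}}
MONITORING = {"edge-router-1": {"status": "up"}}

# Ownership map: which system is the source of truth for which field.
OWNERSHIP = {
    "mgmt_ip": IPAM,
    "model": INVENTORY,
    "site": INVENTORY,
    "status": MONITORING,
}

def federated_view(device):
    """Compose a single record by querying each owning system."""
    record = {"device": device}
    for field, system in OWNERSHIP.items():
        record[field] = system.get(device, {}).get(field)
    return record

print(federated_view("edge-router-1"))
```

The key design choice is the ownership map: because every field has exactly one owning system, there is never a question of which copy is correct.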
Automation: Take your hands off of my network!
Now that we have an integrated and federated platform that knows about the data and capabilities of the tools managing the network, it is essential to leverage it to introduce effective network automation. As a starting point, the platform should provide a low-code environment that exposes these datasets and network management actions, allowing easy creation of new automations. An easy-to-use platform with built-in network intelligence enables rapid creation of automations with little to no coding skill. This is extremely important because operators rarely have network engineers with software development skills, or developers with deep networking knowledge. Such unicorns are very hard to find; hence, it is critical that organizations choose a comprehensive automation solution that allows cross-functional team members – network engineers, operations engineers, and DevOps teams – to participate in and drive automations.
This is the third core principle of an effective automation strategy – an automation platform. The platform must integrate all tools, federate data and system capabilities, and allow these artifacts to be combined into an automation process. That process should include change request management, IP address management, physical and logical resource reservation and assignment, command execution on network devices, and status notification through various channels. All of these activities should occur in an automated, zero-touch fashion, with manual intervention only during process or system fallouts. Finally, the platform should enable different network-facing teams to collaborate via a simple, intuitive interface for creating network automations.
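The "zero-touch with manual intervention only on fallout" model can be sketched as a tiny workflow engine: the steps are declared as data (the low-code part), the engine runs them in order, and only a failed step is routed to a human. Step names and their implementations are hypothetical; real steps would call the integrated systems described above.

```python
# Illustrative sketch of a declarative workflow engine with fallout
# handling. Steps are data; the engine is generic.

def run_workflow(steps, actions):
    """Run each step zero-touch; stop and flag fallout on failure."""
    completed = []
    for step in steps:
        ok = actions[step["action"]](**step.get("params", {}))
        if not ok:
            # Fallout: surface the failed step for manual intervention.
            return {"completed": completed, "fallout": step["action"]}
        completed.append(step["action"])
    return {"completed": completed, "fallout": None}

# Simulated step implementations (real ones would call system APIs).
actions = {
    "open_change": lambda **kw: True,
    "reserve_port": lambda device=None: True,
    "push_config": lambda device=None: True,
    "notify": lambda channel=None: True,
}

# The workflow itself is plain data - the kind of artifact a low-code
# interface would let cross-functional teams assemble and edit.
workflow = [
    {"action": "open_change"},
    {"action": "reserve_port", "params": {"device": "edge-router-1"}},
    {"action": "push_config", "params": {"device": "edge-router-1"}},
    {"action": "notify", "params": {"channel": "netops"}},
]

print(run_workflow(workflow, actions))
```

Because the workflow is just data, it can be versioned, reviewed, and modified without touching the engine, which also sets up the reusability principle discussed next.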
Reusability: Bet you can’t do that again
Finally, core principle number four is that all of the components involved – from data and capabilities to the automation processes created – must be reusable and easily maintainable. The ideal automation platform will provide version control of the automation processes it houses, along with a low-code environment for maintaining and enhancing existing automations. Such a feature set allows automations to be repurposed as new technologies are introduced into the networking ecosystem.
Everyone is familiar with the popular adage attributed to Benjamin Franklin: “Failing to plan is planning to fail.” It certainly rings true for network automation. With a commitment to advance planning that addresses seamless integration with modern and legacy systems, federation of data across distributed sources of truth, and flexible, reusable automations that can be easily modified in a low-code environment, an automated network is achievable in a timeframe that beats expectations and delivers a real return on investment.
Article originally published on IT Ops Times.