Automation Strategy

Dispelling 3 Common Network Automation Myths

Rich Martin

Director of Technical Marketing ‐ Itential

Posted on October 3, 2023

As with any journey we embark on, before we get started, we often think about what we need to begin the journey, what we may need along the way, and how long it will take us. When it comes to the network automation journey, it’s really no different.

Before network engineers even begin the automation process, they tend to start with preconceived notions that oftentimes, if acted upon, can hinder the process. To prevent that from happening, it’s important to identify and dispel a few common misconceptions currently out there and discuss how networking teams can overcome them. So, let’s address the three most common network automation myths.


Myth #1: A Single Source of Truth & Standardized Data Are Prerequisites for Meaningful Automation

Most network engineers simply don’t trust the systems that store network data because of the many failed attempts they’ve experienced trying to maintain accurate information. Why do these systems lack accurate data? Simply put, the spreadsheets and databases tracking the data are “offline”: they inform the configuration change process, but nothing in that process forces them to be updated after every change.

Secondly, the updating processes are human-centric and oftentimes managed by inexperienced engineers during maintenance windows — which typically fall between the hours of 12 AM – 5 AM — or they’re the result of emergency fixes performed on the fly without timely documentation. This lack of timely data updates erodes confidence that these systems are accurate.

This is where the role of DDI platforms comes in. DDI is a unified solution that combines three core networking elements — domain name system (DNS), dynamic host configuration protocol (DHCP), and IP address management (IPAM). These platforms serve as reservation and tracking systems for IP addresses and DNS records, which must be unique and accurate for the network to behave properly. Even so, DDI data and the actual network configurations can drift out of sync, leaving the DDI platform with incorrect data.
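The drift problem above is easy to see in code. The sketch below compares "offline" IPAM records against addresses pulled from the live network and reports any host where the two disagree. The record format, hostnames, and addresses are illustrative assumptions, not a real DDI API.

```python
# Hypothetical sketch: detect drift between "offline" IPAM records and
# the live network. Record shapes and values are illustrative only.

def find_drift(ipam_records: dict, live_addresses: dict) -> dict:
    """Return hosts whose IPAM entry disagrees with the live network."""
    drift = {}
    for host, recorded_ip in ipam_records.items():
        actual_ip = live_addresses.get(host)
        if actual_ip != recorded_ip:
            drift[host] = {"ipam": recorded_ip, "network": actual_ip}
    return drift

ipam = {"core-sw1": "10.0.0.1", "edge-rtr1": "10.0.1.1"}
# edge-rtr1 was re-addressed during an emergency fix and never documented:
live = {"core-sw1": "10.0.0.1", "edge-rtr1": "10.0.1.254"}

print(find_drift(ipam, live))
# → {'edge-rtr1': {'ipam': '10.0.1.1', 'network': '10.0.1.254'}}
```

In practice the `live` data would come from querying the devices themselves, which is exactly why the running network, not the offline copy, is the more trustworthy source.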

Some tools were built to put automation on top of a specific Source of Truth (SoT), tightly coupling automation with SoT data within that database. However, there are other sources of truth within the network that the automation code doesn’t operate on or integrate with, leading to incomplete or incorrect data and automation that is limited to automating tasks instead of an entire process. I believe the real SoT is the configuration of the network itself — not an offline copy of the system data that may or may not reflect updated information.

Source of Truth is important to the automation journey. But trying to have a single source of truth can quickly lead to inaccuracy. So how do you decide when to apply SoT and when not to apply it?

First, it’s always a good idea to apply a source of truth for parts of the network that aren’t programmable, for example, port assignments.

Second, some programmable network infrastructure is the SoT, for example, anything in the cloud and SD-WAN. Amazon Web Services (AWS) is the source of truth for AWS. An SD-WAN controller is the source of truth for SD-WAN. These systems are programmable and always accurate, which means you don’t need an offline copy. Copies are the source of discrepancies that drive errors in automation. Multiple sources of truth and “fresh” data will enable better automation.
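One way to picture "multiple sources of truth" is a dispatcher that routes each query to whichever system is authoritative for that part of the network. The lookup functions below are stand-ins for real API calls (to a cloud provider, an SD-WAN controller, or a static database for non-programmable gear); all names are assumptions for illustration.

```python
# Hypothetical sketch: route each lookup to the authoritative system
# for that domain instead of one offline database. Stub functions stand
# in for real API calls.

def lookup_cloud(query):      # would call the cloud provider's API
    return f"cloud:{query}"

def lookup_sdwan(query):      # would call the SD-WAN controller
    return f"sdwan:{query}"

def lookup_static_db(query):  # non-programmable gear: offline record is all we have
    return f"db:{query}"

SOURCES = {
    "cloud": lookup_cloud,
    "sdwan": lookup_sdwan,
    "physical": lookup_static_db,
}

def source_of_truth(domain: str, query: str) -> str:
    """Dispatch to the authoritative system for the given domain."""
    return SOURCES[domain](query)

print(source_of_truth("cloud", "vpc-123"))  # fresh data straight from the platform
```

The point of the design is that only the non-programmable parts of the network need an offline record; everything programmable is queried live.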

Learn more about why you need sources of truth, not a single Source of Truth, for network automation in this blog.


Myth #2: Network Scripts as a Strategy

When network engineers identify activities they want to automate, the natural path is to turn to network “scripting,” especially since many don’t consider themselves developers. This means leveraging simple tools to automate discrete, individual network changes. Two tools have become the go-to for network scripting — Python and Ansible.

Python, which has been around since the early 1990s, has become the default programming language for network automation and has many network-friendly libraries. Ansible has also become a crowd favorite for two reasons: first, it simplifies automation by deliberately limiting functionality and using YAML as a description language; second, it has broad support for the command line interfaces (CLIs) of most network vendors.

However, regardless of the tooling you choose, the scripts approach has limitations. When network engineers get started with automation, it’s usually to save time and/or effort due to a growing backlog of repetitive tasks. Tools like Python and Ansible are usually used to turn something like a repetitive port turn-up into a single command, helping save your fingers and your sanity.
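A typical first script looks something like the sketch below: a function that renders the config lines for a standard access-port turn-up so the whole task becomes one call. It only generates the commands; in practice a library such as Netmiko or an Ansible playbook would push them to the device. Interface names, VLAN numbers, and the command set are illustrative assumptions.

```python
# Hypothetical sketch of a port turn-up script: render the CLI lines
# for a standard access port. Pushing them to a device (e.g. via
# Netmiko or Ansible) is left out; values are illustrative.

def port_turnup_commands(interface: str, vlan: int, description: str) -> list[str]:
    """Render the config lines for a standard access-port turn-up."""
    return [
        f"interface {interface}",
        f" description {description}",
        " switchport mode access",
        f" switchport access vlan {vlan}",
        " no shutdown",
    ]

for line in port_turnup_commands("GigabitEthernet1/0/12", 110, "user-port"):
    print(line)
```

This is the sweet spot for scripts: one repetitive, self-contained task reduced to a single call.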

But over time, as your view widens, relying on scripts alone makes it difficult to scale the impact of your automation work. Ansible tries to be simpler than writing code, but this comes at the expense of some serious limitations with respect to integration and scale. For example, if you’re stringing multiple playbooks together and exchanging and manipulating data between them, custom code is required, which brings you back to learning Python. But on the flip side, trying to build end-to-end process orchestrations with Python alone will require much greater investment in development time, new skills, and code management than most network engineers are aiming for.
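The data-exchange problem described above is concrete once you try to chain steps: each step consumes the previous step's output, so glue code has to parse and reshape data between them. The sketch below uses illustrative stub functions to show that pattern; the step names and values are assumptions, not any real product's workflow.

```python
# Hypothetical sketch of chaining tasks into a process: each step needs
# the previous step's output, which is where standalone playbooks or
# scripts start to strain. Step functions are illustrative stubs.

def allocate_ip(hostname: str) -> dict:
    # Step 1: e.g. reserve an address in IPAM
    return {"hostname": hostname, "ip": "10.0.5.10"}

def render_config(ctx: dict) -> dict:
    # Step 2: turn step 1's output into device config
    ctx["config"] = f"ip address {ctx['ip']} 255.255.255.0"
    return ctx

def push_config(ctx: dict) -> dict:
    # Step 3: would push to the device; here we just record the result
    ctx["status"] = "applied"
    return ctx

def run_workflow(hostname: str) -> dict:
    ctx = allocate_ip(hostname)   # glue code passes data between steps
    ctx = render_config(ctx)
    return push_config(ctx)

print(run_workflow("edge-rtr1")["status"])  # → applied
```

Writing and maintaining this glue layer, plus the error handling, retries, and auditing a real process needs, is exactly the development investment the scripts-only approach underestimates.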

That’s where the scripts-only strategy gets much more complex: as you evolve beyond task-based scripts, the necessity of integrations, access control, authorization, and auditing of your scripts becomes apparent. When you rely solely on scripts and take a DIY approach, you might wake up one day to find that in addition to being a network engineer, you’re also a developer and a system administrator: maintaining a code base, learning new tools and skills, and installing and updating applications to manage, enhance, and securely share your scripts.

So, what starts out as a simple way to save time and effort can sometimes balloon out of control. Instead of trying to custom build key features around your network automation scripts by writing code or using other applications, you could consider a platform — one that can take your scripts and enable these features automatically. This approach helps you get the time-saving benefits that automation promises, allowing you to focus on writing more and better scripts to automate core network changes instead of getting stuck on surrounding (but needed) features.

For a great discussion about how to evolve and make progress past individual scripting, watch this video from NetDevOps Days London.


Myth #3: Mapping & Modeling the Network Before Automating: If I Can’t See It, I Can’t Automate It?

Oftentimes, network engineers believe modeling and/or mapping the entire network is a prerequisite before beginning the automation journey. However, this isn’t a feasible plan, especially when we’re talking about larger networks with many devices.

Why isn’t mapping the network feasible?

What many don’t realize is that the process of completely mapping an entire network can take several months. When mapping the network, changes are constant, resulting in a process that never really ends before automation can begin. Additionally, requiring modeling of different network devices as a prerequisite to automation comes with some severe downsides.

First, your network automation software vendor must support a particular network vendor, model, and operating system version in their application before any automation can be done. So right from the start, network teams face a choice: restrict their purchases to what the automation software supports, or buy equipment that hasn’t been modeled and simply go without automation until the vendor supports it.

Also, automation vendors who use modeling as the basis for automation must create models for every CLI command and feature supported in the OS. This requires time and resources, which forces vendors who model this way to support only a very limited number of vendors, models, and operating systems.

While mapping and modeling are important to the automation journey, they should not be viewed as prerequisites, simply because doing so can waste too much time. Rather, both mapping and modeling should be used to support automation.

At the end of the day, we see more enterprises embracing network automation because of the efficiencies it delivers. But if you’re going to automate your infrastructure, your automation solution will need to gather authoritative information using multiple sources of truth.

With today’s programmable networks, relying on a single source of truth is based on a flawed assumption that we can always have a synchronized database. With network automation, organizations can adopt a distributed source of truth solution by enabling the multiple systems of record, and their collective data, to act as the source of truth.

To learn more about how you can overcome these myths and start to see some real progress with your network automation efforts, check out this on-demand session from NetDevOps Days London.

This article was originally published on APM Digest.

Evolving from Individual Automation to Team Automation @ NetDevOps Days London 2023

Rich Martin is the Director of Technical Marketing at Itential. Previously, Rich worked at several networking vendors as both a Pre-Sales Systems Engineer and a Systems Engineering Manager, but started his career with a background in software development and Linux. He has a passion for automation in the networking domain, and at Itential he helps networking teams get started quickly and move forward successfully on their network automation journey.
