AI & AIOps

From Scripts to Systems: Why The AI Agent Conversation Feels Different This Time

William Collins

Director of Technical Evangelism ‐ Itential

January 28, 2026

If you’ve spent any time in network operations over the last decade, you’ve seen the pattern.

First it was: “How do we stop doing this one CLI thing by hand?”
Then it was: “Okay, how do we automate this at scale?”
Then it turned into: “Now we need orchestration, because this isn’t one task anymore.”

And just when most teams felt like they had a handle on that, the conversation shifted again.

Now it’s: “How do I build systems that can reason about what needs to happen… and then do it safely?”

That’s the new question. And it’s not just a bigger question; it’s moving faster than the ones before it.

Honestly, that’s what’s throwing people off right now. It’s not just the change. It’s the pace.

LinkedIn Is a Dumpster Fire & Nobody Agrees on What an Agent Is

Let’s start with the most basic issue.

Everyone is talking about “AI agents.”
Almost nobody means the same thing.

Depending on what corner of the internet you’re in, an “agent” is either:

  • A magical AI employee that replaces half your org chart, or
  • A fancy wrapper around an API call, or
  • Some vague “autonomous thing” that’s supposed to do… something

And if you’re an enterprise operator trying to keep the lights on, this is where the confusion turns into fatigue.

Because in network operations, words matter. Scope matters. Outcomes matter. And most importantly: blast radius matters.

That’s why, on the podcast, I wanted to get super plain about it: what are we even talking about?

Chris put it simply:

An agent is a goal-oriented system that uses the reasoning capability of an LLM to accomplish an outcome. Not a chatbot. Not a UI gimmick. Something that can actually move work forward.

But here’s the part people miss: it’s not one big all-knowing agent that runs the whole company.

It’s more like… thousands of smaller agents doing very specific jobs.
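
To make that less abstract, here’s a minimal sketch of what one small, narrowly scoped agent can look like. Everything in it is hypothetical: `call_llm` is a stand-in for whatever model you’d wire in, and the tools are placeholder functions, not any vendor’s API. The point is the shape: a goal, a short list of pre-approved tools, a loop with a hard step budget, and a safe hand-off when the agent can’t finish.

```python
# Minimal, hypothetical agent loop. call_llm() and both tools are stand-ins,
# not any real product API; the point is the shape, not the implementation.

MAX_STEPS = 5

def get_interface_status(device: str) -> str:
    """Read-only lookup; in reality this would hit telemetry or a device API."""
    return f"{device}: GigabitEthernet0/1 down, err-disabled"

def open_ticket(summary: str) -> str:
    """Hand off to humans instead of changing anything directly."""
    return f"Ticket created: {summary}"

TOOLS = {"get_interface_status": get_interface_status, "open_ticket": open_ticket}

def call_llm(goal: str, history: list[str]) -> dict:
    """Stand-in for the reasoning step. A real agent would send the goal plus
    the observation history to an LLM and get back the next action."""
    if not history:
        return {"tool": "get_interface_status", "arg": "core-sw-01"}
    return {"done": f"Finished '{goal}'. Last observation: {history[-1]}"}

def run_agent(goal: str) -> str:
    history: list[str] = []
    for _ in range(MAX_STEPS):                        # hard step budget, not "run forever"
        decision = call_llm(goal, history)
        if "done" in decision:                        # the model believes the goal is met
            return decision["done"]
        tool = TOOLS[decision["tool"]]                # only pre-approved tools, nothing else
        history.append(f"{decision['tool']} -> {tool(decision['arg'])}")
    return open_ticket(f"Could not finish: {goal}")   # fail safe, not fail silent

print(run_agent("Figure out why core-sw-01 lost its uplink"))
```

Scale that pattern out to hundreds of narrowly scoped agents and the individual loop stays simple. What gets hard is everything around it.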

Which brings us to the real question.

If Agents Scale, Orchestration Becomes The Whole Game

Eyvonne made a comparison that stuck with me.

If you think about the rise of microservices and containers, it wasn’t just that we got better at packaging apps. It’s that we eventually needed Kubernetes to manage the chaos we created.

Agents feel like we’re headed down the same path.

Because if you have hundreds or thousands of agents running around doing things, you do not want to manage that manually. And you definitely don’t want it to be some “hope and pray” situation.

You need orchestration. You need guardrails. You need visibility into what’s happening and why.

Otherwise you don’t have an agent strategy. You have a science experiment.
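
What do guardrails and visibility actually look like in practice? Here’s one hedged sketch, with made-up action names and policy sets: a thin layer that every agent-proposed action has to pass through, so you get an allow-list, an approval gate for risky changes, and an audit trail by default.

```python
# Hypothetical guardrail layer: every action an agent proposes flows through here.
# Action names and policy sets are invented for illustration.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("agent-orchestrator")

READ_ONLY = {"get_interface_status", "get_bgp_summary", "show_config"}
NEEDS_APPROVAL = {"push_config", "bounce_interface"}

def execute(agent_id: str, action: str, target: str, approved: bool = False) -> str:
    """Gate and log what an agent asked for; only then let it run."""
    if action in READ_ONLY:
        log.info("%s ran %s on %s", agent_id, action, target)
        return "executed"
    if action in NEEDS_APPROVAL:
        if not approved:
            log.warning("%s wants %s on %s: held for human approval", agent_id, action, target)
            return "pending_approval"
        log.info("%s ran %s on %s (approved)", agent_id, action, target)
        return "executed"
    log.error("%s attempted unknown action %s on %s: blocked", agent_id, action, target)
    return "blocked"

print(execute("bgp-triage-agent", "get_bgp_summary", "core-rtr-02"))
print(execute("bgp-triage-agent", "push_config", "core-rtr-02"))
```

None of this is clever, and that’s the point: the interesting decisions live in the agents, while the orchestration layer stays boring, auditable, and hard to bypass.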

This Isn’t About Autopilot, It’s About “Can We Make Better Decisions Faster?”

One of the biggest misconceptions I keep hearing is that AI agents equal autonomy, and autonomy equals “the machines take the wheel.”

That’s a fun headline, but it’s not what most people are actually doing.

Most enterprises aren’t sprinting toward full autonomy. They’re dipping a toe in the water. They’re starting with read-only and advisory use cases. They’re figuring out where AI fits without breaking change control, compliance, and basic reality.

Chris said something that I think is the right frame:

We overestimate these waves in the short term and underestimate them in the long term.

That’s exactly it.

The first wave is not “self-driving networks.”
The first wave is “less toil, better context, fewer dumb manual steps.”

And if you’re a network operator, that’s not a small thing.

The Happy Path Isn’t The Problem – The Long Tail Is Where Automation Goes to Die

Here’s the part that hit hardest for me.

If you’ve built automation pipelines in the real world, you know how this goes.

You start with a clean workflow:

  • Step 1
  • Step 2
  • Step 3

Everything works. You feel great.

Then reality shows up:

  • API rate limits
  • weird device behavior
  • maintenance windows
  • partial failures
  • some ancient box nobody is allowed to reboot
  • somebody changed something “just this once”

And suddenly your nice automation becomes a brittle monster held together by conditionals and prayer.

Chris nailed it: the happy path can be deterministic. It should be. But the long tail of edge cases is where reasoning actually helps.

That’s the moment where it clicked for me.

Agents aren’t here to replace deterministic automation. They’re here to reduce the technical debt we accumulate trying to code every exception into our systems.
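
Here’s a rough sketch of that division of labor, with hypothetical function names: the happy path stays deterministic, and only the exceptions get handed to something that can reason, and even then as a proposal rather than an action.

```python
# Hypothetical split between deterministic automation and an agent that only
# sees the exceptions. upgrade_device() and ask_agent_for_plan() are illustrative.

class UnexpectedState(Exception):
    """Raised when a device is not in any state the workflow was written for."""

def upgrade_device(device: str) -> str:
    """The deterministic happy path: same steps, same order, every time."""
    if device.endswith("legacy"):                    # stand-in for a real pre-check failing
        raise UnexpectedState(f"{device}: unsupported bootloader detected")
    return f"{device}: upgraded"

def ask_agent_for_plan(device: str, problem: str) -> str:
    """Placeholder for the reasoning step: an LLM-backed agent proposes a
    remediation plan, which still goes through review before execution."""
    return f"Proposed plan for {device}: {problem} -> stage manual bootloader update first"

def run(devices: list[str]) -> None:
    for device in devices:
        try:
            print(upgrade_device(device))            # happy path stays deterministic
        except UnexpectedState as exc:
            print(ask_agent_for_plan(device, str(exc)))  # the long tail goes to the agent

run(["edge-01", "edge-02-legacy"])
```

The deterministic part keeps doing what it’s good at, and the agent only earns its keep on the cases you never got around to coding.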

Brownfield Is Still The Main Character

Let’s talk about the elephant in the room: brownfield.

Everyone loves a clean demo in a brand-new lab environment. Cool.

But the enterprise world? It’s messy.

You have:

  • cloud + on-prem
  • old stuff + new stuff
  • vendor sprawl
  • policy sprawl
  • tribal knowledge
  • “we can’t touch that box because reasons”

We’ve all seen environments where half the network is modern and automated, and the other half is powered by spreadsheets, duct tape, and one engineer named Steve who has not taken vacation since 2017.

So the question becomes: can agents help here?

Eyvonne made a great point: LLMs may be able to deal with variation better than prescriptive automation ever could, because they can interpret and adapt.

That’s a big deal. If it’s true, it means brownfield doesn’t have to be a hard stop. It can be an on-ramp.

But it still has to be done safely, with constraints and guardrails.
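
As a small, hedged example of what “interpret and adapt, but with constraints” might look like: let a model turn messy free-text into a fixed schema, then validate everything it returns before anything downstream trusts it. The `normalize_with_llm` function here is a placeholder, not a real API.

```python
# Hypothetical sketch: using an LLM to interpret messy brownfield data
# (free-text interface descriptions) into a fixed schema, with validation
# as the guardrail. normalize_with_llm() is a placeholder, not a real API.
import json

ALLOWED_ROLES = {"uplink", "server", "user", "unknown"}

def normalize_with_llm(description: str) -> str:
    """Stand-in for an LLM call prompted to return JSON like
    {"role": "uplink", "peer": "core-sw-01"}. Swap in your model of choice."""
    return json.dumps({"role": "uplink", "peer": "core-sw-01"})

def interpret(description: str) -> dict:
    raw = normalize_with_llm(description)
    data = json.loads(raw)                        # reject anything that is not valid JSON
    if data.get("role") not in ALLOWED_ROLES:     # constrain the model to known values
        data["role"] = "unknown"
    return data

# Messy, inconsistent brownfield input that prescriptive parsing tends to choke on.
print(interpret("UPLINK to CORE1 (old 10G, do not touch!!)"))
```

The model gets latitude to interpret; the schema and the allow-list decide what actually gets accepted.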

What I Actually Think Happens Next

Here’s my take: the next year is going to be weird.

The technology is moving insanely fast, but the enterprise world doesn’t move at that pace. There’s a lag. There’s always a lag.

Some teams are already running agents in production. Others are still rolling out SD-WAN five years after the decision was made. Others are managing IPAM in a spreadsheet and aren’t even embarrassed about it.

That spread is real, and it’s why you’re going to see wildly different outcomes in 2026.

But the direction is clear.

We’re moving from automating tasks… to orchestrating workflows… to building systems that can reason through complexity.

And if you’re in infrastructure, this shift will matter.

Because our job is not to chase hype.
Our job is to deliver change safely, at scale, in imperfect environments.

Agents might help us do that.

But only if we treat them like what they are: tools that need orchestration, governance, and operational reality built in from day one.

Ready to go deeper?

If you want the full conversation, check out the episode here or on-demand below. We break down what an agent actually is, why the shift from task automation to reasoning systems matters, and how teams can approach this in a way that’s safe, scalable, and grounded in real enterprise ops.


William Collins

Director of Technical Evangelism ‐ Itential

William Collins is a strategic thinker and a catalyst for innovation, adept at navigating the complexities of both startups and large enterprises. With a career centered on scalable infrastructure design, he serves as Itential’s Director of Technical Evangelism. Here, he leads the charge in network automation, leveraging his deep roots in cloud architecture and network engineering. William hosts The Cloud Gambit Podcast, diving into cloud computing strategy, markets, and emerging trends with industry experts. Outside of transforming networks, you can find him enjoying time with family, playing ice hockey, and strumming guitar.
