At AutoCon 4, Itential’s Chief Architect, Peter Sprygada, shares a practical approach to using AI in network and infrastructure operations – without giving up control. He introduces a layered operating model that combines deterministic execution with AI reasoning, and previews Itential’s new FlowAI framework built to support agentic operations safely.
Whether you’re cautiously testing AI on the side or trying to figure out how it fits into real-world operations, this session focuses on what works, what doesn’t, and how to introduce AI without breaking what you already trust.
In this session, you’ll learn:
- How a layered AI operating model brings together infrastructure instrumentation, deterministic execution, and AI reasoning.
- Why AI in infrastructure must prioritize security, governance, and auditability from the start.
- The role of MCP servers and gateways in safely extending AI capabilities.
- How human-in-the-loop evolves to human-on-the-loop – and eventually autonomous operations.
- Why simpler, more focused workflows are essential for agentic systems.
- How Itential’s FlowAI framework supports AI-enabled infrastructure operations safely at scale.
See how to put AI to work in infrastructure – without handing over the keys.
The future of network operations is agentic – but it starts with instrumentation, determinism, and humans in the loop.
Peter Sprygada, Chief Architect
Video Notes
00:00 AI & Network Operations
00:45 AI Is Just Another Tool
02:00 Why AI in Infrastructure Is Different
03:40 A Layered Operating Model
04:45 Instrumentation & Deterministic Execution
05:55 Adding AI Reasoning Safely
07:20 From Human-in-the-Loop to Autonomy
08:30 Introducing the FlowAI Framework
Transcript
Peter Sprygada • 00:02
This is the fourth one, wow. It’s hard to believe this is AutoCon 4. I made this comment on LinkedIn the other day that I feel like AutoCon 0 was like last week. It’s just amazing. But looking out at this room, it’s really an impressive thing to see. But that’s not what I’m here to talk about. I am here to talk about everyone’s favorite topic, AI.
Peter Sprygada • 00:24
Yay. So, I talked at AutoCon 3 when we launched our MCP server. I talked about my personal journey with AI, and about how I’m a network engineer, right? And I was always that person who said, AI is never touching my network and never touching my infrastructure. And I’m sure there are those here today who still have that feeling. But I do believe there is a place for AI. I’m gonna talk a little bit about how we can actually leverage AI, and what an operating model with AI might look like for how we manage network and infrastructure.
Peter Sprygada • 00:58
But to start an AI conversation, let’s start here. Dinesh made a comment in his presentation that I took to heart: we’re all network engineers at the end of the day, and we all have to lean on those skills. And one of the things we recognize as network engineers is that we’ve got a lot of tools at our disposal. Right? These can be protocol constructs, these can be scripts, this can be software, whatever they are, we have tons of tools at our disposal.
Peter Sprygada • 01:29
And it’s always important for us to never forget to use the right tool for the right job. Right? AI, while it is a disruptive technology, a revolutionary technology, at the end of the day is still just a tool. And I have to check myself very often to make sure I don’t get caught up in the AI hype. Right? AI doesn’t solve everything for us. As a matter of fact, I’d assert that there’s a lot of infrastructure that will never be AI enabled, for a lot of good reasons.
Peter Sprygada • 02:00
But if we start to treat it like a tool and we start to think about it in these terms, we can start to see an operating model evolve, and we can start to see how we can leverage it in infrastructure. But first we’ve got to recognize that AI in the infrastructure is fundamentally a little bit different than, say, AI for applications or however else we might be using AI in our lives. First and foremost is the fact that it has to be secure. I don’t think anyone’s gonna debate this. If I can’t deliver AI technology against infrastructure in a secure way, I’m in trouble. I once had a customer, many, many years ago, as we were wrapping up and getting ready to do a deployment, and a senior-level director came into the room and said, I’ve got one requirement for you, one requirement only. It’s the only requirement I’m gonna put on the table.
Peter Sprygada • 02:52
He said, never put me on the front page of the news. And I took that very much to heart, and I think that’s true more so today than it’s ever been. Next, we have to be able to govern, and we have to be able to account for what AI may be doing in our infrastructure. This is true for non-AI systems too, but it’s especially true in the AI space. We really have to make sure we’re thinking through governance, through accountability, through traceability of how AI may or may not be working with our infrastructure. And then last but not least, we have to recognize that the stuff we work with every single day is not gonna have native AI interfaces. As much as I would love to believe that your favorite router vendor or your favorite switch vendor is gonna come out with a natural language chat interface instead of a CLI.
Peter Sprygada • 03:41
I don’t see that happening. So we have to recognize that going forward. So when we put all that together, we start to see this operating model evolve. And it really starts at the bottom layer, the instrumentation layer. Right? This is what we’ve been doing all along. We’ve been doing this for a long time.
Peter Sprygada • 04:00
Right? We’re just giving it a very easy, generic, well, maybe not easy, but certainly a generic name: infrastructure instrumentation. Rolls right off the tongue. It’s great. I love presenting this. But this is what we’ve been doing.
Peter Sprygada • 04:10
And whether it’s scripts or playbooks or CLI commands, whatever it is, we need to continue to see that layer evolve. The next layer that we want to talk about is the deterministic execution layer. And this is what orchestration is all about, right? It’s all about building a layer that can do things the same way every single time. It takes input in, it processes it, it puts output out, and it does it exactly how I wanted it done, in the order I wanted it done in. This is what a lot of us are doing today.
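To make that concrete, here is a minimal sketch of what a deterministic execution step could look like. The Python names below (VlanRequest, render_vlan_config) are illustrative only, not any particular orchestration product’s API.

```python
# A minimal sketch of a deterministic execution step: validate the input,
# then render output the same way every single time.
from dataclasses import dataclass


@dataclass(frozen=True)
class VlanRequest:
    device: str
    vlan_id: int
    name: str


def render_vlan_config(request: VlanRequest) -> str:
    """Validate the input, then render the config identically every time."""
    if not 1 <= request.vlan_id <= 4094:
        raise ValueError(f"invalid VLAN id: {request.vlan_id}")
    return f"vlan {request.vlan_id}\n name {request.name}\n"


# Same request in, same config out, every single time. That predictability
# is the whole point of the deterministic execution layer.
print(render_vlan_config(VlanRequest("core-sw-01", 120, "guest-wifi")))
```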
Peter Sprygada • 04:45
But now we can add the third layer to it, the AI reasoning layer, where I can take context and marry it with AI reasoning and LLMs, if you will, put those together, and now I can start to reason through what I may want to do to change my infrastructure. So ultimately our goal is to get to what I believe is the future of agentic operations, and that is, in some form or fashion, autonomous network operations. Now, we may believe that that’s not possible, but I think that it is. And we’ve got a pattern for this, because we’ve seen it. We did this when automation started. I can remember way back when I started my automation journey, right? What would we do?
Peter Sprygada • 05:31
We’d build a script or a playbook or whatever, and it would generate a config. What would I do? I would go visually check it and make sure it was right before I pushed it to the box. That’s how a lot of us started our automation journey. We didn’t call it that at the time, but that’s human in the loop. Right? We’ll see the same thing happen with AI, and that’s how we can start to leverage AI to do infrastructure operations. We then can transition to human on the loop, right?
Peter Sprygada • 05:56
Where we’re starting to let AI actually make changes to our infrastructure through this operating model, but we are just monitoring the changes, right? We’re making sure nothing goes wrong, and if it does, we’re there to pull the kill switch. And the ultimate goal, as I said, is to get the humans out of the loop. None of us want to be in this loop, right? That’s what we’re building. This is what automation and orchestration is all about. Now, if we add AI reasoning to it, we can start to see it actually do things, and we know that it’s gonna do it.
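One way to picture that progression from human-in-the-loop to human-on-the-loop to autonomous is as an approval gate whose policy changes over time. The sketch below is hypothetical; the mode names and functions are made up for illustration, not taken from any product.

```python
# Hypothetical approval gate that evolves from human-in-the-loop to
# human-on-the-loop to autonomous. Every path writes to an audit trail.
from enum import Enum


class Mode(Enum):
    HUMAN_IN_THE_LOOP = "in"    # a person approves before anything is pushed
    HUMAN_ON_THE_LOOP = "on"    # changes apply; a person watches, ready to stop them
    AUTONOMOUS = "auto"         # no human gate; rely on guardrails and audit trails


AUDIT_LOG: list[str] = []       # every decision is recorded, whatever the mode


def apply_change(config: str, mode: Mode, approved_by_human: bool = False) -> str:
    if mode is Mode.HUMAN_IN_THE_LOOP and not approved_by_human:
        AUDIT_LOG.append("change held for human review")
        return "pending-approval"
    AUDIT_LOG.append(f"pushing change in mode={mode.value}")
    # push_to_device(config)    # the deterministic execution layer would do the push
    if mode is Mode.HUMAN_ON_THE_LOOP:
        AUDIT_LOG.append("human notified; kill switch armed")
    return "applied"


print(apply_change("vlan 120\n name guest-wifi\n", Mode.HUMAN_IN_THE_LOOP))
print(apply_change("vlan 120\n name guest-wifi\n", Mode.HUMAN_ON_THE_LOOP))
```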
Peter Sprygada • 06:22
Why? Because we built it to be safe, we built it to be auditable, we built it to be traceable in this new operating model. So, as we start to think about how we introduce AI and we start to leverage these different layers, what is it that we’re actually doing? What is it that we actually build? Well, we’ve got to continue to build the instrumentation layer. That doesn’t change. Don’t care what it is.
Peter Sprygada • 06:42
And I think Dinesh was spot on with what he was saying: don’t get hung up here on what the technology is. Just focus on actually building it. We need to be able to leverage the AI systems out there and make use of them in our infrastructure in a very safe and secure way. We need to leverage things like the Model Context Protocol, right? MCP servers, so that we can attach more capabilities to our operating model to ultimately be able to deliver. We need to really start to think through what it means to build agentic systems, and to build systems that are designed for agentic use.
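As a rough idea of what attaching a capability via MCP can look like, here is a small sketch of exposing a read-only network check as an MCP tool. It assumes the MCP Python SDK’s FastMCP helper; the tool itself is hypothetical, and the exact SDK surface should be checked against the official docs.

```python
# Sketch: expose one read-only infrastructure check as an MCP tool,
# assuming the MCP Python SDK (pip install mcp).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("network-ops")


@mcp.tool()
def get_interface_status(device: str, interface: str) -> str:
    """Return the operational status of an interface (read-only, safe to expose)."""
    # In practice this would call the instrumentation layer (scripts, playbooks,
    # an orchestration API) rather than return a canned answer.
    return f"{device} {interface}: up/up"


if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```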
Peter Sprygada • 07:21
So when I build workflows, for instance, no matter how I build them, whatever my workflow tool of choice is, I want to start thinking about them as doing very discrete things. Let’s bring back that old Unix philosophy of do one thing and do it well. Right? Let’s get rid of these massive workflows that are trying to do everything under the sun, and let’s get back to simplicity. Because with simplicity comes a lot of power, and that’s actually how we gain a lot more control back.
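A rough sketch of that “do one thing and do it well” idea applied to workflows: small, single-purpose steps that an agent or a human can compose, instead of one giant workflow. The function names are illustrative, not a real workflow tool’s API.

```python
# Small, single-purpose workflow steps that compose explicitly.

def validate_request(request: dict) -> dict:
    """One job: reject malformed input before anything else runs."""
    if "device" not in request or "vlan_id" not in request:
        raise ValueError("request must include 'device' and 'vlan_id'")
    return request


def render_config(request: dict) -> str:
    """One job: turn a validated request into config text."""
    return f"vlan {request['vlan_id']}\n name {request.get('name', 'unnamed')}\n"


def push_config(device: str, config: str) -> str:
    """One job: hand the config to the deterministic execution layer."""
    return f"queued for {device}"  # placeholder for the real push


# Each step is small enough to test, audit, and expose as a discrete tool;
# composing them stays explicit and easy to reason about.
req = validate_request({"device": "edge-rtr-07", "vlan_id": 30, "name": "iot"})
print(push_config(req["device"], render_config(req)))
```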
Peter Sprygada • 07:49
So I am here, and I’m very excited that I’m going to spend just a moment and announce that at Itential, we just introduced today a brand new AI-enabled framework that is all designed around this operating model. We call it the FlowAI framework, and it was purposely built from day one to follow this operating model. We understand that organizations will use AI, but they need to be able to use AI where they use AI. That was a weird statement. Let me try that again. Organizations want to use AI, but they want to use it against the networks where it makes sense, the right tool for the right job, right? That’s how we started this whole discussion.
Peter Sprygada • 08:29
So the FlowAI framework focuses on the instrumentation layer with MCP gateway functionality, so we can plug in MCP servers, any one of the 643 billion MCP servers that are available these days. We can continue to do deterministic-layer work, leveraging MCP tools if we want, but we stay in control at this point. And then we add the reasoning layer through the introduction of the FlowAI agent builder with flow agents. And these agents now have the ability to go out and actually change infrastructure, but they do it through this layered model.
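To show how those three layers might fit together, here is a purely hypothetical sketch of the composition: a gateway in front of tools, a deterministic change underneath, and a reasoning agent on top. None of these class or function names are the actual FlowAI API; they only illustrate the layering.

```python
# Hypothetical layering sketch (not the FlowAI API): gateway -> tool -> agent.
from typing import Callable


class McpGateway:
    """Instrumentation layer stand-in: one governed entry point to MCP-style tools."""
    def __init__(self) -> None:
        self.tools: dict[str, Callable[..., str]] = {}

    def register(self, name: str, fn: Callable[..., str]) -> None:
        self.tools[name] = fn

    def call(self, name: str, **kwargs) -> str:
        print(f"audit: tool={name} args={kwargs}")  # traceability hook
        return self.tools[name](**kwargs)


def shutdown_interface(device: str, interface: str) -> str:
    """Deterministic layer stand-in: one well-defined, repeatable change."""
    return f"{device}: interface {interface} shut down"


class ReasoningAgentSketch:
    """Reasoning layer stand-in: decides whether to act, then acts through the gateway."""
    def __init__(self, gateway: McpGateway) -> None:
        self.gateway = gateway

    def remediate(self, alert: dict) -> str:
        # A real agent would reason over the alert with an LLM; this stand-in just branches.
        if alert.get("errors", 0) > 1000:
            return self.gateway.call("shutdown_interface",
                                     device=alert["device"],
                                     interface=alert["interface"])
        return "no action taken"


gateway = McpGateway()
gateway.register("shutdown_interface", shutdown_interface)
agent = ReasoningAgentSketch(gateway)
print(agent.remediate({"device": "core-sw-01", "interface": "Gi1/0/24", "errors": 5000}))
```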
Peter Sprygada • 09:13
Okay, so that’s actually what we’ve introduced. We actually have this technology working. Come see me afterwards. Let’s talk about AI in general. Let’s talk about this operating model concept. Let’s look at what we’ve put together. And by all means, the most important thing I’m gonna tell you here today is: keep automation weird.
Peter Sprygada • 09:31
Thank you for your time.