Infrastructure leaders are entering a new era where agentic AI transforms how networks and systems are managed. No longer bound by rigid workflows, IT teams can leverage AI agents to reason, adapt, and act – accelerating productivity, securing access, and enabling strategic innovation. This isn’t about replacing people; it’s about empowering your teams to deliver faster, safer, and more impactful outcomes.
Watch this video, featuring insights from Bob Laliberte, Principal Analyst at theCUBE Research, in an AnalystANGLE discussion with Chris Wade, Co-Founder & CTO of Itential.
Key Takeaways for Infrastructure Leaders
- Evolve Beyond Automation: Shift from fixed workflows to intelligent, reasoned workflows driven by AI.
- Free Teams for Strategy: Automate routine tasks to unlock time for high-value, revenue-driving initiatives.
- Secure AI Adoption: Put guardrails in place (RBAC, MCP server) to control how agents interact with infrastructure.
- Accelerate Innovation with Micro Apps: Use AI to rapidly build lightweight, purpose-built apps for teams – delivered in hours, not weeks.
- Enable Same-Day Change: Meet business demand with faster provisioning and configuration updates.
- Balance AI & Human Oversight: Start with low-risk tasks fully automated; keep humans in the loop for complex changes.
- Future-Ready Infrastructure: Position your organization to securely connect centralized AI models with distributed infrastructure.
- Practical Next Steps: Invest in programmability, orchestration, and AI-ready infrastructure today to stay ahead of the curve.
Video Notes
00:00 Intro
00:06 From Analysis to Agency: Transforming AI Perspectives
02:24 Automation and Strategic Workflows
05:36 Securing Agentic Applications
07:48 The Role of Micro Apps & Vibe Coding
11:32 Reimagining Infrastructure and IT Operations
14:00 Real-World Applications of MCP
18:30 Guiding the Future: Human Oversight & AI in Infrastructure
Bob Laliberte>> Hello, and welcome to this AnalystANGLE titled From Automation to Agentic AI: Securing the Future of Infrastructure. I’m Bob Laliberte, principal analyst for theCUBE Research. And today, as the title suggests, we’re going to be talking about agentic AI and its impact on managing your infrastructure. As enterprises experiment with micro apps and vibe coding, one of the questions that arises is how do you orchestrate and secure at all? Well, joining me today is Chris Wade, co-founder and chief technology officer at Itential to discuss how their platform is helping enterprises navigate this shift. So, thanks for joining us today, Chris.
Chris Wade>> Thanks for having me, Bob.
Bob Laliberte>> I mean this is really interesting to me. You’ve been working and developing solutions for automation and orchestration for a couple of years now, but you’re now expanding that conversation and starting to talk more about how you’re enabling agentic application. So, I think I’d like to get started there, if that’s okay with you. And to provide context, how do you see agentic AI changing the way that enterprises are interacting with their infrastructure?
Chris Wade>> So, from an Itential perspective, we’ve really been focused on automation and orchestration of infrastructure. And at its core, this is really a productivity solution. So, for years we’ve talked about how we move infrastructure from a very human-centric operating model to a much more machine-based operating model. And when we start talking about AI and agentic AI, it plays directly into accelerating our ability to adopt these new technologies. So, if we think about infrastructure as code and some of the strategies we’ve had over the past couple of years to accelerate the velocity of managing our infrastructure, AI, and agentic AI specifically, is going to allow us to move from a fixed-workflow model to a very reasoned workflow model. And as business logic becomes, I don’t want to say free, but as business logic is generated from these models, I think we can rethink how we manage our infrastructure completely.
Bob Laliberte>> And this really has the potential to be a significant game-changer. And it’s not about reducing the number of employees; it’s about automating those workflows to free them up to work on more strategic things. Some of the research that we’ve done in the past, for example, shows that organizations leveraging these types of AI and automation will come out and say, initially, it helps with productivity, it helps them get things done faster. When we talk to organizations that have had it in place longer, what they shift to is realizing that all the time they’ve now freed up enables them to work on the strategic initiatives that are helping drive business and revenue growth. So, it’s a cool shift to see that maturity model: as people get more comfortable with the technology and are able to leverage it, it’s actually enabling them to help drive the business further.
Chris Wade>> Absolutely.
Bob Laliberte>> Yeah, absolutely, right? So, that’s a good thing. And as you mentioned, your platform is known for automation and orchestration. So, talk to us about how that’s the foundation to support secure access for agentic applications now.
Chris Wade>> So, we launched our MCP server in June, and what it really allows us to do is use our existing capabilities where we connect to infrastructure programmatically. And we’ve worked for years on providing self-serve infrastructure. Most enterprises say that 25% of their infrastructure requests can be done without human intervention. You start to think about adding your MCP server on top, and that allows us to expose these capabilities that we’ve been building for years to LLMs. A simple metaphor: if you’ve ever gotten a token for Google Maps to build an application on top, that token comes with certain credits and certain capabilities you can use. Most enterprises are looking to secure their infrastructure and provide guardrails to enable these LLMs. So, on one side you have the great innovation that’s going to come from using agentic capabilities.
And on the flip side, you have: I want to control and put guardrails in place to understand how these agents are going to interact with my infrastructure. So, we really see the platform as a way of connecting to your existing infrastructure and exposing it in such a way that you understand and allow different LLMs and different agents that have different levels of access, because we think this is going to explode into a large number of agents very, very quickly. And you make sure that you can control, through RBAC and other methods, how those agents can interact with your infrastructure. Most use cases we see today are very focused on reading infrastructure. A lot of the AIOps use cases that we’ve spoken about in the past are very assurance-focused, but as we move into more provisioning and config management and making changes to our infrastructure, our focus is going to turn very quickly to securing and controlling what’s exposed to these agents.
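The guardrail idea Chris describes here can be sketched in a few lines. This is an illustrative simplification only, not Itential’s API or the MCP SDK: the role names, tool names, and policy shape are all hypothetical, assumed for the example. The point is the pattern: every agent tool call passes through an RBAC check before it touches infrastructure, so a read-only monitoring agent simply cannot invoke a write-capable tool.

```python
# Illustrative sketch of an RBAC gate for agent tool calls.
# All names (roles, tools, devices) are hypothetical examples, not a real API.
from dataclasses import dataclass, field


@dataclass
class AgentPolicy:
    name: str
    allowed_tools: set = field(default_factory=set)  # tools this agent may invoke


class ToolGateway:
    """Routes every agent tool call through an RBAC check before execution."""

    def __init__(self):
        self._tools = {}     # tool name -> callable
        self._policies = {}  # agent name -> AgentPolicy

    def register_tool(self, name, fn):
        self._tools[name] = fn

    def register_agent(self, policy):
        self._policies[policy.name] = policy

    def call(self, agent, tool, *args):
        policy = self._policies.get(agent)
        if policy is None or tool not in policy.allowed_tools:
            raise PermissionError(f"{agent} may not call {tool}")
        return self._tools[tool](*args)


# A read-only monitoring agent vs. a provisioning agent with write access.
gw = ToolGateway()
gw.register_tool("get_device_state", lambda d: {"device": d, "status": "up"})
gw.register_tool("push_config", lambda d, c: f"applied {c} to {d}")
gw.register_agent(AgentPolicy("monitor-agent", {"get_device_state"}))
gw.register_agent(AgentPolicy("provision-agent", {"get_device_state", "push_config"}))

print(gw.call("monitor-agent", "get_device_state", "core-sw-1"))
```

In this toy version, letting a new agent loose on the network is just registering a policy; tightening its access is shrinking the `allowed_tools` set, which mirrors the read-first, write-later progression described above.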
Bob Laliberte>> And that’s always been a big part of these transitions, even from a developer’s standpoint, right? It was always about, like you said, infrastructure as code: how are they able to spin up the infrastructure they need? And it was always on ITOps to put those guardrails in place to let them run as fast as they can. And now, we’re doing the same thing for the infrastructure team, enabling them to put these agents in place to automate these workflows, but ensuring that those same guardrails are in place for them as well, to ensure, obviously, continuous operation and high-performance operation.
Chris Wade>> Yeah, and we love infrastructure as code. And as a software company, we use pipelines every day to build our software, but they’re inherently very rigid, I dare say fragile, in the sense that they are very regimented in the logic that they pursue. If you start thinking about some use cases in the network, instead of having that very static workflow logic, I can have capabilities exposed from the infrastructure and then allow these agents to reason through them as best they see fit. We feel like that balance is really going to allow both adoption and doing it in such a way that gives our IT fellows comfort and the understanding that they can control what these agents can do within their infrastructure.
Bob Laliberte>> And it’s interesting that you bring that up. You were also talking about how much easier it is today, I think, with these agents to program that infrastructure and create that reasoning layer. We’ve been hearing about the rise of micro apps and vibe coding now. And that’s very different. Remember, several years ago it was all about the operations team needing to become full-on developers in order to make infrastructure as code happen. That seems to be changing now with these micro apps and vibe coding. I’m wondering if that’s how you see this evolving as well?
Chris Wade>> So, there’s always been this balance between, as you said, training network engineers to be developers. There have been different layers of low-code concepts that have emerged, especially in the orchestration domain. But the interesting thing about vibe coding and micro apps, and I think it was even highlighted in the GPT-5 release a couple of weeks ago, is that we want to use AI for what it’s best at. And some of the analysis so far is that the larger and more complex the code base gets, the more incremental the value, but when you build smaller applications, it’s extremely successful. So, as you’ve seen with some of the vibe coding apps, I expect infrastructure teams to start building tailor-made micro apps for different teams, whether you’re in the NOC or you’re a provisioning team, or even for app owners.
And this is really a tale of what AI enables us to do that wasn’t possible before. And I think it’s really important to park some of our long-standing understanding of what was possible. I do think that these micro apps are going to be an area where we’re going to see huge adoption across infrastructure teams and app developers, in the sense that they want to see what they provision and they want to understand what’s available. We’ve seen some of these concepts in platform engineering and IDPs, but I think this is an area where AI is really going to play a big role, in the sense that people are going to be able to use AI to build these micro apps pretty aggressively at a very reasonable cost. And cost being hours and days, not weeks and months.
And then, obviously, that begs the big question: so people build these great apps, how do we wire them up to the network? Which brings it back to a lot of our discussion. If I can provide those users secure access to infrastructure and understand what controls I’m going to provide, then I really want my end users to go nuts building awesome applications that drive the business.
Bob Laliberte>> Yeah, absolutely. And I think this is one of those great examples of that quote that’s been going around for a long time, that you’re not going to lose your job to AI, you’re going to lose it to the person who really knows how to use AI. And so, it really is, I think an important moment in time right now for operations teams and so forth to recognize that they’ve got to get fluent and more educated about the AI technologies that exist and how to best leverage them in solutions, like the one we’re talking about right now, that’s going to enable them to do their jobs so much more efficiently, as we said, to enable them to then work on more strategic initiatives for the business.
Chris Wade>> Yeah. And maybe I’ll come back to this a few times, but it’s really about reimagining what’s possible. I mentioned before that, being a software company, even our internal engineering teams have to rethink some concepts that we hold dear and that we’ve learned over many, many years. And when a technology comes along that changes what’s possible, I really think it’s a valuable moment to reflect on how we’ve been doing things and think about what’s possible. I think about change windows for infrastructure, especially on the networking front. We’ve had maintenance windows in the middle of the night for years because the changes can be catastrophic, but what comfort can we bring to 24/7 changes?
If the machines are deciding the risk tolerance for changes, we can really rethink a lot of our operations. And I think a lot of these AI discussions talk about building things and, like you said, labor offsets, but really, reimagining how we can operate our infrastructure at scale differently is an opportunity we don’t have very often.
Bob Laliberte>> Yeah, absolutely. And I think that’s a great way to look at it. One of the other things I wanted to touch upon: you mentioned customers are already using the Model Context Protocol, MCP, and clearly, this shows you just how fast things change, right? It was released less than a year ago and it’s already gaining a ton of traction. People are using it. I’m wondering if you could touch upon what kinds of applications they’re building or what kinds of use cases they’re targeting?
Chris Wade>> So, I think of the customer base in segments: maybe at the beginning of their journey, maybe in the middle, and people that are more advanced. People that are getting started, the first thing they think about is really collecting data from the network and visualizing it. So, you might hook up an MCP server and do some of what we’ve called AIOps types of use cases. You might want to see what the state of the network is, you might want to see the results of your compliance reporting, and you might want to view it in a different way than the application does. So, it’s really about data collection and formatting. We’ve all seen chatbots visualize data in robust, unique ways, replacing a lot of the BI-type concepts we’ve had in the past. In the middle ground, people are starting to think about removing some of their static workflows and starting to allow the LLMs and the agents to think about what to do next.
So, simple things like upgrades: instead of having it very fixed, maybe you allow it to think through the steps it wants to take, and it might do things out of order from the way you thought, but it might do so for very good reasons. And then, our most advanced customers are leaning hardcore into building agents. You can imagine network monitoring agents, network provisioning agents, and config management agents. So, they’re really leaning hard into building agents that do very specific things in their infrastructure, for both monitoring and change.
Bob Laliberte>> Got it. And have you talked to any of these organizations that are doing this to find out what kind of outcomes that they’re getting as a result of implementing the technology?
Chris Wade>> So, it’s speed, speed, and speed, I’d say. Velocity of change, I think, directly correlates to the value of operations. A lot of our customers expect same-day change. So, if an application team needs some compute or some connectivity, they expect it the same day. That would be the first category. The second category, I would say, is people doing things that were not possible with labor alone. Think of checking every compute instance for certain parameters, or checking Helm charts, or checking every port, or checking every optic. These things just aren’t possible by hand, but as you start putting automation and orchestration in place, you can start doing things that didn’t make sense before, like we were discussing with some of the vibe coding apps. So, I think a rethink is in order, and I think the most productive customers are thinking through: if we had a clean slate for how we operated infrastructure, what would we do without regard to effort and time, which have always been the largest constraints on deciding how we operate?
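The “check every port, check every optic” category is easy to make concrete. This is a minimal illustrative sketch, assuming a made-up device inventory and a made-up optical receive-power threshold; real sweeps would pull this data programmatically from the network rather than from a hard-coded list. The point is that once the data is reachable by code, exhaustively checking every interface is a one-liner rather than a labor problem.

```python
# Illustrative sketch: an exhaustive "check every optic" sweep.
# The inventory and the -12 dBm threshold are hypothetical example values.
inventory = [
    {"device": "edge-1", "port": "eth0", "rx_power_dbm": -3.2},
    {"device": "edge-1", "port": "eth1", "rx_power_dbm": -14.8},
    {"device": "core-1", "port": "eth0", "rx_power_dbm": -2.1},
]


def failing_optics(ports, min_rx_dbm=-12.0):
    """Return every port whose optical receive power is below the threshold."""
    return [p for p in ports if p["rx_power_dbm"] < min_rx_dbm]


bad = failing_optics(inventory)
print(bad)  # the single degraded optic: edge-1/eth1
```

The same shape applies to the other examples in the paragraph above: swap the predicate for a Helm-chart lint or a compute-parameter check and the sweep stays the same.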
Bob Laliberte>> Got it. Got it. That makes sense. Obviously, I talked about that quote about using AI versus being taken over by it, but one of the things I’ve always talked about with AI is that time to comfort, making sure you’re validating that things are happening. Organizations have used closed-loop systems, and now the more popular term is the human in the loop. So, I’m wondering if you could talk a little bit about, as AI starts to take on these infrastructure changes, how important is that human oversight, that human-in-the-loop factor, for organizations?
Chris Wade>> So, this is one topic that I don’t think has changed much from automation and orchestration as we get into agentic AI, with one exception. I was at a conference maybe this summer, and we were talking about the fact that when you have self-serve systems, the number of API calls you can expect, from infrastructure as code or otherwise, is fairly reasonable, with a lot of spikiness. The interesting thing is when you hook up MCP servers and you start hooking up LLMs, they’ll drain every packet and every CPU cycle in the effort to understand the situation, make recommendations, and make decisions. So, beyond the need for a platform that can handle the increased load as you start to wire in agents, there’s your concept of human in the loop.
And I think most of our research says that about 25% of changes are standard changes, in the sense that they don’t require a human in the loop, and the other 75% do. So, historically, like I mentioned, this is a big change-management type of concept, or at least you need an engineering resource to say yes, pick the region, pick the site, pick the tolerance level, stuff like that, that the end user’s not going to have. So, I think having humans in the loop is going to be an interesting dynamic as MCP continues. But I think this is an area where we can use our existing technologies to augment these agents and provide context for the responsible party to support the agents in their decision making.
And I really think we should view this human-in-the-loop concept as a lack of information or a lack of context provided to these agents, because in a perfect world, they could make all these decisions. So, we’re going to be on a long path to provide all of this information programmatically, so LLMs can make full decisions. And as we embark on that journey, we’re going to take low-risk items and let it rip. And then, for things that require context or engineering, I think we’re going to put the human in the loop, and hopefully do that in the most meaningful way possible, and then train off of it in the future, so that it’s needed less and less as we get more comfortable in an agentic world.
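The routing logic behind that roughly 25/75 split can be sketched simply. This is a hedged illustration, not any vendor’s implementation: the change-type names and the queue shape are invented for the example. Standard, low-risk changes execute immediately; everything else lands in a queue where a human supplies the missing context before anything touches the network.

```python
# Illustrative sketch: auto-approve "standard" low-risk changes, queue the
# rest for human review. Change-type names here are hypothetical examples.
from collections import deque

# The minority of changes considered standard (no human in the loop needed).
STANDARD = {"add-vlan-description", "update-port-label", "rotate-snmp-string"}

review_queue = deque()  # changes awaiting a human decision


def submit_change(change_type, payload):
    """Route a change: execute standard ones, queue the rest for a human."""
    if change_type in STANDARD:
        return {"status": "applied", "change": change_type, "payload": payload}
    review_queue.append((change_type, payload))
    return {"status": "pending-human-review", "change": change_type}


print(submit_change("update-port-label", {"port": "eth0", "label": "uplink"}))
print(submit_change("bgp-policy-change", {"peer": "10.0.0.1"}))
```

The “train off of it” idea in the paragraph above maps to growing the `STANDARD` set over time: each change type a human has approved often enough can graduate to the auto-approved tier.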
Bob Laliberte>> Yeah, and I’ve seen that progression as well. In the research that we’ve done, it’s been indicated that organizations want the intelligence to start making decisions. But maybe for the first however many of them, they’ll either do it manually or they’ll trigger the easy button after they’ve validated it for a certain period of time. They’ll say, “Hey, let’s just move this.” For those lower-risk tasks, the mundane routine things: “Let’s let them go fully automated. Just maybe let me know that you’ve done it, so I’m aware.” But other than that, from that human element of it, there’s probably going to be a transition period as people become comfortable and validate that it’s doing as it should. And once they do, that’s going to enable them to open up to more use cases and more complex tasks, as the technology demonstrates that it’s able to handle them.
So, I know we’re running out of time. So, last question. Let’s say we fast-forward in three to five years. What role do you want your platform to play in the agentic world?
Chris Wade>> So, a couple of things. Number one, AI is developed, most likely, in a very centralized way. If we think of the folks building foundational models, the tooling around them, or enterprises that are going to build their own, typically it’s going to be a fairly centralized way of operating. To make it impact our infrastructure in the broadest way, we need to be able to connect that reasoned logic with our infrastructure, and most of our infrastructure is highly distributed. So, think about the Itential architecture: the distributed nature in which we can deploy around your infrastructure, the platform guardrails, providing each of the agents and each of the end users controlled access in a secure way.
So, really, the Itential platform provides that execution layer to connect your centralized AI logic with your infrastructure. And we think there’s going to be tremendous innovation there. Secondarily, it’s really about building exposure, via MCP and agents, so that we can help accelerate the AI strategy for infrastructure teams across our customer base. So, we really want to lean hard, put AI at our core, and really be an accelerant to what’s possible.
Bob Laliberte>> Yeah, that certainly seems like a good plan. And I know, like I said, the ability to be able to consume AI right now is certainly going to be an advantage for those organizations who are able to deploy it, get widespread use of it across their environments. As a quick follow-up to this, what advice would you give for those enterprises that want to get started, or how should they prepare their infrastructure to get started?
Chris Wade>> So, if I were talking to you a year or two ago, when AI was synonymous with chatbots, I would’ve had a much different answer. My guidance now would be that it’s real, it’s here, it’s moving fast, and I would lean in hard. But I would also understand that infrastructure is a little bit different from some other systems, whether it’s CRM or otherwise. And we can start thinking about how we prepare our infrastructure for this agentic world. It starts with programmability; it starts with moving from a human operating model to an automated operating model. And we can really lean on the automation and orchestration capabilities across the industry, and what customers already have deployed, so that as you develop your AI capabilities, you’re prepared to wire them into the network and infrastructure more broadly, and you can start taking advantage of it as quickly as possible.
Bob Laliberte>> Chris, I think that was certainly some great advice, and this has been a really fascinating discussion. I know I’ve enjoyed it. I love the fact that you came back to reimagining what you’re doing. We’re at a point in time where we can leverage this technology not just to automate the basic, manual processes we have in place today, but to really reimagine how we’re going to operate, how we’re going to interact with infrastructure, and how, with the appropriate security and guardrails in place, that’s going to enable organizations to move so much faster. So, really fascinating information. Thank you so much for joining me, Chris.
Chris Wade>> Awesome. Thanks for the time. Good seeing you as always, Bob.
Bob Laliberte>> Absolutely. And thanks everyone out there for watching this and joining us today. If you want to find out more information on Itential’s platform and how it can help accelerate your journey, please visit the Itential website.