How to Ensure Compliance & Prevent Drift with Itential Lifecycle Manager

Provisioning application infrastructure shouldn’t feel like a grind of endless tickets, manual configs, and compliance checks – where one slip can bring everything down. What if AI could become your most reliable teammate, delivering speed without risk?

In this webinar, discover how Itential’s Platform + Lifecycle Manager Application + MCP Server put guardrails around AI so you can unleash its power safely across hybrid infrastructure. You’ll see how Itential transforms provisioning from a high-risk, manual slog into a fast, auditable, and SLA-ready process.

You’ll learn how to:

  • Safely integrate your LLM of choice with Itential MCP: Bring AI into infrastructure workflows without introducing risk.
  • Expose only the right capabilities to AI: Ensure it assists where valuable, but never operates outside governed boundaries.
  • Leverage enterprise-grade controls with the Itential Platform: RBAC, policy enforcement, and full audit trails provide governed, role-based access to infrastructure.
  • Maintain configuration integrity with Lifecycle Manager: Prevent drift, enforce standards, and ensure consistency across hybrid environments.
  • Achieve full observability of every change: Actions automatically logged, tracked, and compliance-ready.
  • Accelerate provisioning without sacrificing control: Reduce cycle times while meeting security, SLA, and governance requirements.
  • Demo Notes

    (So you can skip ahead, if you want.)

    00:00 Introduction and Overview
    01:16 Lifecycle Manager (LCM) Explained
    04:09 Demo Service Architecture
    06:23 MCP Server Capabilities
    09:23 Lifecycle Manager Dashboard
    16:31 Claude Desktop AI Integration
    18:27 Galactic Empire App Creation
    27:18 Instance Status Management
    29:33 Port Configuration Demo
    35:33 CLI Integration Demo
    38:48 Wrap-Up & Strategic Value

  • View Transcript

    Rich Martin • 00:00

    Hello everyone, welcome to another Itential webinar. Today’s topic: AI-driven lifecycle management for infrastructure. Now that’s a lot of words, so I need someone to help us go through it. Today I’m welcoming my friend and colleague, Joksan Flores, a principal SE at Itential. Joksan, thanks for joining us today and being the big brain, and maybe even the big nerd, behind all of the AI stuff we’ll be talking about today.

    Joksan Flores • 00:25

    I don’t know about the brain, Rich, but I’ll take the nerd one.

    Rich Martin • 00:28

    Okay, fair enough. So let’s talk a little before we get started. Yes, we are going to bolt an LLM of our choice onto the Itential Platform today using our MCP server. But let’s start with what’s really going on behind the scenes on our platform. We’re going to be leveraging an application we call Lifecycle Manager, or LCM. If you want a deeper dive into this, we’ve done several webinars in the past that you can check out on our website or our YouTube channel. But here are the nuts and bolts of LCM: if you think about how networks and infrastructure systems interoperate and work together today, it’s really complex and interconnected.

    Rich Martin • 01:16

    And if we take something like an application, or what we might call a service, it’s usually using resources, data, or configuration across multiple data domains, infrastructure domains, or network domains. So, for instance, an application could be using resources like VPCs or VNets in a cloud platform. You might have that application connected over a transit gateway to your data center. In the data center, you might have a back-end application or a database using some sort of VXLAN-to-VLAN mapping connected to a port on a server. For an application to work, there’s all this interconnected data, related but not directly related. Sometimes it gets lost because it transits multiple domains, and those different teams aren’t necessarily tracking it all in one place. This is where LCM as an application on our platform really helps teams, organizations, and companies track the lifecycle of these different services.

    Rich Martin • 02:22

    So it could be an application, or even a long-standing process like a deployment. We have customers who leverage LCM to track the stages of different router upgrades they’re doing in the network, long-running processes that take time but where you want to track all of this data together. This is where we tie together infrastructure resources, operational state detail, and even administrative information that might not be directly related to the technical configuration of the infrastructure: things like what application is using this, who the application owner is, and what date it was put into place. Or, if you’re a service provider, who the customer is, what the customer ID is, what the deployment date was, those kinds of things.

    Rich Martin • 03:15

    And so being able to track all of that together in a single application on our platform, that’s what Lifecycle Manager is all about. Now, if you’re familiar with Itential, we’re all about orchestration and automation. When we add AI into the mix, that essentially means: how do we do this even faster and better? What we find is that when you tie LCM plus AI together on our platform, it really helps avoid unnecessary lookups and checks for all sorts of things you might want to do over the lifecycle of a service or application. Everything from provisioning to day-two to day-end changes, troubleshooting, auditing: LCM plus AI makes this much, much faster and really ends up being the solution. So I’ve said a mouthful there, Joksan.

    Rich Martin • 04:01

    Help save me and show me the reality of what I just unfolded there. Talk us through what we’re going to look at today.

    Joksan Flores • 04:09

    That’s a fantastic description, Rich. IT processes are complex; they include a lot of pieces. Here we’re taking a simpler operational example: deploying an application in a cloud service. So today we’ve got a service in AWS. We’re going to provision a VPC, create a little landing zone, create an instance, and then expose that application to the internet. But there are certain things we’ve got to do along the way.

    Joksan Flores • 04:39

    So from the orchestration standpoint, we’re going to be talking to tools like Infoblox to get IPAM information for that VPC, so we can allocate a subnet that’s documented on-prem. And we’re also documenting the app in the ServiceNow CMDB. A couple of very simple examples, but at the same time we’re orchestrating at least three systems. Now, what happens is we create this lifecycle resource. We’re going to store a lot of the useful parameters from it on the platform, and then we’ll have a lot of information we can use later for operational things.

    Joksan Flores • 05:12

    So, for example, we can create a VPC and a security group and attach it to that instance, and then we’ll remember on the platform what that security group is, so we can later go make changes to expose ports on the application. We’ll also remember the IPs that get allocated, so we can do testing, health checks, and things like that on the instance. We’ll remember instance IDs and so forth. From the LCM and platform standpoint, that gives you a lot of power over the service you just created. You’re exposing a service. Now, we supercharge this whole thing when we put an LLM on top.
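
    To make that concrete, here’s a rough Python sketch of the kind of per-instance record the platform could hold onto after the create runs. Every field name and value here is invented for illustration; this is not Itential’s actual data model.

        # Hypothetical illustration only: the kind of record LCM might keep
        # for one provisioned application. All field names are invented.
        wayne_enterprises_web = {
            "application_name": "wayne-enterprises-web",
            "vpc_id": "vpc-0abc1234",               # created in AWS
            "instance_id": "i-0def5678",            # the EC2 instance
            "public_ip": "203.0.113.10",            # remembered for health checks
            "private_ip": "10.20.30.5",
            "security_group_id": "sg-0fee9876",     # remembered for later port changes
            "allocated_cidr": "10.20.30.0/24",      # reserved via Infoblox IPAM
            "cmdb_sys_id": "a1b2c3d4",              # ServiceNow CMDB record
            "initiator": "jflores",
            "state": "pending",
        }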

    Joksan Flores • 05:47

    We use the Itential MCP capabilities to expose tools and LCM resources to the LLM. That way we can provide only the tools it actually needs. We’ll have dynamic bindings, which is what we use to expose only certain workflows and certain tools to the LLM, to minimize and limit context, the whole context-management bit. But at the same time, we’re also able to trigger things. We’re going to be doing everything entirely over the LLM today, Rich. We’ll show LCM, we’ll look at what’s available, but beyond that we’re going to manage this whole thing as a service purely from the LLM.
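
    As a minimal sketch of that dynamic-binding idea (not Itential’s implementation), here is what an MCP server that exposes only an allow-listed set of workflows as tools could look like, using the official Python MCP SDK. The workflow names and the trigger_workflow stub are invented.

        # Sketch only: expose a curated subset of platform workflows as MCP
        # tools, so the LLM never sees anything off the allow-list.
        from mcp.server.fastmcp import FastMCP

        EXPOSED_WORKFLOWS = {"application_provisioning", "configure_port_access"}

        mcp = FastMCP("lcm-demo")

        def trigger_workflow(name: str, inputs: dict) -> dict:
            # Stand-in for a real platform API call.
            return {"job_id": "job-123", "workflow": name, "inputs": inputs}

        @mcp.tool()
        def run_workflow(name: str, inputs: dict) -> dict:
            """Launch a platform workflow; refuse anything off the allow-list."""
            if name not in EXPOSED_WORKFLOWS:
                raise ValueError(f"workflow {name!r} is not exposed to the LLM")
            return trigger_workflow(name, inputs)

        if __name__ == "__main__":
            mcp.run()  # serve over stdio to a client such as Claude Desktop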

    Joksan Flores • 06:23

    And that’s just to showcase the possibilities of what people can do. We’ll have limits and controls over how the LLM can interact with the platform via MCP. The LLM will have an understanding of how to validate the inputs for the service. It’ll know how to launch it, how to validate it. It’s got all the controls, and it’ll know how to go and operate it later down the road. And we’re just going to use Claude today; we’re going to keep it simple.

    Joksan Flores • 06:50

    We’re going to be using an assistant. But later down the road, you’ll see that this is something that can be done via autonomous agents, or executed by agents reacting to triggers, to events, to anything. It can be launched on a schedule. All these capabilities we offer over the top. And today, hopefully, if I do my job, Rich, we’ll actually show that.

    Rich Martin • 07:14

    I have full faith in you, Joksan. But I’m super happy that the very first point here is the prompting strategy. One of the keys to what we’re doing at Itential is making things more efficient, generally speaking. When it comes to prompting strategy in our MCP server, that first bullet point, dynamic binding for workflows to become tools that you feed to the LLM of your choice, and then limiting the LLM’s tool access to just what it needs, is a key part of building efficiency into the LLMs we want to use. We have to think in terms of context limits; that’s something we have to do as engineers when we start to bolt these things together. And you’re going to find a lot of tools in our platform, both in MCP and in the interaction between MCP and the platform’s workflows, turning those into tools, that not only keep context windows relatively clean and uncluttered for reasoning, but also put the right guardrails around certain actions and activities, especially when you start creating things across different network domains.

    Joksan Flores • 08:34

    Yeah, a hundred percent, especially when talking about complex flows. I agree.

    Rich Martin • 08:38

    Absolutely. Fantastic. So I’m looking forward to this. One of the key things here is we’re hitting three different applications, or domains: Infoblox, AWS, and ServiceNow. But as our platform is known for, it’s multi-domain, multi-vendor, multi-system, easy to integrate with whatever your ecosystem is. And with LCM, the kind of modeling you can do for these resources isn’t limited to any particular domain or system.

    Rich Martin • 09:10

    It’s really whatever you need to bind into a model, with the data you’d like to save into the model. That’s what’s available to you. So go for it. We’re ready to roll, Joksan.

    Joksan Flores • 09:23

    All right. Let’s go do this. All right, here we go. So this is what the Lifecycle Manager application looks like. In our case, we’re looking at a particular resource, which I’m calling application provisioning. Like I said, it’s basically what I described earlier, Rich: we’re going to provision a quick VPC and create all the constructs needed for the application, your private key pair, your security group, and all those things.

    Joksan Flores • 10:01

    And then we’re going to create an EC2 instance from a template. But here’s what LCM gives us the power to do. You see the create action up here; this is the workflow we tie in. Normally, without LCM, you just launch the application provisioning workflow and you’re done. With LCM, we have the capability of modeling, and I’ll show you what that looks like in a second on an old instance I have. Because I model the change after I launch this create, I now have access to all these actions that I can design.

    Joksan Flores • 10:31

    So I’ve created a bunch of actions in here. We’ll only touch a few of them. But you can start thinking: okay, now that I know all the parameters of my instance, the instance ID in AWS, and so forth, I can do a lot of things. Like configuring port access: that would be making a security group change to expose a particular port of the app. I can restart the instance, start it, stop it. I can update the instance status so my LCM reflects the status showing in AWS.

    Joksan Flores • 11:02

    I can verify my application’s health based on the URL that I have, and I can also delete the application infrastructure later. So I’ve got the capability here to do basic CRUD operations, but also day-two things on this application.
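
    For a sense of what a configure-port-access action’s workflow step might boil down to, here’s a hedged boto3 sketch that opens one inbound TCP port on the security group LCM remembered at create time. This is an illustration, not the actual Itential workflow.

        # Sketch: open an inbound TCP port on the app's security group.
        import boto3

        def expose_port(security_group_id: str, port: int, cidr: str = "0.0.0.0/0") -> None:
            """Add an inbound rule so the application port is reachable."""
            ec2 = boto3.client("ec2")
            ec2.authorize_security_group_ingress(
                GroupId=security_group_id,
                IpPermissions=[{
                    "IpProtocol": "tcp",
                    "FromPort": port,
                    "ToPort": port,
                    "IpRanges": [{"CidrIp": cidr, "Description": "exposed via workflow"}],
                }],
            )

        # e.g. expose_port("sg-0fee9876", 4000)  # the demo app listens on 4000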

    Rich Martin • 11:19

    Yeah, it’s great to point that out. Not only are these CRUD operations, these are the things you would typically need to modify, change, and update across the lifecycle, and then remove at the end of the lifecycle, of this particular service or application, or whatever you’re modeling here. In the Itential Platform, we’re mapping these LCM actions to workflows. And those workflows are themselves orchestrations of processes that take automation steps one at a time, as tasks that operate on the different infrastructure domains or systems you want to work on. So this is how you build up your LCM application, by leveraging the workflows you’re already building. We’ve categorized them into the create, that’s your day zero; the delete, that’s your day-end when you take it out of service; and all the operations in between that might need to be done as well.

    Rich Martin • 12:21

    Those become workflows that tie to actions that now will operate on a particular instance.

    Joksan Flores • 12:28

    Yep. And also, Rich, worth noting: you can create as many of these as you want. We already have the data anyway. So without further ado, let’s jump into an example.

    Joksan Flores • 12:39

    Let’s pick one of the ones we have in here. Rich, do you have any favorite characters, Looney Tunes or something?

    Rich Martin • 12:47

    You know what, I’m going to defer to the Star Wars nerd in the webinar. That’s you.

    Joksan Flores • 12:54

    All right, let’s just pick Wayne Enterprises, and how about a Star Wars theme after that. Okay, fair enough. I am actually a big Batman fan. There you go. So we’ll pick the Wayne Enterprises web application. The super important application, right?

    Joksan Flores • 13:10

    Wayne Enterprises can’t run without it. So one of the things LCM allows us to do when we create this, and we created this one prior to the demo, is that when we do the create, that day zero you were pointing out, Rich, we remember all these parameters from this app. To create it, I just pass a few parameters, and we’ll see what that looks like. But I’m now able to remember the name of the VPC that was created, the instance, the public IP that gets allocated on the fly,

    Joksan Flores • 13:40

    when the application gets created in AWS. Then some parameters about the security group, and a bunch of other stuff, like the zone CIDR that was allocated in Infoblox and so forth. So I’ve got a lot of that information, even the initiator, which was me. Moreover, now that I have all these parameters, I can run those actions we were looking at, Rich. They’re all available for this instance.

    Joksan Flores • 14:02

    So now I can update instance tags just for this application. I can stop it, start it, verify the health. And there’s a lot more: we can do instance grouping, so I can do these operations in bulk. If I needed to stop them all, and I don’t know who would do that, probably not a great idea, but if I wanted to stop all the apps in one go, I could do that as well.

    Rich Martin • 14:23

    Yeah. I could see a panic button, right? Maybe not for this one, but that could be done. Absolutely.

    Joksan Flores • 14:29

    It could be that we’re doing a DR event, right, Rich? What if we’re doing a DR event and shutting down everything in us-east-1? We can create an instance group for the East stuff, and then we just go and shut them all down and do a DR. There’s also the history of the instance. One of the things we do in LCM is provide historical data of the changes this instance goes through. So I’m going to pick the most impactful one here.
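
    A toy sketch of that instance-grouping idea: run the stop action across every instance record in a region group during a DR event. Here stop_action stands in for the platform’s stop action, and the region field is invented.

        # Sketch: bulk-stop every instance in a region group for a DR event.
        def stop_action(instance_id: str) -> None:
            print(f"stopping {instance_id}")   # stand-in for the LCM stop action

        def stop_region_group(instances: list[dict], region: str) -> list[str]:
            """Stop each instance whose record belongs to the given region."""
            stopped = []
            for inst in instances:
                if inst.get("region") == region:
                    stop_action(inst["instance_id"])
                    stopped.append(inst["instance_id"])
            return stopped

        # e.g. stop_region_group(all_apps, "us-east-1") during the DR drill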

    Joksan Flores • 14:58

    So when we expose ports, we’re actually able to remember what changed in the data model. This is obviously JSON here, but we can see the change. Before the expose-ports action was called, our inbound rules were empty, so the application wasn’t reachable. Then we opened up port 4000. And that’s what we’re going to do today; we’re going to demonstrate this one live.

    Joksan Flores • 15:25

    So how about we go and do that, Rich? What do you think?

    Rich Martin • 15:27

    Yeah, that’s great. Let’s move forward. This is really cool stuff. And while you’re getting that set up, just me thinking out loud: the fact that we’re holding on to this instance data, I’m already seeing that you’re saving me a tremendous number of lookups into other systems. That’s the most obvious benefit here.

    Rich Martin • 15:48

    And when do I need that the most? Maybe selfishly, as a network engineer, when I need to troubleshoot something. But at the same point, I saw you had an action that can help verify the health. So even that piece gets automated. If I were doing this manually and just had everything in one place, that alone would save me time. But now we’re talking about not only having things together, but workflows that can automate the stuff I would normally do. And now we’re talking about putting that in the hands of an AI to assist us.

    Joksan Flores • 16:19

    The world is changing, Rich. Who knows where we’re going to be in 10 years. But for now.

    Rich Martin • 16:26

    Okay, so now we’re in the now, and I’m in unfamiliar territory. What are we looking at, Joksan?

    Joksan Flores • 16:31

    So this is Claude Desktop. Like I said, for all intents and purposes, this is going to be our agent. But it’s not just your standard AI assistant; we’ve done a couple of things to this Claude Desktop instance. One is that we’re inside a Claude Desktop project, which lets you add a system prompt that I can tailor toward the type of things we’re going to do, like LCM resource management. I can tell it: hey, when you launch a workflow, monitor it to completion.

    Joksan Flores • 17:03

    Things like that. And then we’ve also curated the tools we have. Remember those dynamic bindings we talked about earlier? Using the tagging we provide in MCP, we’ve restricted the set of tools our MCP server exposes to the LLM. I’m a big context-management nerd, Rich, you know this; I’ve put plenty of content out there about it. But there’s no reason why, when we’re provisioning applications, we need a get-health MCP tool for the platform itself, right?

    Joksan Flores • 17:33

    We’re not restarting adapters; we’re not doing any of that. We’re doing LCM things. So we’ve restricted the tools available to the job we’re doing, to keep the LLM honest, minimize hallucinations, and keep everything clean. So far, so good.

    Rich Martin • 17:50

    So far, so good. Yeah. Your agent is your buddy, but your agent needs to be limited in what it can do, just in case.

    Joksan Flores • 17:57

    Yeah, yeah. Okay, so let’s get going with this. If I can type... I cannot type. Let’s see. Okay: create an application. We’re going to call it... what should we call it? Let’s see.

    Joksan Flores • 18:09

    Let’s do the Star Wars thing. We’re going to call it the Galactic Empire web app. How about that? We’re going to do it in the dev zone, and I think that’s it.

    Joksan Flores • 18:27

    We’re just going to do that for now and see what happens. Now, one of the things we should see is that the LLM knows what parameters are needed to trigger this application creation. So if I’m missing something, it should let me know. Oh, here we go. Okay: “I’ll help you create the app. Before I proceed, I need a couple more parameters.” So it needs to know, Rich.

    Joksan Flores • 18:47

    Look at that. It does parameter validation based on the LCM service we’re provisioning. It needs to know whether we’re going to provision in us-west-1, and it also needs to know which application template. And these are the options provided. Perfect.

    Rich Martin • 19:00

    Right.

    Joksan Flores • 19:00

    Let’s do so.

    Rich Martin • 19:02

    How did it reason that out?

    Joksan Flores • 19:04

    So one of the things we have on our platform is that we’re able to restrict the triggers we publish to the LLM, and we’re also able to provide the schema for those services right from the platform. So the LLM can pull the schema, read it, and have constraints, because otherwise it would just guess, right?
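
    Conceptually, this is plain JSON Schema validation. Here’s a minimal sketch, with invented field names and enum values that mirror the demo, of how constraining inputs to an enumerated set keeps the LLM from guessing:

        # Sketch: validate create-app parameters against an enumerated schema.
        from jsonschema import ValidationError, validate

        CREATE_APP_SCHEMA = {
            "type": "object",
            "properties": {
                "application_name": {"type": "string"},
                "zone": {"enum": ["dev", "prod"]},
                "region": {"enum": ["us-east-1", "us-west-1"]},
                "application_template": {"enum": ["demo-with-db", "demo-frontend-only"]},
            },
            "required": ["application_name", "zone", "region", "application_template"],
            "additionalProperties": False,
        }

        params = {
            "application_name": "galactic-empire-web",
            "zone": "dev",
            "region": "us-west-1",
            "application_template": "demo-with-db",
        }

        try:
            validate(params, CREATE_APP_SCHEMA)  # like a drop-down: pick from the list
        except ValidationError as err:
            print(f"ask the user to correct: {err.message}")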

    Rich Martin • 19:24

    There you go. Okay. So this is something we’re making available within our platform through MCP, to help build in some guardrails and limitations, and actually make things a lot more efficient as well. Yep. This is very similar to what I think about when we do decorations for things, where we limit what can be in a drop-down box. We enumerate those things.

    Rich Martin • 19:47

    So here are the five selections you can choose from. Maybe they’re router interfaces or router names or things like that.

    Joksan Flores • 19:54

    Exactly the same thing, Rich. When you give a user a form with free-form text, they’ll type whatever. But if you give them a drop-down, they can only pick from the drop-down. Same thing with an LLM. And in the platform, we provide all the tooling for you to do this very seamlessly, without having to do all sorts of crazy coding. All right.

    Joksan Flores • 20:18

    So I’m going to go ahead and continue. I pre-typed what we’re going to do. I’m going to use the demo-with-DB template, that way we can actually see the DB and the front end and so forth. And then we’re going to provision in us-west-1. Good choice, yeah.

    Joksan Flores • 20:33

    Of course. We don’t want to use us-east-1, because if we do, there’ll be a lot of stuff in there. Let’s keep it clean in the West, which is empty. And the other thing I’m doing, just a minor thing: I’m not providing the values verbatim. I don’t have to. The LLM should know what to do. And it did, right?

    Joksan Flores • 20:49

    It already picked all the parameters. So look at that: it launches the application, calls it Galactic Empire Web, it picked all the parameters perfectly, and the job is running. The application is being launched. It’s now provisioning your Galactic Empire app, and it’ll do the network allocation, the cloud network creation in AWS, the compute instance, and the CMDB record, Rich.

    Joksan Flores • 21:16

    So let’s actually go over here. Oh, look at that. It just refreshed.

    Rich Martin • 21:20

    Okay.

    Joksan Flores • 21:21

    We’re already look at that.

    Rich Martin • 21:23

    The Empire is on its way.

    Joksan Flores • 21:25

    The Empire is coming. Behold. So now let’s wait. One of the things I mentioned earlier, Rich, about the system prompt, is that the LLM has been instructed to monitor the job every 15 seconds or so. Obviously, this is a webinar; the viewers aren’t going to wait here for hours. We could tune this to whatever amount of time.

    Joksan Flores • 21:49

    There are certain things we’re doing here. We actually gave it the current-time tool, which is the time MCP server. LLMs are not normally aware of time. So we’re not playing tricks; these are all MCP tools that are available. We’re giving the LLM the tools it needs to accomplish the job.
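
    The monitoring behavior described here amounts to a simple poll-until-done loop. A sketch, with get_job_status standing in for a job-status tool call and the 3-minute window mirroring the SLA mentioned below:

        # Sketch: poll a job roughly every 15 seconds until it finishes.
        import time

        def get_job_status(job_id: str) -> str:
            return "complete"   # stand-in for a platform job-status call

        def wait_for_job(job_id: str, interval: float = 15.0, timeout: float = 180.0) -> str:
            """Return the final status, or raise if the SLA window expires."""
            deadline = time.monotonic() + timeout
            while time.monotonic() < deadline:
                status = get_job_status(job_id)
                if status in ("complete", "error", "canceled"):
                    return status
                time.sleep(interval)
            raise TimeoutError(f"job {job_id} still running after {timeout}s")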

    Joksan Flores • 22:07

    But we can modify these things. Okay, it seems like it finished, and now it’s summarizing. Job completed, total runtime 62 seconds. And look at that, Rich: it even has an SLA of 3 minutes. Isn’t that great?

    Joksan Flores • 22:23

    And it finished successfully. Now it’s retrieving the instances. Because it provisioned this via the platform trigger, we’re able to tell the LLM: hey, this is an LCM service, by the way. So it knows to go get the resources and describe them. Now it’s gathering information about what it provisioned. Let’s let it go through its thing here.

    Rich Martin • 22:50

    Yeah, so because it was using the integration through our LCM application, LCM captured everything on the initial create action. It ran a workflow as part of the create action, and the LCM application itself gathered these details. Now that the instance has been created within the LCM application, as we saw, it started to build that out. And all of that data, unique to this Galactic Empire web instance, is right there for the LLM to query in one place.

    Joksan Flores • 23:25

    Yeah. Right?

    Rich Martin • 23:26

    Okay.

    Joksan Flores • 23:27

    Yep. And look at that. We passed it what, the first four parameters, I want to say?

    Rich Martin • 23:32

    Yep.

    Joksan Flores • 23:32

    But look at all this stuff it’s got. Like we saw before, this is not magic; it’s just pulling it all from LCM. It’s got the VPC name, which it provisioned using the name I provided. It’s got the instance it allocated, which comes dynamically from AWS. It’s got the EC2 instance name, and the state is pending; it’s probably booting up or something.

    Joksan Flores • 23:52

    And it’s got the public IP, the private IP, the AMI, and so on. Even the allocated subnet that came from Infoblox; that actually comes in real time from Infoblox.

    Joksan Flores • 24:04

    And then the ServiceNow CMDB ID. So we’ve got records. Just think about all of this. As I always say when I talk to my customers, it takes five to seven Chrome tabs for somebody to do a job. It’s not just the CLI, it’s not just the AWS console. All the stuff we did there is saved. And now it also told me: here are the available actions you can perform on this instance.

    Joksan Flores • 24:28

    You can start it, you can stop it, all these things. And then it said the instance is currently in a pending state, which means it’s starting up. Once fully initialized, it’ll transition to a running state. Before we do that, Rich, I’d like to show you what it looks like in LCM again now that the create is complete.

    Rich Martin • 24:46

    So I think what that was trying to tell us is that not everything is fully operational yet.

    Joksan Flores • 24:50

    Yes, correct. Not yet. Not everything happens at once, so the Empire’s not here yet, Rich.

    Rich Martin • 24:57

    It’s not completely up. Sometimes it says it’s operational before it really is.

    Joksan Flores • 25:03

    So look at that. Now we go look at the history, and we’ve got all the properties in here. We’ve got the pending state, all those things, all the data.

    Joksan Flores • 25:12

    And in the history we’re able to see everything it provisioned, as a raw object as well. We could link to the workflow too, Rich, if we wanted to. But we don’t want to look at workflows today; we’re AI people. So let’s not.

    Rich Martin • 25:24

    That’s right. Today we’ve got an AI hat on. But this is great, just to be able to see on the back end what the LCM application is doing, which now allows us to use the AI not only to get the data but to reason through it. If we’re saving all of this every time you create, that’s great; we need all that data. But what about on updates? Over time, day two and beyond, all of those changes are being saved.

    Rich Martin • 25:49

    AIs are great at taking lots of data and giving us summaries. And a lot of times what we need is to quickly understand what was changed and when. That’s the first thing, right? If I’m a network engineer and somebody says, hey, this is broken, I’m going to ask: what was changed? That’s the first thing I ask.

    Rich Martin • 26:10

    And when was it changed? And then, of course, who changed it.

    Joksan Flores • 26:14

    100%.

    Rich Martin • 26:14

    And sometimes it was you.

    Joksan Flores • 26:17

    Yeah, that’s right. Might have been me at 2 a.m., right? That’s right.

    Rich Martin • 26:22

    So this keeps us honest, for sure, as human beings. But more importantly, it allows AIs to iterate through lots and lots of changes to help us with troubleshooting, or just understanding the state of where something’s at. The more data that’s available, the faster we can leverage AI to parse through it and summarize it for us. That’s what it’s excellent at doing right now.

    Joksan Flores • 26:47

    100%. All right, let’s go to the next thing. We looked at the instance, and it looked at all the stuff it created. By now it should be in a running state, but let’s see. So what do we want to do? It said we could update the template tags.

    Joksan Flores • 27:07

    Let’s update the instance status and see what happens, because it was pending. So let’s go update the instance status. Let me be nice to them.

    Rich Martin • 27:18

    You’re not supposed to be nice to them.

    Joksan Flores • 27:20

    Yeah, gotta be nice to the AI overlords for when they take over, Rich.

    Rich Martin • 27:24

    I like your strategy.

    Joksan Flores • 27:27

    Let’s go do that and see what it does. Okay, so now, Rich, look at that. We told it to update the instance status. It has it in context, but it also knows that LCM has all this data, so it can just pass that in. Let’s see what happens there. Okay.

    Joksan Flores • 27:49

    Let’s see what it did there. Okay, look at that. It even had a failure and figured it out on its own. Okay, it shouldn’t include the instance description. Perfect. But it did it. It ran the action.

    Joksan Flores • 28:05

    It’s been launched. It’s got a job ID. Now we’re waiting; it’s running. Let’s see what happens. Okay, perfect. Look at that, Rich. The instance has transitioned from pending to running.

    Joksan Flores • 28:33

    So it gave us the current diff, everything we needed to know. It says your Galactic Empire application is now fully operational and running, accessible at the public IP address, and so on. But here’s the thing, Rich. Now it asks: would you like to perform other actions, like configuring port access to allow specific ports? And you and I, as the smart network engineers we are, know we need to configure the port access, right?

    Joksan Flores • 28:58

    Because otherwise it won’t be reachable. So let’s go do that.

    Rich Martin • 29:02

    Yeah. Otherwise, what’s the point?

    Joksan Flores • 29:04

    Yeah, because I’m so smart, and because I definitely didn’t work on this demo, wink wink, I know the app exposes port 4000. So let’s go do that.

    Rich Martin • 29:12

    Okay.

    Joksan Flores • 29:14

    How did I know that, Rich? I don’t know.

    Rich Martin • 29:17

    You have the plans to the Death Star. That’s how you knew that.

    Joksan Flores • 29:19

    That’s right. That’s how I did it. Let’s expose it to the internet. We’re doing dangerous things here; we’re going to expose that port to the internet.

    Rich Martin • 29:33

    It’s only dangerous because you’ve given it the tool, but the tool is actually a workflow that has all the guardrails and checks. So technically it’s not. We’ve got to make changes; the question is how we make changes safely. Instead of giving AI direct access to all the keys to the kingdom, if you will, this is a great way to do it. We think this is the best way to do it today.

    Joksan Flores • 29:58

    Right.

    Rich Martin • 29:59

    So yeah, it gets a little dangerous, but with the guardrails in place, you don’t really have to worry.

    Joksan Flores • 30:04

    That’s a good point, Rich. One thing we should probably say: I said we’re doing dangerous things, but like you say, we’re actually doing this through a workflow that’s deterministic. My workflow right now is super simple; it just modifies the security group and exposes the port. But I could also have lots of guardrails on it and say, hey, when we expose this app, let’s make sure it matches company policy, right?

    Joksan Flores • 30:34

    Or let’s make sure it passes the compliance checks I design on the platform itself. I have lots and lots of options. Maybe I say, hey, port 4000 is an ephemeral port, I shouldn’t be exposing that to the internet; it should sit behind a load balancer. And I could do that: I could actually halt the workflow and report back, tell the LLM, hey, no, you can’t do this.
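
    That kind of deterministic guardrail can be as simple as a policy check the workflow runs before touching the security group. A sketch, with invented policy values standing in for a real company standard:

        # Sketch: the workflow, not the LLM, decides whether a port may be
        # exposed; a violation is reported back instead of being applied.
        FORBIDDEN_PORTS = {22, 3389}      # e.g. never expose SSH/RDP directly
        WELL_KNOWN_MAX = 1023

        def check_port_policy(port: int) -> None:
            """Raise if the requested exposure violates company policy."""
            if port in FORBIDDEN_PORTS:
                raise PermissionError(f"port {port} is blocked by policy")
            if port <= WELL_KNOWN_MAX:
                raise PermissionError(f"port {port} must sit behind a load balancer")

        check_port_policy(4000)   # the demo app's port passes; 22 would be refused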

    Joksan Flores • 30:52

    So there are lots and lots of things we can do, and we provide all that capability. Okay, look at that, Rich. Our app seems to be reachable; it’s now accessible from the internet. So let’s go test that out real quick.

    Rich Martin • 31:06

    Okay.

    Joksan Flores • 31:07

    Let’s see what happens. We’ve got a URL. Wow, look at that, all from the LLM. And one thing we can also do is look at Lifecycle Manager. When we go look at it... oh, we’ve got the history.

    Joksan Flores • 31:23

    Look at that. We’ve got the history of all the things we did: the provision, the update, and the configure-port-access we talked about earlier. On my same app, I’ve got all that history of everything that went on. Now, the last thing I want to do... and the LLM is giving me some recommendations. This is the most amazing thing to me: when you create all this tooling,

    Joksan Flores • 31:48

    I don’t really have to instruct the LLM on the things it could do; it also infers logic from the whole procedure. If you look at it, it says, hey, you can now access your application at the public IP. We did that. But it also asks: would you like to verify the application health programmatically? So now, Rich, maybe I want to run this verify-health on a schedule on the platform: hey, let’s check this app a couple of times a day just to make sure everything is good. So let’s go verify the health.
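
    Under the hood, a verify-health action like this could be as small as an HTTP probe against the public URL LCM remembered. A sketch using the requests library, with the URL invented for this demo:

        # Sketch: report the app healthy only if its endpoint answers 200 OK.
        import requests

        def verify_health(url: str, timeout: float = 5.0) -> str:
            """Probe the application endpoint and summarize the result."""
            try:
                resp = requests.get(url, timeout=timeout)
                return "healthy" if resp.status_code == 200 else "unhealthy"
            except requests.RequestException:
                return "unhealthy"

        print(verify_health("http://203.0.113.10:4000/"))  # e.g. run on a schedule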

    Rich Martin • 32:14

    Okay. This reminds me of my network-engineering nightmares, where everything is up, but you don’t actually check whether it’s up, and you tell everybody everything’s up. And then the end user comes back and says, yeah, it doesn’t work, and you missed something. So to me, having a workflow attached to an action where you can simply say, go ahead and verify that, please, and being able to run it whenever you or somebody else needs to verify it, that’s pretty cool.

    Joksan Flores • 32:45

    Rich, you’re giving me nightmares.

    Rich Martin • 32:47

    I know, I know. Trauma, PTSD. I get it.

    Joksan Flores • 32:50

    But yeah, to your point, maybe you have this as part of the NOC troubleshooting tooling. Say, hey, go verify the application health, because the user might be complaining, but it might be an issue on their laptop. So let’s make sure the app is up. But look at that: it launched the verify-application-health action, and it launched successfully.

    Joksan Flores • 33:09

    It completed. Where am I at? It’s completed. The status of the instance is running; we knew that, we reported it. But it also verified everything, and it says the status of the application is healthy. So our workflow, our LCM action rather, went ahead and tested the application endpoint itself at that URL,

    Joksan Flores • 33:29

    that public-access URL, which we now also have as part of the instance, on port 4000, and it’s running and healthy. So the app is all good to go. Rich, what do you think?

    Rich Martin • 33:40

    I think that’s fantastic. We’re breaking this down step by step, but clearly this could all have been combined into one bigger prompt. Oh yeah. For the sake of the demo, we’re exposing things one at a time. But ultimately, if we’re talking about using an AI agent to augment us as network engineers, we can get even more efficient and faster with strategic prompting like this. It’s also important to know that the tooling behind it is available, that you can run what’s available. It has access; it immediately told us: we’ve created this thing.

    Rich Martin • 34:17

    Now here’s what you can do with it. You stepped us through LCM, through the creation of a new service. The AI was able to leverage all of those back-end tools and workflows. You showed us the day-two stuff: let’s add something to that existing service. And along the lifecycle of this, you’re going to need to troubleshoot, health check, things like that.

    Rich Martin • 34:46

    Or to your point, it could be scheduled as a preemptive verification, versus something like, hey, it’s down, run this health check and let’s figure out what’s wrong. It could be used both ways. What about something in terms of... we saw the back-end data of changes. What do we have that can help us leverage the AI to give us a summary of changes? And I know you’re a CLI guy, and I’m a CLI guy too. We’ve been using Claude Desktop; can you show us something even fancier for the nerds out there?

    Joksan Flores • 35:20

    I don’t know about fancier, but it is right down my alley. Let’s do that. Let’s see, we’ve got iTerm right here. Okay, so I just have iTerm open. Let’s launch Claude Code on the CLI. Now, this is where I live, Rich.

    Joksan Flores • 35:33

    You know me. This is what I like. So let’s see. You wanted to see the history. We’ve been doing a lot of the desktop stuff, all the pretty things in the desktop agent, but we can do this from the CLI as well. We have the MCP server onboarded here too.

    Joksan Flores • 35:50

    So let’s go ahead and do that: show the application history of the Galactic Empire web application and provide me a summary. Let’s go. Now it’s booping.

    Rich Martin • 36:14

    It’s definitely booping.

    Joksan Flores • 36:16

    Yeah. Claude Code uses a lot of very interesting words when it’s thinking. But let’s see what happens. Look at that. Okay.

    Joksan Flores • 36:25

    It’s going, it’s thinking, it’s doing its thing. It’s also charging us a lot of tokens, but let it work. Now it’s actually figuring it out. Okay, it’s finding the app. Okay, found it. I do like this, Rich. I like my green colors and my oranges.

    Joksan Flores • 36:42

    Especially when you go for the colour.

    Rich Martin • 36:43

    It’s very comforting to me.

    Joksan Flores • 36:45

    I feel like I’m in the matrix. It’s soothing.

    Rich Martin • 36:47

    That’s right.

    Joksan Flores • 36:49

    Okay, look at that. Here’s the application history, the complete lifecycle. We provisioned it; so it shows the create, Rich, just like we did before, but in the CLI instead of Claude Desktop. We updated the status from pending to running, we opened the port, and we did a health-check verification, and here’s all the detail.

    Joksan Flores • 37:08

    And everything completed with no errors, and the application is operational and accessible. Now we could do all sorts of other things: we could stop it, we could restart it. And for everybody out there, those actions are not limited, like I said, Rich. You can create however many actions you want, expose them to the LLM, and do all these kinds of things. Like you said, we could even do it all in one prompt.

    Rich Martin • 37:32

    Yeah, I love this, because quite honestly, this is where a lot of the folks we talk to live, like yourself. This is where we live. And you’re not limited: we’re showing Claude, but our MCP server can be leveraged by anything that supports MCP, and all of that tooling is available to us from the platform via the MCP server. I think this brings us to the end of the demonstration, which was pretty amazing.

    Rich Martin • 38:01

    How to leverage LCM, and why LCM is important for our customers and prospects. You go from just generally provisioning things through automation to thinking of it as a journey, thinking in terms of your services. Even if you’re an enterprise, what you’re offering is services, maybe sometimes products, but definitely services. An application is something you’re supporting with a number of resources and services. When you’re able to track those like you can with LCM, you’re building more efficiency into how you manage them over time. And I think you’ve illustrated this really well for us, Joksan.

    Rich Martin • 38:48

    Thank you so much. The create, the update to the service, the need to add more ports: over time there could be a plethora of different changes that need to be made. We’re using a very specific example here, but open your imagination to whatever your organization provides in terms of infrastructure resources to support application services. If you’re a service provider, what are the different pieces and parts of your network that are loosely bound together to provide a specific service to a customer? When you start thinking in those terms, this is where LCM really starts to shine as a must-have tool. And think about all the data: not just technical data, but operational data, administrative data, ticket numbers, customer IDs, bandwidth orders, or properties of a particular service that were ordered by the end user.

    Rich Martin • 39:51

    Those things change over time. We have to troubleshoot all of those pieces and parts at some point. And if these things live on for years and years, imagine how much more difficult it becomes when you need to pull all those threads together to troubleshoot. That’s hard. With LCM, those are all tied together as an instance based on a model you create. It’s super flexible.

    Rich Martin • 40:17

    And when we add the ability to leverage all of these as tools with our MCP server, what you’ve got is really the killer app for our LCM application. Because what we’ve shown today is that it’s not only capable of reasoning its way through using the tools, but over time it’s capable of reasoning through things like troubleshooting, or allowing you to audit and report on a particular service over time, which may be super critical in certain industries: healthcare, finance, government, things like that. All of this is really exciting when we talk to our customers, especially around the shiny new technology everybody wants to leverage, which is AI. And you’re having lots of these conversations, Joksan. We want to provide seriously useful tools to leverage with AI without exposing unnecessary risk.

    Rich Martin • 41:22

    And I think what you’ve shown us today really helps get our customers and prospects pointed in the right direction on how to use and leverage everything together in a safe and effective way.

    Joksan Flores • 41:33

    Yep, 100%, Rich. We’ve been preaching this a little when we talk to our customers: there are two components to the AI strategy. One of them is the reasoning; that’s the LLM. The other is the deterministic part. We provide the deterministic and we expose it.

    Joksan Flores • 41:49

    Let the LLM do the reasoning. We’ll take the deterministic part, we’ll do the provisioning, and we’ll be your gateway to the infrastructure.

    Rich Martin • 41:56

    Well, thank you so much, Joksan. I really appreciate all the hard work you put into this. You get the gold-star-nerd-of-the-day award. And thanks to the audience for joining us. We look forward to even more cool technology, especially around our platform and AI, and what it can really do to unlock efficiency and even more potential for your business.

    Rich Martin • 42:21

    So again, thank you very much, Joksan, for joining us. Thank you, Rich, for having me.

Watch More Itential Demos