Automate Patch Server Operations with Governed AI Agents Using LCM

In this demo, Principal Solutions Engineer Joksan Flores shows how to use Itential FlowAI with Itential Lifecycle Manager to enable agent-driven patch operations across the full infrastructure patch lifecycle – from readiness assessment to patch execution, decision reasoning, and change reporting.

This is Part 2 of the Patch Readiness demo series. Watch Part 1 here.

This demonstration walks through a two-agent architecture that separates evaluation from execution, reducing operational risk while increasing automation velocity:

Agent 1: Patch Readiness & Lifecycle Modeling

The first agent runs readiness checks and models patch state in LCM:

✓ Checks patch status across inventory and runs pre-checks only where needed.

✓ Summarizes outcomes and updates/creates LCM instances with readiness + key metadata.

Agent 2: Patch Execution & Change Control

The second agent uses LCM state to execute patches with guardrails:

✓ Patches only eligible systems, then evaluates post-patch reboot requirements.

✓ Creates ServiceNow change requests + updates LCM state to reflect outcomes.

All execution remains deterministic – agents reason over approved workflow outputs only, not raw logs or uncontrolled context.
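The deterministic pattern described above can be sketched as follows. The tool names and result fields below are illustrative stand-ins, not the demo's actual FlowAI configuration:

```python
# Sketch of deterministic tool ordering: the agent never chooses which tools run;
# it only reasons over the structured outputs each approved tool returns.
# All names here (run_patch_report, run_prechecks, ...) are hypothetical.

APPROVED_TOOL_ORDER = ["run_patch_report", "run_prechecks", "sync_lcm_instances"]

def run_pipeline(tools: dict) -> list:
    """Execute approved tools in a fixed order, collecting structured outputs."""
    outputs = []
    for name in APPROVED_TOOL_ORDER:
        result = tools[name]()          # deterministic execution order
        outputs.append({"tool": name, "output": result})
    return outputs                      # only this reaches the agent's context

# Stub tools standing in for real platform workflows:
stub_tools = {
    "run_patch_report": lambda: {"servers_needing_patches": 3},
    "run_prechecks": lambda: {"prechecks_passed": True},
    "sync_lcm_instances": lambda: {"instances_synced": 3},
}
context = run_pipeline(stub_tools)
```

The agent supplies the reasoning between steps; the order and the set of tools stay fixed.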

How It Works: From Patch Preparation to Patch Completion

1. Run patch and readiness workflows

Automation workflows:

  • Assess patch requirements
  • Run readiness checks only where needed
  • Produce structured outputs for downstream agent reasoning
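A structured readiness output of the kind step 1 produces might look like the following. The field names are illustrative, not the platform's actual schema:

```python
import json

# Hypothetical structured readiness record handed to the downstream agent.
# The agent consumes serialized job variables like this, never raw logs.
readiness_output = {
    "host": "selab-mysql",      # illustrative hostname
    "needs_patching": True,
    "update_count": 10,
    "prechecks_passed": True,
    "status": "ready",
}

serialized = json.dumps(readiness_output)
restored = json.loads(serialized)
```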
2. Apply agent reasoning with guardrails

Agents analyze results, apply decision logic, and map outcomes to lifecycle models, without direct access to uncontrolled data.

3. Execute patches and manage change

Qualified systems are patched automatically.

If a reboot is required, a change request is created and tracked automatically, with no manual handoffs.
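The post-patch branch can be sketched as a small function. The function and field names here are hypothetical, with a stub standing in for the real ServiceNow workflow tool:

```python
# Sketch of the post-patch branch: create and track a change request only when
# the patch report recommends a reboot. Names and shapes are illustrative.

def handle_post_patch(report: dict, create_change_request) -> dict:
    """Map a post-patch report to a change-control outcome with no manual handoff."""
    if report.get("reboot_required"):
        cr_number = create_change_request(
            short_description=f"Reboot required for {report['host']} after kernel patch"
        )
        return {"host": report["host"], "change_request": cr_number, "action": "reboot_pending"}
    return {"host": report["host"], "change_request": None, "action": "complete"}

# Stub in place of the real ServiceNow change-request workflow:
outcome = handle_post_patch(
    {"host": "selab-mysql", "reboot_required": True},
    create_change_request=lambda short_description: "CHG0000001",
)
```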

4. Communicate results automatically

Stakeholders receive:

  • Concise Slack summaries
  • Clear patch and reboot status
  • Continuously updated lifecycle state for audit and reporting

Why This Approach Matters

Deterministic Execution with Agent Intelligence

Agents don’t improvise automation.
They execute approved tools in a fixed order, combining reliability with AI-driven decision-making.

Controlled Context, Lower Operational Risk

Only structured job variables and workflow outputs are exposed to agents, reducing hallucination risk, token sprawl, and unintended actions.
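One way to picture this controlled-context rule is an allow-list filter applied to job variables before they reach the agent. The field names are illustrative:

```python
# Sketch of controlled context: expose only whitelisted, structured job variables
# to the agent, never raw logs. The allow-list below is hypothetical.

ALLOWED_FIELDS = {"host", "needs_patching", "ready_to_patch", "update_count"}

def build_agent_context(job_variables: dict) -> dict:
    """Drop everything outside the approved schema before it reaches the agent."""
    return {k: v for k, v in job_variables.items() if k in ALLOWED_FIELDS}

raw = {
    "host": "selab-kafka",
    "needs_patching": True,
    "ready_to_patch": True,
    "update_count": 7,
    "raw_stdout": "...thousands of lines of playbook output...",  # excluded
}
context = build_agent_context(raw)
```

Keeping the raw output out of the context is what limits token sprawl and hallucination surface.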

Lifecycle-Aware Automation

Patch status isn’t just reported – it’s modeled, tracked, and updated over time, creating a persistent system of record for operations.

All of this is generated automatically from the run output.

Built-in Governance & Traceability

Every agent action, workflow run, and state change is visible in Itential Lifecycle Manager, supporting audit, compliance, and operational confidence.

  • Video Notes

    (So you can skip ahead, if you want.)

00:00 Introduction: Linux Patch Agent
    03:24 Patch Execution Agent Overview
    05:22 Lifecycle Manager Instance Creation
    06:51 Patch Agent Logic
    09:12 Ansible Patching Tool Call
    10:06 ServiceNow Change Request Creation
    12:19 Wrap Up & Agent Tracking

  • View Transcript

    Joksan Flores • 00:00

Hi everybody. Last time I recorded a video about this agent right here, the Linux patch prep agent. The whole idea behind this agent was to execute a patch report and some pre-checks on some Linux servers and then render an HTML report that would notify our users and the agent consumer of the status of these servers, right? How many updates they have, critical services running on them, etc. So I went through and looked at some of the possibilities and ideas for operating this, and I came up with the next thing. Looking at, okay, how can we break down this process so we do the patch prep and then the actual patching itself. So I have two agents to show you today.

    Joksan Flores • 00:56

And if you look, there’s a prefix here that says LCM. Let’s look at the initial agent, which is the preparation agent. I have redesigned the process a little bit, and rather than just reporting, we’re actually gonna go and execute the patching pieces as well. But what’s different now with this setup is we’ve got a patch prep agent here. I actually gotta redo this system prompt. I just realized it doesn’t have it. Let’s go ahead and fix that.

    Joksan Flores • 01:27

So let’s go. Patch prep. So it’s a patch prep agent. Here we go. What this agent will do is it has three steps total in its set of responsibilities. The first thing it’ll do, just like the previous flow, and I’ll link it here. There was a separate video done for that.

    Joksan Flores • 01:45

It’ll run the patch report and execute the pre-checks. It’s all one tool, a workflow in the Itential platform. I went into it in some detail. We’ll go through it during the execution here, but I don’t want to focus too much on the tools themselves and the workflows. I want to focus on the two agents and what they’re doing. So agent number one will run the report and execute the pre-checks, extract the data, evaluate which servers need patches, and then the status of the pre-checks, whether they fail or pass. Then what happens is we’re gonna be integrating Lifecycle Manager in Itential.

    Joksan Flores • 02:21

Lifecycle Manager in Itential is used for instance and resource modeling, and on those resources we get to create instances. So the idea here is that we’re gonna model the patch status of these servers, these VMs, these Linux devices, and then capture certain data as part of those instances. As part of step three here, we’re gonna sync into LCM and get all the instances. The agent does all of this during its own thought process, right? There’s no workflow for that. There are workflows that execute the particular tools, but the logic is all done by the agent, the reasoning.

    Joksan Flores • 03:01

We’re gonna pull all the instances available in LCM and check if the server exists. Then we’re gonna execute this decision table. If the instance is not found, we’re gonna create it. If it exists, we’re gonna update it. If it exists with no changes, then we skip it, and then we’re done. That’s the responsibility of this agent. So let’s go ahead and run it.
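The create/update/skip decision table just described can be sketched as a small function. The instance shapes here are illustrative, not the demo’s actual data:

```python
# Sketch of the decision table the agent drives: create the LCM instance if it
# doesn't exist, update it if the data changed, skip it otherwise.

def decide(existing, desired):
    """Return the action for one LCM instance: create, update, or skip."""
    if existing is None:
        return "create"
    if existing != desired:
        return "update"
    return "skip"

# Hypothetical desired state produced by the readiness agent:
desired = {"hostname": "selab-mysql", "patch_status": "needs_patching"}
```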

    Joksan Flores • 03:24

This will take a minute to run. Once we start executing this, we should see some stuff going on. Yeah, here we go. So the first tool call goes through, and that’s going to be this workflow, which we looked at last time. Same identical workflow, covered in the previous reporting video. Let’s see if I can collapse this here.

    Joksan Flores • 03:42

I can’t quite get to that part of my screen. Collapse. Here we go. There we go. We went through this last time on the previous recording, so I’d recommend watching that. Essentially, what this will do is the patch check on all the hosts that are in the inventory. The inventory exists in a repo, and it’ll be pulled in real time by the service execution.

    Joksan Flores • 04:03

That’s all part of the other explanation. We’ll get a patch report, which will be used. The patch report is not useful here, but we’ll just have it. It’s the same workflow, so we’re just running through all of it, and then we’re gonna execute the server pre-checks. Once that’s done, we’re gonna go here, and that is my agent. There’s another one, somebody’s doing something else. Let’s see if I can go back.

    Joksan Flores • 04:28

Let’s just go back here to our active jobs. Okay, so I want that active jobs view there. Okay, so let’s go here and see. We got a lot of stuff that went through. We went through the tool call, run patch report. We got a bunch of information back.

    Joksan Flores • 04:46

Okay, here we start getting into the interesting stuff. So this is the analysis of the data. Step two, extracting results, servers needing patches. The agent has identified that there are three servers that need patches, right? Just like last time, just different numbers because I did patch those during testing. Needs patching: seven updates, ten updates, one update.

    Joksan Flores • 05:06

All the pre-checks have passed, and now we’re gonna sync and do the creation in LCM. It should show all three tools in my tool history here. And everything is good. So here we go. And then we get some details and a summary report.

    Joksan Flores • 05:22

That’s perfect. So if I go to Lifecycle Manager now, I have three instances for those servers. It was empty before; I had cleared out the list before starting the recording. And if you look, the last action was a create at 3:18:39, which is right when I’m recording this. And we got properties. So these instances all have data here that we’ve saved. Now, the agent has done all of this, right?

    Joksan Flores • 05:48

The model itself models certain data. It’ll model the hostname, the patch status, the updates, the pre-checks passed or failed, and then the status. These messages come with some instructions and useful data like the OS, IP address, and so forth of the host, and it’ll do this for all of them. So we’re actually using the agent’s reasoning to capture data from the output and map it to the model in Lifecycle Manager. And the model is actually just defined as a schema. So here we go back and look at the nitty-gritty details of the model.

    Joksan Flores • 06:22

This is what the model is. It’s a JSON schema. And this is the data that we’re modeling, right? So we’re using the agent to reason through the data from the patch run and map it to this data model. Now that we have this data accumulated for all these hosts, we know there are two hosts, Kafka and MySQL, that need patching. So now we’re gonna jump into agent number two. The whole thought behind this is that we’re gonna break up this process, right?
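The schema backing such a model might look roughly like this. The demo’s full schema is not shown on screen, so the fields below are an approximation built from the properties mentioned (hostname, patch status, updates, pre-checks, status, messages):

```python
# Approximate JSON schema for the patch-status model described above.
# Field names follow the demo's narration; the real schema may differ.
PATCH_MODEL_SCHEMA = {
    "type": "object",
    "properties": {
        "hostname": {"type": "string"},
        "patch_status": {"type": "string"},
        "update_count": {"type": "integer"},
        "prechecks_passed": {"type": "boolean"},
        "status": {"type": "string"},
        "messages": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["hostname", "patch_status"],
}

required = set(PATCH_MODEL_SCHEMA["required"])
```

The agent maps workflow output onto these properties, which is what makes each LCM instance queryable later.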

    Joksan Flores • 06:51

We’re gonna do evaluations. You could run this agent on a schedule, every day at midnight or every week or something like that, to keep my Lifecycle Manager table updated with the latest status and patches, and then I can run my patch agent once a week or so, right? So now we’re gonna go to the logic of the patch agent. What this agent will do is continue and tag along on the information that the previous agent populated. It’ll go to LCM, get all the instances, and extract them all. We’re already doing some cleanup, right?

    Joksan Flores • 07:28

This is a tool that I’ve created; it’s a workflow. I’m doing some cleanup on some of the data. That way we keep the context small in the agent. Then we evaluate eligibility. The servers need to have needs-patching true and ready-to-patch true. That means the pre-checks are good and everything is good.
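The eligibility rule just described, both flags true, can be sketched as a filter. Instance data here is illustrative:

```python
# Sketch of the eligibility rule: a server qualifies for patching only when
# both Lifecycle Manager flags are true. Field names are illustrative.

def qualified_servers(instances):
    """Return hostnames eligible for patching."""
    return [
        i["hostname"]
        for i in instances
        if i.get("needs_patching") and i.get("ready_to_patch")
    ]

instances = [
    {"hostname": "selab-kafka", "needs_patching": True, "ready_to_patch": True},
    {"hostname": "selab-mysql", "needs_patching": True, "ready_to_patch": True},
    {"hostname": "selab-web", "needs_patching": False, "ready_to_patch": True},
]
eligible = qualified_servers(instances)
```

The resulting array of qualified server names is what gets passed to the patching tool in the next step.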

    Joksan Flores • 07:48

Step three, we’re gonna call the patch servers tool with an array of qualified server names. Then when we’re done, we’re gonna run the patch report and report on the results. And we’re gonna extract the reboot recommendations. Now, this is the important part, and we’ve reached the piece that I have not run yet. So we’ll see what happens. I have not tested this yet, because it’s sometimes very difficult to figure out which servers need a reboot or have reboot recommendations. So we’ll see how that goes.

    Joksan Flores • 08:15

We’re gonna create a change request if needed. And we’re gonna see that happen if there’s a reboot recommendation from the patch report. I’ve got an Ansible playbook that does some of the patch pieces, and then I have a workflow that does some of the logic and so on, right? So this is combining the power of Ansible plus putting FlowAI on top, using all those workflows as tools. We’re combining all these things. So we’re gonna create change requests if needed, then update Lifecycle Manager with the status and send the notification.

    Joksan Flores • 08:46

So I should get a Slack message here, and I’ll try to show the Slack at the end. And then we’re done. So let’s go ahead and execute this thing and see what happens. Okay, so we’re running the patch servers flow now. This is a very simple tool call; it’s just calling an Ansible playbook. So we’ll see how long that takes.

    Joksan Flores • 09:12

It’ll take a minute, so I may have to pause the video here and jump to the end. But we’ll see. Okay, so our agent finished. Let’s see what went on here. Okay, so we got a lot of tool calls, a lot of messages. The first two, we know those are prompts, the user prompt.

    Joksan Flores • 09:27

I said evaluate all LCM instances. We do all that. We extract the data. Let’s see, evaluating eligibility. So effectively, the agent has identified there are two servers that need patching: my Kafka server and my MySQL server. It called the tool to patch the qualified servers.

    Joksan Flores • 09:45

It’s got a bunch of output. This is probably where most of the context comes from, so I could do some cleanup in here on some of that data so the agent doesn’t have to process all of it. The patching completed successfully. Like we saw with agent number one, it’s gonna check the reboot recommendations.

    Joksan Flores • 10:06

Ah, awesome. So it found the post-patch report shows SE Lab MySQL up to date, pending kernel upgrade requires a reboot. So SE Lab MySQL requires a reboot due to a kernel upgrade. Creating ServiceNow change request. It says that it called it, okay.

    Joksan Flores • 10:22

Created it. It’s just doing a tool call for ServiceNow. I keep hiding these. There we go. It created the change request.

    Joksan Flores • 10:32

This is the change request. That’s fantastic. And let’s see what else it did. So it went ahead and created it. Let’s see what else it did: create the LCM instance. It did all that.

    Joksan Flores • 10:45

It did all that. So the tool calls are being split here. And then it also sent a Slack notification, which I asked it to do. And that’s awesome. So let’s see. Linux patch: server patching complete. And this is it.

    Joksan Flores • 10:58

Okay, so we got a Slack notification. Linux patch: server patching complete, two servers patched successfully. SE Lab Kafka: pre-patch status, all servers up to date. SE Lab MySQL requires a reboot. Change request 3750 created. The next step is to schedule a reboot for SE Lab MySQL to activate the kernel upgrade. So this is fantastic.

    Joksan Flores • 11:21

This is exactly what I wanted to see. And I’m very happy, because I did not get to test the change request creation before. I mean, I had a workflow that I knew works. So now we’re gonna go into the ServiceNow change table, L35760. Here we go. Reboot required for SE Lab MySQL after kernel patch.

    Joksan Flores • 11:41

Let me go. Come on, ServiceNow. Okay, let’s go here. So let’s see. Reboot required. Impact: low. Post-patch analysis indicates that SE Lab MySQL requires a system reboot to activate the new kernel version.

    Joksan Flores • 11:55

The currently running kernel is da da da da da. And let’s see if it put some notes in here. Kernel upgrade completed on SE Lab MySQL, system reboot required to load the new kernel. The change request tracks the pending reboot action. That is a fantastic outcome. So I just demonstrated this very quickly, and this is without even leveraging a lot of the capability of Lifecycle Manager.

    Joksan Flores • 12:19

Mind you, I could actually have a trigger for each one of these, track them separately in Lifecycle Manager, and also implement actions against them. I’m just using this as pure data modeling at the moment. But the one thing that I do get by just doing this, which is a very mundane use of this tool, is tracking of the status of each one of these. So we have the create and the update. One thing to notice here: this is all done by the agent, by the way.

    Joksan Flores • 12:49

The create was done by agent number one, the first one that we executed. And the update is done by agent number two. Look at how we get that diff of those objects. So we see the server went from needs-patching status to up to date, with a status of warning: upgraded kernel requires reboot. And if you look at Kafka, which doesn’t have that warning, it’s in an up-to-date, ready state: patching completed successfully, seven patches applied, no updates remaining.

    Joksan Flores • 13:16

These messages, it’s just an empty array that we have. We could have it be cumulative and tell the agent, don’t modify that, just add new messages to it. Right now we have it there for the agent to populate with whatever data at once. But this is pretty awesome. I’m pretty happy with the outcome here. There’s a lot more that can be done. I could make this a multi-stage process, right?

    Joksan Flores • 13:40

We’ve got two stages right now. We could do multiple, three, four, five, just leveraging the tools that I have within the platform here, right? I’ve not stepped out of the platform to do anything aside from having that Ansible playbook ready. But you can see what I can do once I get these things plugged together. Thanks for tuning in.