Most teams building AI agents end up with the same problem they had with scripts: one agent per task, sprawling fast, no shared pattern. In this demo, Solutions Engineer Joksan Flores walks through a different approach: build one FlowAgent with multiple personalities, governed by workflow.
Using a device health check as the example, Joksan shows how a single agent definition can run platform health, routing, interface, environmental, and security posture checks – each one driven by a templated system prompt that’s rendered at runtime based on operator input. Same agent. Five different jobs.
What You’ll See:
1. One Agent, Five Health Check Personas
- Trigger platform, routing, interface, environmental, or security posture checks from a single FlowAgent
- System prompt renders dynamically based on operator input, no separate agent per check type
- Cuts agent sprawl while keeping each execution focused and context-aware
2. Workflow-Triggered Agents with Operator Guardrails
- Manual entry point in the operator portal launches the workflow, which renders the prompt and invokes the agent
- Form-based inputs (device, severity, remediation mode) constrain the agent’s scope by design
- Same pattern works for API calls, scheduled runs, or triggers from external management tools
3. ReAct Agents with PyATS, MCP, and Templatized Reports
- PyATS tools handle show commands and dynamic test execution against live device state
- Native FlowAI tools and MCP integrations extend the agent’s reach to email, reporting, and downstream systems
- Output renders into branded, templatized health check reports – same format, different content per persona
Why This Approach Matters
Agent Sprawl Is the New Script Sprawl
The first wave of network automation produced thousands of one-off scripts that nobody could maintain. The same thing is now happening with AI agents – one for every variation of a task, each with its own prompt, its own tools, its own edge cases. Templated agent execution short-circuits that trajectory. One agent definition, parameterized at runtime, covers the use cases that would otherwise require five separate agents to maintain.
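The runtime parameterization described above amounts to template rendering. Here is a minimal sketch in Python with Jinja (the templating engine the demo's workflow uses); the template text, variable names, and device hostname are illustrative stand-ins, not the actual prompt from the demo:

```python
from jinja2 import Template  # third-party: pip install jinja2

# Illustrative system-prompt template: one definition, several personas.
PROMPT_TEMPLATE = Template("""\
You are an agent performing governed infrastructure health checks.
Health check type: {{ health_check_type }}
Target device: {{ device_hostname }}
Severity threshold: {{ severity_threshold }}
Remediation mode: {{ remediation_mode }}
{% if health_check_type == "platform health" %}
Focus on CPU load, memory utilization, and top processes.
{% elif health_check_type == "routing health" %}
Focus on BGP neighbors and routing table health.
{% elif health_check_type == "interface health" %}
Focus on interface errors, drops, and utilization.
{% endif %}""")

def render_prompt(health_check_type: str, device_hostname: str,
                  severity_threshold: str = "warning",
                  remediation_mode: str = "report only") -> str:
    """Render one system prompt per operator request: same agent, different persona."""
    return PROMPT_TEMPLATE.render(
        health_check_type=health_check_type,
        device_hostname=device_hostname,
        severity_threshold=severity_threshold,
        remediation_mode=remediation_mode,
    )

print(render_prompt("platform health", "ios-csr-aws-1"))
```

Changing a single form field (`health_check_type`) swaps the persona without touching the agent definition.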
Reasoning Belongs With the Agent, Determinism with the Workflow
The agent reasons through health data and decides what to do. The workflow handles the deterministic parts – collecting input, rendering the prompt, calling the agent, sending the report. That separation is what makes agentic operations work at scale.
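A rough sketch of that split, with hypothetical function names (`call_agent` is a stub standing in for the FlowAgent invocation, `send_report` for the templatized report step; neither is an Itential API):

```python
def render_prompt(inputs: dict) -> str:
    """Deterministic: turn validated form inputs into a system prompt."""
    return (
        "You are an agent performing governed infrastructure health checks.\n"
        f"Health check type: {inputs['health_check_type']}\n"
        f"Target device: {inputs['device']}\n"
        f"Remediation mode: {inputs['remediation_mode']}\n"
    )

def call_agent(system_prompt: str, user_prompt: str) -> dict:
    """Reasoning: in the platform this would invoke the ReAct agent (stubbed here)."""
    return {"status": "healthy", "summary": "all checks passed"}

def send_report(result: dict) -> str:
    """Deterministic: format and send the templatized report."""
    return f"Device health: {result['status']} ({result['summary']})"

def health_check_workflow(inputs: dict) -> str:
    """The workflow owns sequencing; only call_agent() reasons."""
    prompt = render_prompt(inputs)                              # deterministic
    result = call_agent(prompt, "Execute the required tests.")  # reasoning
    return send_report(result)                                  # deterministic

print(health_check_workflow({
    "health_check_type": "interface health",
    "device": "ios-csr-aws-1",
    "remediation_mode": "report only",
}))
```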
Video Notes
(So you can skip ahead, if you want.)
00:29 Agent Definition & Tools
02:13 Operator Portal as the Agent Entry Point
02:46 Workflow Rendering with Jinja Templating
05:02 Triggering a Platform Health Check from the Form
06:32 Mission Log: Watching the Agent Reason and Act
08:00 Email Reporting & Health Status Output
09:35 Triggering a Second Run: Interface Health Check
11:42 Solving the “Duplicate Agents” Question
14:19 Mixing Deterministic & Reasoning at Scale
Joksan Flores • 00:02
Hi everybody. Today I wanted to demonstrate an interesting pattern for executing agents in the Itential platform using FlowAI. I'm looking at a device health check agent definition at the moment. One of the things I wanted to call attention to in the agent's construction is the prompt piece, but let's walk through the entire thing first. We have our identity, prompt, and provider set; those are mostly background.
Joksan Flores • 00:29
We're going to focus on the prompt in a second. For tools, I have given it some PyATS tools: one for running show commands and another for running dynamic tests. We're doing device health checks, so that's appropriate tooling, and we have some of the devices onboarded. We could also use native tooling from the Itential platform; in this case, we're using MCP for this particular use case. And I've also given it access to device health check reports, a project that gives us the capability of using templatized reports inside the Itential platform.
Joksan Flores • 01:04
But I want to focus more on the prompt itself. Typically, when you define an agent, your prompt needs special attention to detail, especially in our case. We're very focused on ReAct agents: agents that reason and act. So typically you have a reasoning loop with thought-action, thought-action patterns, and most of the time the action is a tool call. In our case, our prompt is fairly empty here.
Joksan Flores • 01:31
So my system prompt says "use the following prompt," and "prompt" is encased in curly braces. That's a convention we use in Itential for variables, and I'll explain the reason behind it in a second. The user prompt just says to execute the tests required and send a test email. I already have a tool defined for sending test emails, so it will just use that tool. Now, the other thing I want to call attention to, so let me go and find my device health check agent,
Joksan Flores • 02:13
is the way that we are defining this agent. So this will be the entry point, and in my case, the entry point for my agent happens to be a workflow. This is one of those things I wanted to call attention to. Most of the demos we have recorded, including the ones I've recorded personally, trigger agents from FlowAI. That is a possibility, but that's very much a builder-type window. In practice, you will most likely have an entry point that looks like this.
Joksan Flores • 02:46
It'll be an operator portal where you have some sort of entry point: an API entry point, a manual entry point, a schedule, and so forth. Something like this. In our case, we're going to use a manual entry point with a form. That will in turn launch a workflow, and that workflow will do a couple of things. One of them is rendering a prompt: I have actually defined the agent context to be a prompt.
Joksan Flores • 03:17
What that does is take in four variables from the form. If you read through some of these, they'll lead you to the conclusion of what the use case is: we're doing health checks on devices. But one of the things we're doing is putting parameters around the health check type, the device hostname, the severity threshold, and the remediation mode. And if you look down here, what renders is an actual full-on system prompt for the agent. It starts by saying you're an agent performing governed infrastructure health checks, then the task context: health check type, routing health.
Joksan Flores • 03:59
The device is such-and-such. So I can modify this to say environmental. Let's see, what is the variable? Environmental health. If I change this to environmental, it'll show the health check type here: environmental health.
Joksan Flores • 04:18
Then we have conditionals in this Jinja template that say: if the health check type is platform health, these are the things we care about, like CPU and memory utilization. We also have conditionals that say if it's a routing health check, we care about BGP, routing table health, and so forth. Now, the device we're using doesn't have BGP, but you get the idea. There are security posture checks too. So, just by rendering the prompt ahead of time, I have effectively given my agent a bunch of personalities rather than just one.
Joksan Flores • 05:02
This is an interesting execution pattern we can afford just because I can trigger agents via workflows and I also have the capability of rendering templates beforehand. These prompts could actually come from anywhere. One other pattern we could start thinking about: because the Itential platform can integrate with many, many things via workflow, I could launch these agents and pull the prompt from, say, a GitLab repo or an S3 bucket, and then just call the agent with a templated task prompt. If you look here, the context for that agent is the output of the render-prompt task. So let's go ahead and trigger this agent.
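That "pull the prompt from anywhere" idea could be sketched as a small fetch step in the workflow. A hedged sketch: the S3 branch assumes boto3 and standard `get_object` semantics, and everything here (function names, the inline fallback) is illustrative, not an Itential feature:

```python
def fetch_prompt_template(source: str, location: str) -> str:
    """Fetch raw template text before the workflow renders it."""
    if source == "s3":
        # Assumes boto3 is installed and location is "bucket/key".
        import boto3
        bucket, key = location.split("/", 1)
        obj = boto3.client("s3").get_object(Bucket=bucket, Key=key)
        return obj["Body"].read().decode("utf-8")
    if source == "inline":
        # Fallback so the sketch runs without cloud credentials.
        return location
    raise ValueError(f"unknown prompt source: {source}")

template_text = fetch_prompt_template(
    "inline", "Health check type: {{ health_check_type }}"
)
print(template_text)
```

A GitLab variant would follow the same shape, with an HTTP call to the repository's raw-file endpoint instead of `get_object`.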
Joksan Flores • 05:41
I'm going to go to FlowAI, back to the missions window, and move it over this way because I want to watch it in real time; I like the split-view thing. There we go. We're going to trigger this, and first let's do platform health. I'll pick a device from my dropdown; these are all dropdowns. I'm going to keep this as warning, and we're going to do report only. From an entry-point perspective, I have the ability to restrict, right on the platform, how this agent gets executed. Nothing is undefined; there's no room for error. And this is one of the things we discuss with our customers quite a bit: we're not building ChatGPT, and as such, we're not doing NLP functions.
Joksan Flores • 06:32
We're not putting up a chat window. Now, could you put a text area here and give the agent instructions in text? Absolutely, you could. You can pass all this data to the agent as context via workflow if you wanted to, or directly via the entry point. So now that agent is triggering there, and that's all going here. We're going to refresh that, and I'm going to modify my window size.
Joksan Flores • 06:58
Okay, now that we've triggered our agent, let's go back into FlowAI and look at the actual mission log to see what's going on down here. My agent is going through its checks; it has reasoned that it needs to validate and execute some tests against the device. "I'll execute the platform health check systematically. Let me start by running both diagnostic commands simultaneously." So it's doing its thing. It's going to execute some tool calling; it might take a minute here.
Joksan Flores • 07:32
So I'm going to let it do its thing, and then we'll come back and check the results. But I actually want to do this a couple of times, because I want to launch two different executions of the same agent with that templatized prompt so that we can compare. Actually, we can go back and look at the agent that got called here. Let's look at this prompt: "You are an agent performing governed infrastructure health checks," and in the task context you can see the health check type is platform health. If I expand this so it doesn't get truncated: it's a platform health check, that's what it's doing, and the target device is ios-csr-aws-1. So we have this switching context in here; we're going to trigger the next one and compare them. Let's let it do its thing. Okay, the agent has finished. Let's see: all steps completed successfully, commands parsed and ran successfully, critical alarms and thresholds were checked, and everything seems to be healthy, with an overall status of healthy down here. And because we had the agent send an email report, let me hide this conclusion and go look at that email. Some of the other devices I was using for testing before this weren't that healthy, but this one seems to have a good status report that we can send. So now we have a status report down here, and I'm going to put it to the side because we want to compare them one to the other.
Joksan Flores • 09:11
So we have metrics at a glance: CPU at low utilization, memory at 11%, 2 gigs of memory free. And we got a detailed report of the device here, with top CPU-consuming processes and so on. Everything is very low, very healthy. Obviously this is a lab device, so not a whole lot of drama going on here. But the idea is that we have the platform health
Joksan Flores • 09:35
check captured. So let's go back in here (I have already refreshed), back into missions, and I'm going to expand my window again. Now I want to check, let's say, interface health; that's a little more dynamic, more dramatic, since this device has a lot of subinterfaces that we use for provisioning. So we're going to do that, and let's do report only for now; we're not suggesting actions or anything like that. Now we're going to refresh here and hide this a little bit. While that refreshes, let's go back and highlight: this workflow already finished, because it's the CSR agent run we were executing before. And while that refreshes, we're going to go back into this one. This is the second execution that I did, and now we can see "You are an agent performing governed infrastructure health checks," same personality at the top.
Joksan Flores • 10:32
But now we have an interface health check being triggered on the same device. So we're using the same entry point in our Operations Manager to accomplish two goals, and in our case we actually have even more flexibility: five things we could do. Platform, routing, interface, environmental, security posture, all those.
Joksan Flores • 10:54
So you start thinking about a team like a NOC: you can expand the range of an agent like this to help people in a position like that do their jobs. Whenever they see alarms coming in from a management tool, they could come here and launch these agents. Or better yet, you could launch these health checks yourself, schedule some of them, or have them launched via API from a management tool. That's what I think is cool about this use case, and it's one of those that lets you expand the range of something like FlowAI and what it can do. It's not just "I can trigger agents that only do one thing."
Joksan Flores • 11:42
And this is a question that comes up often with my customers: what if I have agents with duplicate roles and duplicate responsibilities? Can I templatize that? I'm going to put this in separate views; we can focus on FlowAI now that we're done triggering things, and we'll just wait for this one to finish. Okay, our second agent execution has finished.
Joksan Flores • 12:08
Here we go. The agent actually tried a few things, and because we're doing something fairly dynamic here, it ran a couple of commands that weren't valid. That's pretty interesting. We could govern this via the prompt and tell it exactly what commands to run, but in this case I just let it rip. So those are good things, right?
Joksan Flores • 12:29
Lessons learned, because it executed a couple of invalid tools, and that's a waste of execution time and tokens. But it did pull up a couple of the commands we wanted: we were very focused on an interface health check, and it ran "show interfaces" and "show interface summary." It found 62 interfaces, which is not unheard of on a big chassis; in this case we just have a bunch of virtual interfaces, with 45 active interfaces and so forth. We got some data, and more importantly, we got a report. So I can expand my email report and compare it side by side with the other one. Let's line these up: device health check report, same device. This was a platform health check; this was an interface health check.
Joksan Flores • 13:17
Same agent, same trigger, same entry point, totally different use cases. And you get totally different reports. The formatting is very similar because I've instructed the agent to use the same format from that report project. So I have the ability to templatize this. I could brand them if I wanted to; I could put logos, I could put whatever.
Joksan Flores • 13:37
I could make it more concise. But you get the idea of how you get multiple use cases out of this one single agent just by using a couple of features of the Itential platform: the power of workflow, being able to trigger agents, being able to make some decisions beforehand, and a lot more. So you can start thinking about this pattern and what you can do with it. I could execute a bunch of deterministic activities and then trigger an agent to validate and summarize. If I wanted to do a health check on a device after some provisioning, I could do the provisioning entirely deterministically and then trigger an agent after the fact.
Joksan Flores • 14:19
So this opens up a lot of possibilities. Very, very useful, in my opinion. It's kind of an advanced use case, but I think it's extremely useful, especially when you start mixing and matching deterministic technology with reasoning and the power of FlowAI agents. I think that covers everything I wanted to cover today. Thanks for tuning in.