From Input to Execution: Automate Smarter with MCP + AI

Learn how to design, build, and execute powerful network workflows using Itential’s Model Context Protocol (MCP) and AI-driven automation. In this video, we walk through real examples of integrating AI into network operations, accelerating execution, and orchestrating complex workflows across your infrastructure. Whether you’re an automation engineer, NetOps lead, or just exploring the next evolution in network orchestration, this is your guide to going from input to execution with Itential.


🔧 Topics Covered:

  • What MCP is and how it works.
  • Building automation workflows in Itential.
  • Integrating AI into your network operations.
  • Live demo of execution from start to finish.
  • Best practices for workflow orchestration.
  • Demo Notes

    (So you can skip ahead, if you want.)

    00:00 Intro
    00:36 Itential Platform APIs
    02:12 Querying LLM for Exposed Platform Capabilities
    03:29 Asking LLM to Find Automations & Troubleshoot Router Ports
    05:27 LLM Uses Command Templates to Find Interfaces on Device
    07:26 Executing Commands & Gathering Context
    11:23 Recap

  • View Transcript

    Joksan Flores • 00:05

    Hi, everybody. My name is Joksan Flores. I’m a senior solutions engineer here at Itential. And today, I’ve got a really cool thing that I’ve been playing around with that I wanted to show. So I’ve been playing and doing some training around AI tooling and agents and all those kinds of things. And the latest buzz is MCP. So I have been building some code to interface MCP with the Itential Platform.

    Joksan Flores • 00:30

    Itential Platform has a lot of API endpoints that are very useful for exposing all the platform functionality, from discrete functions like running commands on devices all the way to executing entire automation and orchestration jobs. Today we’re going to show a little bit of that. Our mission today is to have it trigger a job from our LLM here. I have a basic interface here: I’m using Cline in Visual Studio Code, and I have plugged in my Anthropic Claude Sonnet model. I also have some MCP tools that I have pre-configured that Claude has access to, which expose different platform functionality. I also have my dev environment here.

    Joksan Flores • 01:20

    I have an API endpoint that I have built in the Itential Platform that does some basic router interface troubleshooting, or router port troubleshooting. One of the things the Itential Platform lets me do is build dedicated endpoints for every single job that I have, and also specify the POST body schema as a strict schema for launching the job. What this is going to allow me to do is let the LLM infer what the actual payload is to launch this job and launch it successfully. It can actually do a lot of very cool things with it. Let’s go ahead and jump into it. Let’s go ahead and ask it what workflows are available in the automation platform.
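To make the "strict schema" idea concrete, here is a minimal sketch of what a strict POST body schema for a trigger endpoint like this might look like, paired with a toy validator. The field names (`deviceName`, `interface`, `autoRemediate`) and the schema shape are assumptions for illustration, not the platform's actual contract:

```python
# Hypothetical sketch of a strict JSON Schema for the router port
# troubleshooting trigger endpoint. Field names are assumptions.
TRIGGER_SCHEMA = {
    "type": "object",
    "properties": {
        "deviceName": {"type": "string"},
        "interface": {"type": "string"},
        "autoRemediate": {"type": "boolean"},
    },
    "required": ["deviceName", "interface", "autoRemediate"],
    "additionalProperties": False,
}

# Map JSON Schema type names onto the Python types we check against.
TYPE_MAP = {"string": str, "boolean": bool, "object": dict}

def validate(payload: dict, schema: dict) -> list[str]:
    """Return a list of violations; an empty list means the payload is valid."""
    errors = []
    for field in schema["required"]:
        if field not in payload:
            errors.append(f"missing required field: {field}")
    for field, value in payload.items():
        if field not in schema["properties"]:
            if not schema.get("additionalProperties", True):
                errors.append(f"unexpected field: {field}")
            continue
        expected = TYPE_MAP[schema["properties"][field]["type"]]
        if not isinstance(value, expected):
            errors.append(f"{field}: expected {expected.__name__}")
    return errors
```

Because `additionalProperties` is false and every field is required, the LLM cannot get away with guessing: a payload either matches the published contract exactly or the endpoint rejects it.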

    Joksan Flores • 02:12

    Let’s hit that there. Let’s see. One of the things you’ll notice is that as soon as I ask it a question, it looks at all the tools that are available in the platform tool server, which is that MCP server, and it finds that there’s a bunch of functionality here. It’s got get trigger endpoints, which shows the entry points for the workflows of the automations. It also sees that it has the ability to get the list of devices, or the inventory that’s managed by the platform. It can get command templates, which are used for show commands (these will be useful for all of this), and it can also get an adapter’s health. I have to pre-approve every single one of these, so let’s go ahead and go through that.
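The tool inventory described above can be sketched as a registration pattern. This is not the real MCP SDK or Itential's actual tool server; it is a plain-Python stand-in whose tool names and return values are assumptions, just to show how an MCP-style server advertises callable tools to the model:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str       # what the LLM reads when deciding which tool to call
    handler: Callable[..., dict]

TOOLS: dict[str, Tool] = {}

def register(name: str, description: str):
    """Decorator that registers a handler as an LLM-callable tool."""
    def wrap(fn):
        TOOLS[name] = Tool(name, description, fn)
        return fn
    return wrap

# Tool names mirror those mentioned in the demo; bodies return placeholder data.
@register("get_trigger_endpoints", "List API endpoint triggers for workflows")
def get_trigger_endpoints() -> dict:
    return {"routes": ["router-port-troubleshooting"]}

@register("get_devices", "List devices managed by the platform")
def get_devices() -> dict:
    return {"devices": ["rtr1", "rtr2"]}
```

The descriptions are what the model reasons over when it decides, for example, that fulfilling a device-name parameter calls for `get_devices`.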

    Joksan Flores • 02:51

    The cool thing is that I can actually just give it the entire output, and it will go ahead and figure out a lot of useful stuff from here. From the GetTriggerEndpoints call alone, it already knows it found a bunch of useful stuff. It’s got manual triggers, it’s got API endpoint triggers, it’s got a bunch of those. It already found a lot of those and has them in its context memory. Now, let’s go ahead and start asking interesting questions to see where we can get. Of those workflows, is there one that we can use to troubleshoot router ports? There you go.

    Joksan Flores • 03:42

    Let’s see what happens. The one cool thing about Cline is you can actually go through the entire reasoning here. Once I asked it the question, it identified two out of all of them. I think it found 25 or so. It has a router port troubleshooting workflow here, and it also has the XR port troubleshooting. And look, the coolest thing about all of this is based on what I have done with the API. Actually, this one I didn’t do, but it already figured it out: the LLM can actually reason through what parameters are required to go ahead and trigger this job.

    Joksan Flores • 04:16

    We’re going to focus on this one right here. It says the workflow is designed to troubleshoot issues with specific router ports. It requires the following parameters: device name, device interface, and auto-remediate, whether we’re going to do that or not. You can execute it by using the API endpoint trigger with the route name router port troubleshooting. Awesome. Let’s go ahead and do that. Let’s do…
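Putting those pieces together, the request the LLM assembles might look roughly like this sketch. The base URL, the route path, and the payload field names are illustrative assumptions, not the platform's documented API:

```python
import json
from urllib import request

BASE_URL = "https://platform.example.com"  # assumption: your Itential Platform URL

def build_trigger_payload(device: str, interface: str, auto_remediate: bool) -> dict:
    """Assemble the POST body the LLM inferred from the endpoint's strict schema."""
    return {
        "deviceName": device,
        "interface": interface,
        "autoRemediate": auto_remediate,
    }

def trigger_job(route: str, payload: dict) -> request.Request:
    """Build (but don't send) the POST against the trigger route."""
    return request.Request(
        f"{BASE_URL}/triggers/endpoint/{route}",  # illustrative path
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

In the demo the LLM performs the equivalent of `build_trigger_payload` on its own, purely from the schema the endpoint publishes.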

    Joksan Flores • 04:39

    Let’s launch the router port troubleshooting workflow, but let me pick all the parameters. I need to tell it that, because a lot of times the LLMs will make some assumptions. You can actually clean this up with some prompt engineering if you’re going to build an agent. But in this case, since we’re developing this, we have a clean LLM agent with no context memory, and it doesn’t have any prompt engineering. We’re going to go ahead and do it all from scratch. In this case, I had to tell it: hey, let’s go and launch the router port troubleshooting, but let me pick all the parameters.

    Joksan Flores • 05:19

    Now, based on that, it says: okay, I need to find out what devices to use. For the device name parameter, how can I fulfill it? I have a tool that says get devices, so let’s go ahead and launch that. Yeah, cool. Let’s go ahead and do that. The most fascinating thing about this whole thing is the fact that it can just reason through all of that. It can say: hey, I need these parameters; which ones can I use, and how can I figure that out? How can I fulfill that? And this is the coolest part: it actually lets you make selections in here. So in this case, it says: okay, select the device, and it lets me pick. Now, I don’t want to do this interface, because that’s the management port, so I actually want to do another interface. It says: okay, pick an interface, or let me pick another one. So let’s see. I’m going to be specific, and I know for a fact that I can do this. Let’s use this device, which is the one I want to focus on, but let’s see if we can find what interfaces are available on it. Let’s see what it does now.

    Joksan Flores • 06:39

    That’s it reasoning through it. Now it’s figured out: the user wants to use this device, but we need to see what interfaces are available first. I can use the run command template tool to get information about the interfaces on this device. This is just extremely cool. It just knows the things it can try in order to obtain more information. It did that, and this is actually the right thing for it to do. It’s not pre-programmed; this is all on the fly.

    Joksan Flores • 07:03

    We started a new chat from scratch. And now it’s also figured out that there are three command templates that have the show ip int brief command, which shows a brief summary of the interfaces. So now it’s going to go ahead with one of them; the second one also does the same thing. So I’m going to go ahead and let it do that. This is going to take a second, but it will actually give the LLM the entire output of the show ip int brief command. And now the LLM will have enough context to go ahead and trigger that job. Effectively, we got the response, which is the entire output from the device.
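In the demo the LLM reads the raw show output directly, but the same output can also be parsed deterministically if you want structured data on the client side. A rough sketch, assuming standard IOS-style `show ip int brief` columns:

```python
def parse_show_ip_int_brief(output: str) -> list[dict]:
    """Parse 'show ip int brief' output into interface records (sketch)."""
    rows = []
    lines = [l for l in output.strip().splitlines() if l.strip()]
    for line in lines[1:]:  # skip the header row
        parts = line.split()
        if len(parts) >= 6:
            rows.append({
                "interface": parts[0],
                "ip": parts[1],
                # Status can be multi-word ("administratively down"), so join
                # everything between the Method column and the final Protocol column.
                "status": " ".join(parts[4:-1]),
                "protocol": parts[-1],
            })
    return rows

SAMPLE = """\
Interface              IP-Address      OK? Method Status                Protocol
GigabitEthernet1       10.0.0.1        YES NVRAM  up                    up
GigabitEthernet2       unassigned      YES unset  administratively down down
"""
```

A parser like this is how you would pre-structure the data if you did not want to spend LLM context on raw CLI output.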

    Joksan Flores • 07:38

    It looks a little messy, but it can read it. So now it has a list of all the interfaces that are on the device. So, let’s see: which interface would you like to troubleshoot, and should auto-remediation be enabled? Let’s pick GigabitEthernet 1.172, and you can set it to true or false, so you can specify your own. Let’s pick this one; I can just click on it. Okay, and now it’s figured it out: the user selected 1.172 and auto-remediation true. It already inferred that it can build the entire payload based on the information it has, and this is exactly what I wanted it to do.

    Joksan Flores • 08:25

    I wanted it to run the trigger endpoint, which is the right tool to launch an automation or an orchestration workflow. This is the actual route it’s going to use for that, and this is the actual payload: the device, the interface, and auto-remediate. Let’s go ahead and run that, and it did it. Let’s keep an eye on this and see what happens, and then we can actually go into the platform. It launched the Router Port Troubleshooting workflow with all those parameters, and now it’s actually figured out that this is an async call into the platform. It captured the job ID so that it can come back and check the status of the job. It went ahead and did that after I approved it. Let’s see.
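The async pattern described here (capture the job ID, then check back on the job) can be sketched as a small polling loop. The terminal status names and the fetch function are assumptions; in practice the fetcher would call the platform's job-status API:

```python
import time
from typing import Callable

def poll_job(job_id: str, fetch_status: Callable[[str], str],
             interval: float = 0.0, max_tries: int = 10) -> str:
    """Poll an async job until it reaches a terminal state (sketch).

    fetch_status is injected so this loop stays testable; a real caller
    would pass a function that hits the platform's job-status endpoint.
    """
    for _ in range(max_tries):
        status = fetch_status(job_id)
        if status in ("complete", "error", "canceled"):  # assumed terminal states
            return status
        time.sleep(interval)
    return "timeout"
```

This is essentially what the LLM did on its own: it recognized the trigger call returned immediately, held on to the job ID, and polled until the workflow finished.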

    Joksan Flores • 09:12

    My job has completed. There it goes. It ran the job, and it completed successfully. It says the troubleshooting process detected errors on the interface, and since auto-remediation was enabled, the system took corrective action: interface blah, blah, blah on the device shows errors, Itential Platform reset the interface. This is actually a message that comes from the workflow. It knew enough to go ahead and fetch all the information it required in order to launch the automation that we wanted to launch. It also knew what parameters to capture, and this is the other cool part: from the response it got back from the API, it was actually able to document the job in a very summarized way.

    Joksan Flores • 09:56

    It says: the workflow executed several steps. First, it ran the interface checks to identify issues. Then it evaluated the results and determined remediation was needed. Since auto-remediation was enabled (set to true), it generated remediation commands. Finally, it executed those commands on the interface. The entire process took approximately 18 seconds, and the interface should now be functioning properly. The auto-remediation feature successfully addressed the detected issues without requiring manual intervention. This is one of those things that’s extremely cool.

    Joksan Flores • 10:28

    Just the fact that with a few prompts, we were able to launch an automation, end to end, that we didn’t necessarily know how to launch from scratch. If I were to set up a Postman payload, that would take me a minute to do. It also highlights the fact that by having these MCP tools access a properly structured API, I can add a lot of power on top of the LLM with no hallucinations and no assumptions, because the API is already documented. And something that’s not entirely apparent here is that the accounts I use to interact with that API have proper RBAC controls. So I can actually choose to expose certain automation and orchestration workflows while hiding others from the LLM agent. Alright, this is super awesome, this is very cool, and we’ll keep playing with this to see what other things we can do. Thanks for tuning in.
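The RBAC point can be illustrated with a toy filter: the MCP tools only ever see the workflows that the API account's roles permit. The data shapes below are assumptions for illustration, not the platform's actual RBAC model:

```python
def visible_workflows(all_workflows: dict[str, set[str]],
                      account_roles: set[str]) -> list[str]:
    """Return only the workflows whose required roles overlap the account's roles.

    Sketch of the effect described in the demo: because the MCP server
    authenticates with a role-limited account, hidden workflows never
    even appear in the tool output the LLM reasons over.
    """
    return sorted(name for name, required in all_workflows.items()
                  if required & account_roles)
```

The key property is that the restriction lives in the API layer, not in the prompt, so the LLM cannot be talked into calling something it was never shown.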

Watch More Itential Demos