How Leading Utilities Are Orchestrating FAN Deployments for Long-Term Operational Intelligence

Utilities deploying Field Area Networks (FAN) are encountering the same challenges: complex onboarding workflows, fragmented data, and rising pressure to build an operational foundation that supports both grid resilience and AI-readiness.

This on-demand webinar will explore how forward-looking utilities are using automation and orchestration to simplify edge device onboarding, improve data consistency, and create the structured operational environment needed for advanced analytics and AI.


Joksan Flores and Karan Munalingal share lessons from programs like Southern California Edison’s, where orchestration is enabling:
  • Modular, policy-driven onboarding workflows across Day 0 through Day 2.
  • Automated SIM provisioning and integration with systems like SAP, MCDM, and ticketing platforms.
  • Device performance validation using parameters like RSRP, SINR, RSSI, and MCS.
  • Seamless coordination across IPAM, DNS, asset management, and lifecycle governance.
  • A future-ready data architecture that supports AI-driven insights and predictive operations.
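To make the device performance validation bullet concrete, a check along these lines could gate onboarding on radio health. The thresholds and field names below are hypothetical illustrations, not SCE's or Itential's actual acceptance criteria; real values are deployment-specific.

```python
# Hypothetical minimum-acceptance thresholds for a FAN edge radio link.
THRESHOLDS = {
    "rsrp_dbm": -105,   # reference signal received power
    "sinr_db": 5,       # signal-to-interference-plus-noise ratio
    "rssi_dbm": -85,    # received signal strength indicator
    "mcs_index": 5,     # modulation and coding scheme index
}

def validate_radio_metrics(metrics: dict) -> list[str]:
    """Return a list of failed checks for one device; empty means it passes."""
    failures = []
    for key, minimum in THRESHOLDS.items():
        value = metrics.get(key)
        if value is None or value < minimum:
            failures.append(f"{key}={value} below minimum {minimum}")
    return failures

# Example reading for a single device
reading = {"rsrp_dbm": -98, "sinr_db": 12, "rssi_dbm": -80, "mcs_index": 11}
print(validate_radio_metrics(reading))  # [] -> device passes validation
```

An orchestration would run a check like this after SIM provisioning and hold the workflow (or open a ticket) for any device whose link quality fails.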

If you’re scaling FAN deployments and tired of brittle, manual onboarding, watch this on-demand session to learn what “operationalized” looks like – end to end.

Why You Should Watch

This on-demand demo focuses on how Itential’s orchestration platform supports real-world FAN onboarding use cases, including:

  • Dynamic edge deployments
  • Config validation, config hardening, and software compliance
  • Integrations across legacy and modern systems
  • Demo Notes

    (So you can skip ahead, if you want.)

    01:34 Utilities Infrastructure Environment Overview
    05:00 Asset Onboarding Challenges
    08:42 Grid Modernization Objectives
    11:07 Itential Platform Introduction
    17:30 Automation Journey to Agentic Operations
    23:50 FAN Device Onboarding Demo
    37:33 Lifecycle Manager and Compliance
    39:11 AI Agent Integration Demo
    46:34 Wrap Up & Q&A

  • View Transcript

    Karan Munalingal • 00:00

    Good afternoon, everyone. Welcome to this Itential webinar, From Edge to AI: How Leading Utilities Are Orchestrating FAN Deployments for Long-Term Operational Intelligence. Today, I’m joined by my friend Joksan Flores. Joksan, if you don’t mind introducing yourself first.

    Joksan Flores • 00:22

    Hey, Karan. Hi, everybody. Good afternoon. My name is Joksan Flores. I’m a principal SA at Itential, and I work with various utility customers as well as other customers across the industry, including tier-one service providers. So we’re applying a lot of those lessons learned from the service provider world and bringing them to the utility industry, because there’s a lot of overlap. So, Karan, it’s a pleasure to be here with you.

    Karan Munalingal • 00:45

    Thanks, Joksan. And myself, I’m Karan Munalingal, SVP of AI Strategy and Innovation at Itential. So for today’s agenda, what we have in front of us is we’re going to talk through automation and AI for utilities, right? For all the utility companies. We’re going to flow into Itential’s automation and orchestration strategy, followed by the platform architecture and a set of use cases that a lot of utility and energy organizations are adopting as part of their automation journey. And finally, we’ll lead into a demonstration by Joksan, and then we’ll end the webinar today with Q&A. So, with that, let’s talk a little bit about a typical utilities environment.

    Karan Munalingal • 01:34

    As we all know, with respect to these verticals providing a lot of capabilities to consumers, there’s a lot of infrastructure involved in order to keep it operating, right? And there are two different types of infrastructure. If you look at the right-hand side, it’s a grid data center, right? This is where you have all the NERC CIP applications, networking, et cetera. At the same time, you’ll also have your OT network that actually supports your substations and potentially your AMI 2.0 next-gen smart metering infrastructure, right? So, from our standpoint, we’re looking at a vast variety of vendors involved in supporting utility networks, very similar to telecom networks, I would say, right? Because a lot of these utilities also manage their own backbone, MPLS, et cetera.

    Karan Munalingal • 02:27

    But at the same time, there’s a lot of infrastructure involved in hosting very secure applications within the grid that also might have to communicate with things within the FAN, right? The field area network, which is going to be the focus of our webinar today. So, you know, one thing I would like to mention, working with a lot of our utility customers: the number of infrastructure assets coming up in a utility environment continues to creep up. That’s because there is a need; AI, among other things, is driving a lot of requirements for these utility companies to make sure that they are sustainable, resilient, and also secure. So, with that, you can think about not only managing the brownfield networks that you currently have within your utility and telecom network, but at the same time, with the advent of all the new capabilities and new vendors providing smarter technologies, how do you securely bring those up in tandem with managing the existing assets? Right?

    Karan Munalingal • 03:32

    So, this is naturally going to lead to a conversation around how utilities will keep up with the explosion of network and security infrastructure assets coming up in their environment. So let’s take a look at how networking and operations teams actually work together in order to bring up a single asset. And I’m only talking about a single asset within their IT and OT networks. So, if you take a look at this particular slide, in order for you to bring up an asset fully, with day-one configuration, et cetera, the process not only flows through a lot of different individual team members, but also traverses a lot of different technologies. From left to right, you’ll notice there’s data entry and retrieval performed by humans who might be filling out Excel spreadsheets or a form within an internal portal. At the same time, when it gets handed off to your network engineers, they now have to resort to logging in to their existing technologies, like IPAM systems and security vault systems, as well as sometimes creating configuration templates so they can push those down to these devices. But it doesn’t end there, because as part of this entire process, they also have to follow a very strict change management process.

    Karan Munalingal • 05:00

    So, you can think about how every team member who is part of these siloed vertical teams is requesting what they need to do next as part of the entire change management process. So, imagine, as a consumer having to onboard a device in a utility network, I might have to open not just one, but five different tickets for five different teams, right? And just imagine having to do that for a lot of these devices coming up in the utility network at scale, right? We’re not just talking about 10; we’re talking about thousands of nodes that are about to come up to support AMI 2.0 and grid modernization initiatives. So, this is the current state, right? We all understand this is where we are, but there is absolutely a path forward on how you can achieve more with less, especially when we’re talking about automation and orchestration. So with that, it’s not just about onboarding an asset in your network.

    Karan Munalingal • 06:03

    Once that device, a router, a FAN edge device, et cetera, is onboarded in your network, IT or OT, now you have to take care of it until it’s decommissioned. So, what are some of these day-two activities that a lot of our operations teams actually focus on, right? Common things like patching and software upgrades, very common. You have to do that to make sure your infrastructure on the OT side is secure, but also within the grid, right? Because you have NERC CIP compliance requirements to meet. At the same time, we were talking about config drift.

    Karan Munalingal • 06:38

    This is a big part of where you run into issues during an audit, when you don’t actually realize that parts of your network are insecure because someone forgot some ghost config on some of these devices that came through a particular request. At the same time, let’s talk through some of the isolation techniques. So, if one part of the substation was attacked and you want to isolate it, these are some of the day-two things you have to do in support of security at large within the utility organizations, right? And finally, if you talk through change management, every organization that gets audited always wants to have a trail of what was done across the entire infrastructure. So, as I mentioned, it’s not just about easily onboarding something. It’s not a one-time activity. Once something has been onboarded within the OT network or the IT network, it’s important that you follow a similarly stringent process of managing and maintaining those assets.
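    The ghost-config idea above can be sketched in a few lines: diff a device's running config against a golden baseline and flag lines that nobody standardized. This is a minimal illustration, not the platform's actual compliance engine; the config lines are generic examples, not any specific vendor's syntax.

    ```python
    # Minimal drift check: lines required by the golden config but missing,
    # and "ghost" lines present on the device that no standard calls for.
    def find_drift(golden: str, running: str) -> dict:
        golden_lines = {line.strip() for line in golden.splitlines() if line.strip()}
        running_lines = {line.strip() for line in running.splitlines() if line.strip()}
        return {
            "missing": sorted(golden_lines - running_lines),     # required but absent
            "unexpected": sorted(running_lines - golden_lines),  # ghost config
        }

    golden = """
    ntp server 10.0.0.1
    snmp-server community monitoring ro
    """
    running = """
    ntp server 10.0.0.1
    ip route 0.0.0.0 0.0.0.0 192.0.2.254
    """
    drift = find_drift(golden, running)
    print(drift["unexpected"])  # the ghost route an auditor would flag
    ```

    A real compliance run would also classify each ghost line by severity and tie it back to the change request that introduced it, but the core operation is this set difference.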

    Karan Munalingal • 07:43

    And these are all the activities required to do so. So, let’s talk a little bit about this. In order to do all of that in the most human way possible, do we just hire more? Talking to some of the leaders at utility organizations, just hiring more is not the answer, right? So, the answer they’re looking to is: how can we adopt automation more within the utility space to do more with the existing teams and experts that we have? If you look at some of the challenges and objectives they’re aiming to solve in the upcoming years, number one is grid modernization. There has been a mandate because, in order to support the pressure and the requirements that AI is about to put on every utility in this nation, it is imperative that you modernize the grid so it’s adaptable to those sorts of requirements coming your way, in terms of capacity and volume as well as security.

    Karan Munalingal • 08:42

    Doing so efficiently is another thing that every leader is looking at, right? It’s not like you can just throw time and money at it, because you don’t have unlimited amounts of either. So, take a look at the problem: we have existing talent, but we have to manage the brownfield networks and the greenfield networks to be stood up. How do we do that efficiently while lowering cost, right? And the third piece, at the top there, is maintaining regulatory readiness. This is a big part for every utility organization, every energy organization, right? How do you make sure that the things you’re doing today, the way you’re operating today, not only match the compliance standards, but, as you start introducing automation, orchestration, and newer tools, even AI, how do you keep that conformance and stay in compliance with NERC CIP, right? So, those are the top three.

    Karan Munalingal • 09:40

    But then take a look at accelerating operations. This has been a big one, because you don’t have enough time to manage and maintain the explosion of devices coming your way, right? So, with respect to doing so in an accelerated fashion: instead of taking two weeks to onboard a device, can I now do that in one day, efficiently, with standards? That’s what everyone is chasing. And finally, embedding cybersecurity and security posture as part of your change management process, your auditing process, your onboarding process, but also your infrastructure change process, right? It’s very critical that security is not an afterthought.

    Karan Munalingal • 10:24

    The challenge is: how do they shift left and actually bring that up front earlier, so you’re abiding by the standards being defined based on the compliance requirements that have been established? And the final thing is the biggest challenge that every leader is looking at, starting over the last 12 months: how do we safely introduce AI? And what do we need to do as operators of our infrastructure to actually support AI-driven initiatives in the near future? So, these are some of the challenges that utility organizations are looking at and assessing how they’re going to solve going forward. So, with that, you know, let’s introduce Itential.

    Karan Munalingal • 11:07

    So, Itential is the infrastructure orchestration platform for the AI era, right? Why is that? Because the way we have built our platform is AI-ready by design. It supports unified orchestration because of its core capabilities around integrating securely into a lot of different tools, but bringing that together centrally for various teams to build, expose, and consume automation. And finally, speed and scale. This is the most critical requirement for a lot of our utility leaders: they can’t wait any longer. So, it’s not a matter of, hey, do I have enough people to actually do the work?

    Karan Munalingal • 11:51

    Now they’re asking the question: do I have the right platform to do this more efficiently, but also with speed, higher velocity, and at scale? And so, let’s talk a little bit about why utilities choose Itential specifically, right? A lot of our customers, especially in the utility vertical, are all on-premises, right? They have to abide by rules established by the compliance community, where you can’t just distribute everything in the cloud. So, with that in mind, the platform approach becomes essential, right? It’s the way that you shift your operating model, right?

    Karan Munalingal • 12:32

    You’re operating the way that you are today, with a lot of humans involved in every single activity. But if you’re to shift to an automation- and orchestration-first mentality, you actually need a foundational platform approach that enables you to do so very securely, right? So this is where Itential’s integration capability comes into the picture. Let’s say, you know, the security team wants to create security automation. The platform can integrate into your existing security tools and scripts that you might have written. At the same time, you have your data center team wanting to do the same. And now let’s talk about the OT side, right?

    Karan Munalingal • 13:10

    Let’s talk about the FAN team that has to do things slightly differently than your usual router onboarding, right? There are a lot of proprietary systems within the utility organization that you might have to integrate into in order to effectively orchestrate these business processes for onboarding and day-two activities. So think about the need for integration and how it becomes critical and central to a path forward around, you know, democratizing automation efforts within the utility organization. The next piece is very key: vendor agnostic, right? The reason organizations choose Itential as a platform is that we’re not tied vertically to any vendor. Our goal is to make sure we can integrate effectively with any technologies out there, so that anybody can automate effectively for their own domain, and then broaden that at the top level, where you can do cross-domain orchestration. And finally, you know, let’s talk about low-code and high-code teams.

    Karan Munalingal • 14:14

    There are certain utilities that have DevOps teams that love writing Python scripts and Ansible playbooks. Our goal is to adopt and foster those capabilities as part of your orchestration play, not just siloed automation. So this is where your low-code orchestration team, who know the business process, can leverage existing assets that other teams have already created within their own domains, right? And the last two are rapid time to value, which is key, and, finally, that the Itential platform enables AI initiatives. We’ll shortly talk about how we do that. So how does automation and orchestration effectively lead to higher and better AI adoption?

    Karan Munalingal • 15:00

    Whether everybody here knows it or not, AI is essentially data-driven. The better the data, the better its performance. The more the data, the better its performance. So if you look at how we generate better data quality with the right governance, this is where Itential’s capability in providing standardized automation and orchestration comes into the picture. Number two, AI, which is naturally non-deterministic when it reasons with an LLM, needs a deterministic arm. This is where a lot of our customers that choose Itential have the ability and control to create deterministic automation assets, whether it’s a script, a workflow, or an orchestration by itself. Those can be tools for AI to make secure, more deterministic changes.

    Karan Munalingal • 15:49

    The next piece is validation and visibility. We don’t even let a lot of humans participate and make changes directly in the network, and now you have AI agents, thinking for themselves, wanting to do the same. So validation before they make a change becomes important, but visibility into what they’re doing becomes even more important, because you want that trail to make sure you’re keeping track of what the agent is doing at all times, right? This comes back to the auditing capability, where an auditor walks in and says, hey, what did you guys do last week within the infrastructure? The agent works faster, 24/7. And this is why visibility is important.
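    One way to picture the "deterministic arm plus audit trail" idea is a tool registry: agents can only act through named, deterministic automations, and every invocation is logged. This is a hypothetical sketch of the pattern, not Itential's actual agent interface; the tool name and function are illustrative.

    ```python
    # Hypothetical pattern: deterministic automations registered as agent tools,
    # with every invocation recorded so an auditor can replay what agents did.
    import datetime

    AUDIT_LOG = []
    TOOLS = {}

    def tool(name):
        """Register a deterministic function as an agent-callable tool."""
        def wrap(fn):
            TOOLS[name] = fn
            return fn
        return wrap

    @tool("check_device_reachability")
    def check_device_reachability(device: str) -> dict:
        # Stand-in for a real workflow; always returns a structured result.
        return {"device": device, "reachable": True}

    def agent_invoke(name: str, **kwargs):
        """The only path an agent has to act: named tools, logged calls."""
        result = TOOLS[name](**kwargs)
        AUDIT_LOG.append({
            "tool": name,
            "args": kwargs,
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })
        return result

    print(agent_invoke("check_device_reachability", device="fan-edge-01"))
    print(len(AUDIT_LOG))  # every call leaves a trail
    ```

    The key design choice is that the non-deterministic reasoning layer never touches the network directly; it can only pick from a vetted, logged set of deterministic actions.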

    Karan Munalingal • 16:31

    And the final piece is AI readiness, right? With the introduction of MCP in our world, there are a lot of folks wanting to incorporate it into their infrastructure automation strategy. So if you look at how automation, which is deterministic, prepares you for the AI world, with an agent doing work as a digital coworker, this now becomes an essential part for you to foster and take care of while you’re adopting more AI initiatives internally. So, once you have all of these pieces of the puzzle in play, this will 100% lead to better and more secure AI adoption. So let’s very quickly talk about the journey that a lot of our customers have taken from no automation to agentic operations, which is actually a new facet that everyone is chasing, right?

    Karan Munalingal • 17:30

    A lot of our utility customers today went from limited automation to task automation very quickly, because there are a lot of community tools out there to help them do that. But when they started preparing for process orchestration, that’s where Itential came into the picture, because we enable them to leverage their existing automation and build that out to actually perform orchestrations, which then led to self-service capability for your operations team, your engineering team, your security team. And finally, when we’re talking about agentic operations as one of the net-new goals with the advent of AI: how do you change your operating model so you’re working side by side with agents, right? This is where you take a more outcome-driven approach versus a more static approach, right? Your environments are always changing in the OT world. Can an agent keep up with those changes and suggest remediation paths? This is where we’re thinking about how we support our customers in the next part of their journey, going from self-serve infrastructure to agentic operations across the board.

    Karan Munalingal • 18:39

    So let’s briefly talk about the platform itself, the stack, right? Itential is a platform for agentic infrastructure operations. What does that mean? If you look at it bottom-up, the entire goal here is to effectively integrate into any stack. We call this instrumentation. It becomes imperative and very important that we’re able to instrument any change in the network securely, with good logs and auditing, right? Now you build upon that.

    Karan Munalingal • 19:09

    Once you have the instrumentation layer, you have the platform providing the deterministic capabilities. What are those? Configuration and validation, right? Compliance becomes a big thing. Engineering teams get to define their standards. Those then become standards not only for humans, but for machines and agents. Then you’re talking about things like lifecycle.

    Karan Munalingal • 19:30

    Instead of having a very arduous, long-running orchestration or script, you now have the ability to stage your operations like you would as a human and actually run them through an orchestration path. This is where remembering what an automation does in your network becomes very key, because now you can act off the things it remembers on your behalf. And finally, if you take a look at the reasoning layer, this is where we pair AI intelligence with the deterministic capability within the platform to drive agentic operations, right? This is a different way of operating your infrastructure from what you did years ago, right? This is where you start building trust in your AI strategy while also keeping a lot of control over what agents can do. And finally, northbound, take a look at the consumers, right? It’s very key that you can expose anything that you build, whether it’s an automation, an orchestration, or an agent.

    Karan Munalingal • 20:32

    If you’re not able to expose that securely and in a simplified manner to your consumers, you’re not going to get value out of it. So, from our perspective, we put a lot of effort into making sure you can invoke automations through Salesforce, ServiceNow, Remedy, or through a pipeline like GitLab, et cetera, that line-of-business teams have, or that it might be your operators or agents, right? Think about your ops team on the OT side, who are able to go into ChatGPT or what have you and use natural language to inquire about a particular substation, right? Or request a change, to say, hey, can I please make that VLAN change in this particular data center, because that’s where I need access, right? So you can think about the opportunity that we have to accelerate automation, but also the adoption of it up top. So, I’ll quickly talk about the automation paradigm in the AI era.

    Karan Munalingal • 21:36

    You know, thinking about modularity becomes a very critical part of this, right? The more things a platform integrates into, the more experts can jump into the platform and bring their subject matter expertise to build their portions of the entire puzzle, right? So, think about Joksan as a security expert who can come in and build very specific automation for something like Palo Alto, right? And there might be an IPAM expert who might do the same thing for Infoblox, and the same for ServiceNow, right? So, the thought process here is: how do we bring more experts into the platform so they can capture their own expertise as an automation? But once you have all these Lego blocks built, you can build something like this. So, in this particular example, you’ll notice there are a lot of different blocks doing different things, potentially created by different team members.

    Karan Munalingal • 22:36

    But the thought process here is that they have actually transcribed their MOP, which might be in their head or on paper, into an actual asset that can be automated and executed. So this is where you have the flexibility, when we talk about rapid time to value, to quickly put these Lego blocks together to solve different business challenges, right? One day it’s creating a firewall security rule with change management. Another day it could be standing up a brand new data center. And a third could be onboarding a FAN device, like an edge device, in your FAN network, right? So, you know, with that, I’m just leaving a thought here: if you’re thinking about automation and orchestration, think about which teams can participate and which technologies could be built upon and then brought into a platform like Itential, to support more teams and more experts contributing, so you can go faster.
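    The Lego-block idea above can be sketched as small, reusable step functions, each owned by a different expert, chained into one orchestration over a shared context. This is an illustration of the composition pattern only; the step names and values are hypothetical, not actual Itential workflow tasks.

    ```python
    # Each "block" consumes and enriches a shared context dict; an orchestration
    # is just an ordered chain of blocks contributed by different teams.
    def reserve_ip(ctx):
        ctx["ip"] = "192.0.2.10"          # IPAM expert's block (stubbed)
        return ctx

    def render_config(ctx):
        ctx["config"] = f"hostname {ctx['device']}\nip {ctx['ip']}"
        return ctx

    def open_change_ticket(ctx):
        ctx["ticket"] = f"CHG-{ctx['device']}"  # ITSM expert's block (stubbed)
        return ctx

    def orchestrate(device, steps):
        ctx = {"device": device}
        for step in steps:
            ctx = step(ctx)
        return ctx

    result = orchestrate("fan-edge-01", [reserve_ip, render_config, open_change_ticket])
    print(result["ticket"])  # CHG-fan-edge-01
    ```

    Swapping the step list is what lets the same blocks serve a firewall-rule change one day and a FAN device onboarding the next.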

    Karan Munalingal • 23:39

    So, with that, I would love to hand it off to Joksan, and he’s going to talk a little bit about what he’s going to demonstrate today as part of the webinar, and then we’ll go from there. Joksan, over to you.

    Joksan Flores • 23:50

    Thank you, Karan. One of the things that’s super important to discuss and focus on here is that, as we go along through the process, we’re going to be showcasing a lot of that modularity that Karan was talking about throughout his discussion. A lot of times when we talk to customers and go through some of these demonstrations, we hear: okay, I have this system, I have that system, I don’t have that other system. Karan talked about this a little bit. We focus a lot on providing integrations and capabilities that let us integrate with many, many, many systems. Karan also discussed how in the FAN world, in the OT world, there are a lot of proprietary solutions. So nowadays, we have field engineers having to learn how to deal with various different products.

    Joksan Flores • 24:35

    We try to commoditize and make that process super easy. Today, we’re going to be very focused. We have partnered with Beck to provide us a platform that they use in the SaaS environment to manage their OT gateways. But this is actually a process that’s very universal, right? This could work with Beck, or it could work for any IoT vendor out there, whether the devices have an API or are managed using CLI, whether they are centralized and zero-touch provisioned by a controller, or we actually have a tech logging in, provisioning, and onboarding them into the grid network.

    Joksan Flores • 25:08

    The whole idea of this demo is that we’re going to take a device that has been onboarded into the Beck platform. We are only doing one device today, but this could be many, many, many devices. Karan talked about the challenges our utility customers are facing today being at the scale of hundreds and thousands. And one of the things that I hear constantly is: I have to onboard this device not just into the platform that manages it, but I also have to keep it onboarded in my sources of truth. One of the things that becomes a challenge is: the device is on my grid network, but now I’ve got to ensure that that device is compliant, that it passes the NERC standards, that I can report against it, but also that I’m monitoring it, right?

    Joksan Flores • 25:53

    You know, one of the things we hear, not just from utilities but from many customers, is: oh, I found out my device has an outage, and come to find out the device wasn’t actually onboarded into my management platform. Karan, this is a thing you probably hear a lot too, right? Hundred percent. Yeah, super common. So, what we’re going to do today is focus on onboarding a device. We’re going to be reading data from Beck Central and extracting data from those devices.

    Joksan Flores • 26:19

    We’re going to then onboard the device into various other systems that are, you know, kind of emulating what a standard customer environment would be. In our case, we’re using the ServiceNow CMDB, we’re using NetBox, and we’re using Zabbix. Keep in mind that this is just a standard, universal thing, and we could do this across many other systems. It doesn’t have to be these three. It could be Nautobot, it could be InfraHub.
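    The multi-system onboarding described here hinges on being idempotent: check whether each source of truth already has the device before creating it, so the job can run on a schedule or on demand without duplicating records. A rough sketch, with in-memory stand-ins for the ServiceNow, NetBox, and Zabbix APIs (these are not real client libraries):

    ```python
    # Idempotent onboarding across several systems of record.
    class FakeInventory:
        """Stand-in for one external system (CMDB, IPAM/DCIM, monitoring)."""
        def __init__(self, name):
            self.name = name
            self.records = {}

        def exists(self, serial):
            return serial in self.records

        def create(self, serial, attrs):
            self.records[serial] = attrs

    def onboard(device, systems):
        actions = []
        for system in systems:
            if system.exists(device["serial"]):
                actions.append((system.name, "skipped"))   # already onboarded
            else:
                system.create(device["serial"], device)
                actions.append((system.name, "created"))
        return actions

    systems = [FakeInventory("servicenow"), FakeInventory("netbox"), FakeInventory("zabbix")]
    device = {"serial": "GW-0001", "site": "substation-7"}
    print(onboard(device, systems))   # everything "created" on the first run
    print(onboard(device, systems))   # everything "skipped" on a re-run
    ```

    In the real workflow each `exists`/`create` pair would be an API call against the actual system, but the skip-or-create logic is what makes the job safe to re-run at scale.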

    Joksan Flores • 26:42

    It could be any other system out there that you have. Then, one of the challenges our customers face: immediately after that device gets onboarded, they want to make sure they validate the hardened configuration of that device, right? We talked about those NERC CIP standards: the SNMP config, the NTP, certain packet filters and ACLs that have to be applied, all those kinds of things we want to have on the device. So, we’re going to actually do that process from the get-go, in one run. And if we find violations, we’re going to go ahead and apply a hardening config to that device. Then we’re going to step out and look at the capability we have in Itential once we have onboarded that device.

    Joksan Flores • 27:26

    And Karan talked about the ability to have a lifecycle for that device, right? The life of that device doesn’t stop there. Now we’ve got to do the care and feeding, because, and I hear this all the time too, now the device is part of my grid. I’ve got to make sure it’s protected, because that device is in a substation somewhere, or somewhere else. I’ve got to make sure we apply the standards so that we minimize the risk profile of that device, because it could potentially give a bad actor access to the rest of my environment.

    Joksan Flores • 27:56

    So we actually have to do the care and feeding. Once we do that, we’re going to implement a series of day-two actions using Itential Lifecycle Manager. Today I want to focus on having the operational capability of evaluating things like software compliance and also config compliance. So I’m actually going to stop along the way, and I’m going to inject bad config on the device again. We’re going to go ahead and stop and do that live. So even though we are onboarding the device and checking and applying hardening config, I’m actually going to stop along the way, inject some bad config, and then we’re going to go back and check, and actually also do some reporting, because we want to talk about those requirements that NERC CIP provides, and other standards, right? Other compliance and regulatory things.

    Joksan Flores • 28:43

    We want to make sure that we provide the capability for you to report on demand, report on schedules, and automate that reporting, right? We hear this a lot. A lot of our customers do reporting manually, so that’s probably not the best thing we want to keep doing. Okay, so we are here in the Itential platform, and I am on the Operations Manager portal. What we’re going to do is pretty simple: we’re going to go ahead and execute this orchestration job, and this will actually start the onboarding process into the various systems we talked about before.

    Joksan Flores • 29:13

    But the one thing that I will show first is, I want to go into Beck Central real quick, which we have used. Like I said, we have partnered with Beck to provide us an environment. And I have a device that I have onboarded here. This device has a series of parameters and so forth. There are config things that I can look at. I can go here and look at all the stuff that the device is doing. But imagine having to do this manually for hundreds and thousands of nodes, right?

    Joksan Flores • 29:36

    It’s something that’s not very easy to do, and it doesn’t scale really well. Karan talked about this earlier as well, right? Throwing more bodies at this is not going to solve that problem. So we said, hey, let’s go ahead and showcase how we can actually orchestrate this process and make sure that it’s done across the board. So I’m going to go ahead and launch that. This would be a standard operator. We’ll come in here.

    Joksan Flores • 29:58

    We’d find the FAN device onboarding workflow. It’s triggered manually here; I could trigger it via API or on a schedule, but I’m just going to trigger it manually for now. Now, this happens really quickly. Everything we’re interacting with is APIs, but essentially what we’re doing is going through the entire list of devices. We’re going to fetch all the gateways that are provisioned in BECentral, and we’re going to do some data manipulation and data extraction like we talked about earlier.
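
    As a rough illustration, triggering this kind of job programmatically rather than manually might look like the following Python sketch. The endpoint path, workflow name, and payload shape are all assumptions for illustration, not the documented Itential API.

    ```python
    # Hypothetical sketch of building an API call to trigger an orchestration
    # job; endpoint and field names are illustrative assumptions.

    def build_trigger_request(base_url: str, workflow: str, variables: dict) -> tuple:
        """Return the URL and JSON body for a hypothetical workflow-trigger call."""
        url = f"{base_url}/operations-manager/triggers/{workflow}"
        body = {"workflow": workflow, "variables": variables}
        return url, body

    url, body = build_trigger_request(
        "https://platform.example.com",   # hypothetical platform URL
        "fan-device-onboarding",          # hypothetical workflow name
        {"source": "api"},
    )
    ```

    The same request could just as easily be fired from a scheduler, which is the point being made here: manual, API, and scheduled triggers all run the same workflow.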

    Joksan Flores • 30:29

    And then for each gateway, we’re going to do a few things, right? We’re going to do a couple of operations. We’re going to onboard each device one by one. Onboarding means we’re going to provision it in Lifecycle Manager. If you look at the check marks, all of these tasks have already completed with no errors. It ran so quickly that you didn’t see it getting triggered here.

    Joksan Flores • 30:47

    When we trigger the device in Lifecycle Manager, and I will show what that looks like in a second, we’re actually doing a few things. This is where we go and onboard the device into ServiceNow, Netbox, and also Zabbix. We want to make sure it’s onboarded into all three of those systems, and we validate along the way: if the device has already been onboarded, we just skip it and move on. So this is something that we can execute as a recurring operation, or operators can trigger it on demand. Then, when that’s done, we’re going to walk through every single device that has been onboarded into the system, and we’re going to go ahead and check the hardening config, making sure that it’s compliant.
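
    A minimal sketch of that idempotent onboarding step, with in-memory stand-ins for the ServiceNow, Netbox, and Zabbix clients (real API clients would replace `InMemorySystem`):

    ```python
    # Sketch of idempotent multi-system onboarding: create the device in each
    # target system only if it is not already there, so re-runs are safe.

    class InMemorySystem:
        """Stand-in for a real API client (ServiceNow, Netbox, Zabbix)."""
        def __init__(self):
            self.records = {}

        def exists(self, serial: str) -> bool:
            return serial in self.records

        def create(self, device: dict) -> None:
            self.records[device["serial"]] = device

    def onboard_device(device: dict, systems: dict) -> dict:
        """Onboard one device into every system, skipping any that already have it."""
        results = {}
        for name, client in systems.items():
            if client.exists(device["serial"]):
                results[name] = "skipped"   # already onboarded: ignore and move on
            else:
                client.create(device)
                results[name] = "created"
        return results

    systems = {"servicenow": InMemorySystem(), "netbox": InMemorySystem(), "zabbix": InMemorySystem()}
    gw = {"serial": "OT-GW-001", "mgmt_ip": "10.0.0.5"}
    first = onboard_device(gw, systems)    # everything created on the first pass
    second = onboard_device(gw, systems)   # everything skipped on the re-run
    ```

    The skip-if-exists check is what makes the workflow safe to run on a recurring schedule.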

    Joksan Flores • 31:36

    If it’s not compliant, then we’re going to go ahead and apply the hardening config. One of the things that we have done is design configuration compliance for this device type in our golden config application. This application gives us the ability to design golden config not only for CLI devices that are managed via SSH or Telnet, but also config compliance for devices that are JSON-driven. I can ignore certain fields that are device-specific, but I can also enforce things like SNMP, packet filters or ACLs, NTP, RADIUS and TACACS values, any of those things. And I can manage those devices as an endpoint here. So, this is the config example. I’m not going to scroll through the whole thing.

    Joksan Flores • 32:26

    It’s pretty big, but it showcases the idea that I’m able to enforce a certain standard on those devices. So, let’s go back in here. Now that we’ve finished explaining this flow, let’s go back and look at what it did in our Lifecycle Manager. So, this is the Lifecycle Manager application. Essentially, we have various instances of these devices here. If I had multiple, imagine hundreds of these, they would each show here as a row. When I select this device, it’ll show me everything that I have associated with that resource.

    Joksan Flores • 33:04

    Actions, properties, and history, which is very important. So, I can actually go and look at the history, for example. If I go and look here, I can see the process of onboarding and all the data that we captured, and more importantly, I’ve got an entire trace and audit log of the workflow that was launched to onboard the device, like I mentioned before. We added the device to ServiceNow, added it to Netbox, added it to Zabbix, and then we saved that instance in Lifecycle Manager. Going back to Lifecycle Manager, I can look at those properties like we saw earlier in that audit log. I have certain properties that I have remembered from this device. So, going through here, I have modeled certain things that I want to remember from this device.

    Joksan Flores • 33:49

    I could model lots and lots of things, but I’ve decided to only model certain attributes of the device, mainly because there are systems that already have a lot of this information, right? BECentral becomes the source of truth. But one of the things that I do want to do here is tie the device parameters from BECentral, from my OT gateway, to the other things that I have done with onboarding. So, I remember the device ID, which is super important for performing the day two operations, the MAC address, management IP, serial number, device name, customer ID, et cetera.
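
    Modeling only the attributes worth remembering, and tying the gateway parameters to the IDs assigned by each downstream system, could look something like this sketch; the field names are illustrative stand-ins, not the actual Lifecycle Manager schema.

    ```python
    # Sketch of a per-device model: a handful of gateway attributes plus the
    # identifiers each downstream system assigns during onboarding.

    from dataclasses import dataclass

    @dataclass
    class FanDeviceInstance:
        device_id: str                        # ID in the gateway management platform
        mac_address: str
        mgmt_ip: str
        serial_number: str
        device_name: str
        netbox_id: int | None = None          # filled in after Netbox onboarding
        servicenow_sys_id: str | None = None  # filled in after ServiceNow onboarding
        zabbix_host_id: str | None = None     # filled in after Zabbix onboarding

    gw = FanDeviceInstance(
        device_id="dev-148", mac_address="00:11:22:33:44:55",
        mgmt_ip="10.0.0.5", serial_number="SN12345", device_name="ot-gateway-1",
    )
    gw.netbox_id = 148   # remembered so Netbox remains the source of truth for it
    ```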

    Joksan Flores • 34:23

    But then also things like the Netbox ID. So, Netbox becomes my source of truth for that device. It’s been onboarded; I show it here. It’s device 148, and it shows my URL for the device. So, let’s go check Netbox really quick. And I can see here that my Itential OT gateway has been onboarded into Netbox.

    Joksan Flores • 34:46

    Going back into Lifecycle Manager again, I also have ServiceNow as a system where I want to onboard my device, just so that I can keep device serial numbers and track RMA processes and the lifecycle of the device hardware and things like that. The device has been synced and onboarded into ServiceNow. So if I go into ServiceNow and refresh here, I have my Itential OT gateway onboarded here as well, with the vendor and all its attributes. And then also, in Zabbix, I have onboarded my device. It has a host ID down here, so it’s been onboarded to Zabbix. Where is it?

    Joksan Flores • 35:23

    Right here, with this management IP and everything. One of the other things that I have modeled here is some of the day two operations that I was discussing, mainly the things that we want to focus on from a demonstration perspective. I’ve got software and firmware compliance. They are unknown right now; that is on purpose, by design. We’re actually going to go ahead and trigger those manually here just to showcase how we can do it on the platform. We could also do this via the operator portal.

    Joksan Flores • 35:52

    And then later on, we’re going to do it using AI. And then we also have the configuration compliance capability. So what I’m going to do is go ahead and start the configuration compliance process. We know that when we onboarded the device, we executed config compliance, found some issues, and applied remediation. So my device should show 100% compliant. Let’s go ahead and validate that. I can show in my history that my config compliance ran very quickly.

    Joksan Flores • 36:21

    This is an API-driven device, so the compliance runs extremely quickly. You can imagine that when you start doing these things in volume, it will take a little longer. But fortunately, here we’re good. Okay, so I have this device, and I have all the config drift detected. What I’m going to do is go ahead and apply the hardening config again, to remediate some of the non-compliant config that we injected on the device earlier. So I’m going to go ahead and apply that.
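
    The drift-check-and-remediate loop for a JSON-driven device can be sketched as follows; the golden values, field names, and ignore list are illustrative, not the actual device schema.

    ```python
    # Sketch of JSON config compliance: compare actual config to the golden
    # config, ignoring device-specific keys, then write the expected values
    # back for anything that drifted.

    GOLDEN = {
        "snmp_ro_community": "itential-ro",   # illustrative hardening values
        "ntp_server": "10.0.0.10",
        "ip_filter_enabled": True,
    }
    IGNORE = {"hostname", "serial"}           # device-specific fields we skip

    def find_drift(actual: dict) -> dict:
        drift = {}
        for key, expected in GOLDEN.items():
            if key in IGNORE:
                continue
            if actual.get(key) != expected:
                drift[key] = {"expected": expected, "actual": actual.get(key)}
        return drift

    def remediate(actual: dict, drift: dict) -> dict:
        fixed = dict(actual)
        for key, diff in drift.items():
            fixed[key] = diff["expected"]     # apply the hardening value back
        return fixed

    device_cfg = {"hostname": "gw-148", "snmp_ro_community": "public", "ntp_server": "10.0.0.10"}
    drift = find_drift(device_cfg)            # flags the bad SNMP community and missing filter
    healed = find_drift(remediate(device_cfg, drift))   # empty after remediation
    ```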

    Joksan Flores • 36:52

    And then we should be good to go. One thing that’s super important to call out in Lifecycle Manager, and we will move into the AI portion a little bit after this, is that all the changes and all the actions that I have taken are remembered here. I have not made any changes to the device model, but I’m tracking every single one of them. So if I go and look at the job here, like I showed during onboarding as well, I am tracking every single change that I make in that environment and on that device. And this happens on a per-device basis. So all those day two capabilities that I offer, right?

    Joksan Flores • 37:33

    All these things like update target software version, which we’ll look at doing with AI. We’ve already looked at validating config compliance. We can also implement things like block endpoint, which applies traffic filters and validates all the services. All the actions that I have in here are available as day two actions because I remember all those attributes from that device. And I have that audit logging available as well. Okay, so now that we have talked about the capability that’s offered in the Itential platform, let’s talk about how we expose this into a secure AI framework.

    Joksan Flores • 38:12

    One of the things that’s very important, which Karan mentioned, is that the AI world is non-deterministic by nature, right? An AI agent, with the advantages that an LLM provides, will reason about a problem with the context it has and come up with a slightly different solution every single time. So what we want to make sure is that we expose a very deterministic arm to the platform, or to the AI agents. Here, I have Claude Desktop, in which I have prepared an agent. This agent has a series of tools that expose Itential functionality via the Itential MCP server, as well as some prompts that I have created that are very specific to my FAN device lifecycle management. So, what I will do is put some prompts in here that will give me the capability of identifying those devices, running compliance on them, and so forth. So, let’s go ahead and do that.
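
    As a toy sketch of the idea, exposing deterministic platform actions as named tools the agent can call, in the spirit of an MCP tool registry, might look like this. The registry and tool bodies below are stand-ins, not the actual Itential MCP server.

    ```python
    # Sketch of a tool registry: the agent reasons non-deterministically, but
    # every action it takes goes through a deterministic, auditable tool call.

    TOOLS: dict = {}

    def tool(fn):
        """Register a function as a tool the agent may invoke by name."""
        TOOLS[fn.__name__] = fn
        return fn

    @tool
    def get_instances(resource: str) -> list:
        # In reality this would query Lifecycle Manager; stubbed for the sketch.
        return [{"name": "ot-gateway-1", "resource": resource}]

    @tool
    def run_config_compliance(device_name: str) -> dict:
        # Would launch the deterministic compliance workflow; result shape is illustrative.
        return {"device": device_name, "drift_detected": True}

    devices = TOOLS["get_instances"]("fan-devices")
    result = TOOLS["run_config_compliance"](devices[0]["name"])
    ```

    The key design point is that the tool functions, not the LLM, contain the actual change logic, so every run is repeatable and logged.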

    Joksan Flores • 39:11

    What I’m going to do is go ahead and run config compliance and have it send me a compliance report. This is one of the actions I have made available as a day two action; to keep up with the regulatory constraints and demands that we have, having the capability of creating reports on the compliance of a device is critical. In the background, we have created some exceptions in the configuration just to make sure we have a compelling report. So, you see here that now the agent is reasoning through the FAN devices, and it’s making a series of requests to the platform. It’s getting all the resources to describe and discover the capabilities that the platform offers. It also uses get-instances to make sure that it gets all the FAN devices onboarded into that resource. And then it says that it found one device, which is what it should find, right?

    Joksan Flores • 40:14

    If we had multiple, then it would find multiple. And then it’s also going to go ahead and check compliance. It says the last compliance check shows drift, with a 99% compliance score, which is great because that’s what we want to find, right? We actually want to have some data to show on that report. So it went ahead and ran the compliance again, and it sees that there is still drift on the device. And that’s because we want to make sure that we have a report that we can show to you all. So we have actually gone ahead and executed the compliance.

    Joksan Flores • 40:43

    We get a summary of the device inventory saying the Itential OT gateway, with this MAC address and this IP, has this data with it. It’s got drift detected, with 99% compliance. Okay, so now that we have identified that we have drift on the device, we’re going to go ahead and ask it for a compliance report for the Itential OT gateway. It’s going to schedule that and say, I’ll send a configuration compliance report for the Itential OT gateway. Because my agent now has context and knows about all those day two actions that I can execute on that device, it’s able to send the report to me. So it says the configuration compliance report was sent successfully. We know that the device had a 99% score earlier from those BECentral calls that we made.
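
    Assembling the emailed report body could be sketched along these lines; the structure, score, and findings are illustrative stand-ins for whatever the compliance run actually produces.

    ```python
    # Sketch of formatting an on-demand compliance report from drift findings.

    def build_report(device: str, score: float, findings: list) -> str:
        lines = [f"Configuration Compliance Report: {device}",
                 f"Compliance score: {score:.1f}%", ""]
        for f in findings:
            lines.append(f"- {f['category']}: {f['issue']}")
        return "\n".join(lines)

    report = build_report("itential-ot-gateway", 99.2, [
        {"category": "security", "issue": "missing SNMP read-only community"},
        {"category": "security", "issue": "missing trap server"},
    ])
    ```

    The same builder could feed a scheduled run that mails the report to a whole team, which is the on-demand-versus-scheduled distinction made earlier.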

    Joksan Flores • 41:32

    Okay, so now that my report has been sent successfully, we see that we have a compliance score of 99 out of 100. I’m going to go to my email really quick and find that report. And here we go. So, as part of those day two actions, I now have the ability to send reports on demand to myself or to my team; I have included multiple people here, Karan and others as well. And it shows that my device is at 99.2. Now, the agent had the right intuition in saying, hey, it doesn’t matter if it’s 99.2 or 99; the 1% that’s missing could be critical parameters of the device that we want to worry about.

    Joksan Flores • 42:11

    And sure enough, right? We actually have security configuration and system configuration issues that are a huge priority. Things like SNMP, right? I’m missing my SNMP itential-ro read-only community. I’m also missing one of my trap servers. I am missing some IP filters and so forth. And this is something that we could do extensively throughout the config, just to make sure that we’re complying with those security frameworks that we have mandates to deal with.

    Joksan Flores • 42:41

    Now that we’ve done that, I also want to talk about software compliance checks a little bit. We’re going to set the software standard to 1001, and we’re going to execute a software compliance check on that device. Notice that for this capability we’re again leveraging that BECentral API, but this is something that we do for lots of device types, right? Tens and hundreds of these things. So we can do this at a CLI level, or we can do it at an API level as well on these devices. My agent is reasoning through this right now. It’s completing my software compliance check.

    Joksan Flores • 43:21

    You can notice here that I set my software standard to 100.1211.221, and the device is actually running 1001211. So it’s not compliant, and it requires an upgrade. One of the things that I could do down the road is come back and build an action into my Lifecycle Manager to schedule a maintenance window, stage the firmware upgrade, and then afterward come back and check the compliance and the configuration and do all sorts of other day two activities. Okay, awesome. So after showcasing all that, we want to talk a little bit about how we support our utility customers, right? Having the capability to offer multi-domain orchestration is a super important pillar for us to drive and support modernization across legacy systems and also next-gen systems. We talked about how we can do these things at a CLI level and at an API level, in mixed environments, all those types of things.
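
    The version comparison underneath a software compliance check like this can be sketched as follows, assuming simple dotted numeric versions for illustration:

    ```python
    # Sketch of a software-compliance check: the device is compliant when its
    # running version is at least the standard version.

    def parse_version(v: str) -> tuple:
        return tuple(int(part) for part in v.split("."))

    def software_compliant(running: str, standard: str) -> bool:
        return parse_version(running) >= parse_version(standard)

    ok = software_compliant("1.0.0.2", "1.0.0.1")              # newer than standard
    needs_upgrade = not software_compliant("1.0.0.0", "1.0.0.1")  # flags the upgrade
    ```

    A non-compliant result is exactly where a follow-on Lifecycle Manager action, scheduling a maintenance window and staging the firmware upgrade, would hook in.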

    Joksan Flores • 44:30

    We want to do these things at scale. So we definitely want to focus on the big demands that the AI world is bringing to our utility companies: driving up scale and lowering maintenance costs while, like Karan said earlier, not necessarily throwing bodies at the problem, but handling these problems more effectively. Resiliency and downtime are super important as well. The ability to validate constantly in an automated fashion, to check software compliance and configuration compliance, and to validate the operational health of a particular device or series of nodes is something that we have a lot of capability to do at scale, just so that we minimize downtime and the time that techs have to spend troubleshooting various environments. The reporting capability is super key, right? We showcased some of the things that you can do with the platform. All of that is customizable to make sure that you meet your security and regulatory requirements.

    Joksan Flores • 45:35

    So, things like reporting on a single device, a single market, or a single section of the network are super important for us. And one of the things that we also demonstrated is making sure that we’re prepared to inject our deterministic flows, our deterministic capability, and our rich integrations into the AI-driven world when you’re ready. This happens on your terms. You can take action today, build a lot of these orchestrations, integrate the systems into the platform, and then, at your own pace, start injecting AI into the process to augment what you did. All the stuff that I did today was mainly read-only. You can improve a lot of that. This is what we see with our customers.

    Joksan Flores • 46:21

    They start with read-only activities, and then they move on to start applying remediations, but also in a controlled way by executing deterministic workflows. Karan, any last comments?

    Karan Munalingal • 46:34

    No, I think we do have some questions in the panel. If you can address them, Joksan. The first one is interesting, right? The question is: you just showcased a demonstration specific to an OT network, like a FAN edge device that you onboarded. Is this platform only made for OT networks, or is it also applicable to the usual IT and data center environments?

    Joksan Flores • 47:00

    No, we’re actually deployed in multiple environments, where we’re addressing various use cases, because we’re multi-vendor and we provide a lot of generic orchestration capability. We can be deployed in IT environments managing traditional networking and infrastructure, and also in OT environments. This is one of those things that our customers find very attractive, right? Recently, I was talking to an energy customer, and they thought we were OT-specific just because of some of the events we were attending. And we said, no, this is actually generic. So they said, oh, we’re going to bring our IT counterparts so that we can talk about the capability all together.

    Karan Munalingal • 47:38

    Awesome. And the second one that came through, and I know you might have addressed it as part of the demonstration: in the demonstration, you integrated with Netbox, which is fairly popular, ServiceNow, and Zabbix. But in their utility networks and environments, people have a lot of proprietary tools that might have REST APIs or SOAP APIs. So the question is: will this work with other systems that were not identified in the demonstration, or in that picture during the presentation?

    Joksan Flores • 48:17

    Most definitely. Even in the big diagram with all the logos, right, there are certain things we weren’t able to cover because we don’t have the logos, or they’re internal systems. We’re definitely able to integrate with all those various systems. We have a very rich integration framework for anything that has APIs. If it’s a SOAP API, we can look at our adapter framework, which supports SOAP as well. And we also have customers that require scripts to write into databases, and we can inject those into the platform and expose them as services. So no system is out of reach.

    Karan Munalingal • 48:51

    Excellent. Thank you, Joksan, for those responses. I think we’re at time for this webinar today, so I appreciate everyone tuning in for what we covered. From our standpoint, our goal is to help all utility organizations move across the journey from no automation to agentic operations. And hopefully the proof was in the pudding in what Joksan was able to demonstrate.

    Karan Munalingal • 49:16

    But we definitely have a lot of existing utility customers leveraging our capability at scale, both on the OT and the IT side to meet their business objectives. So with that, we’re going to end this webinar for today, and hopefully everyone has a great day. Thank you. Thanks, Joksan. Thanks, everybody, for tuning in.