Close the Loop: Lumen’s Journey to Safe Autonomous Network Operations

Lumen’s journey toward safe autonomous networking shows what happens when evidence-based AIOps is paired with disciplined, governed orchestration. By clearly separating decision-making from execution, Lumen dramatically reduced alert noise, accelerated root cause analysis, and enabled audited actions that can safely run machine to machine. The result is a practical, measurable path to closed-loop operations and steady progress toward Lumen’s North Star of 80 percent machine-to-machine interaction.

This webinar builds on the published Lumen case study and extends the conversation to what has changed since. Speakers discuss how Lumen, Itential, and Selector work together to tighten and close control loops, how autonomy is earned through evidence and guardrails, and how emerging patterns such as MCP-enabled, role-aware agents fit into a safe operating model rather than bypassing it.

What We Discuss

  • How Lumen reduced over a billion noisy events into explainable, actionable signals
  • Why clear swim lanes between AIOps and orchestration are critical for safe autonomy
  • How Lumen measures closed-loop outcomes in business and operational terms
  • What “earned autonomy” looks like in practice, including where humans stay in the loop
  • What has changed since the case study, including MCP, agentic patterns, and trusted tool calling
  • Practical lessons operators can apply on their own path to safe autonomous networking

Why You Should Watch

  • Learn how Lumen turns AI-driven insights into trusted, auditable action
  • See how separating decisions from execution builds confidence and control
  • Understand how autonomy is earned with evidence, guardrails, and measurement
  • Explore where MCP and agent-based models fit in a safe, operator-led approach
Demo Notes

    (So you can skip ahead, if you want.)

    01:39 Lumen Network Infrastructure Overview
    05:08 Automation Journey Foundation
    10:22 Workflow Development & ROI
    12:04 Safe Autonomous Implementation Strategy
    20:01 AI Generative Technology Overview
    25:22 MCP Protocol & Observability
    36:39 Agent Builder Development
    45:08 Trusted Determinism Discussion
    53:18 Wrap Up & Future Vision

Transcript

    Scott Robohn • 00:00

    Hello, and welcome to today’s webinar, Close the Loop: Lumen’s Journey to Safe Autonomous Network Operations, sponsored by Itential and Selector. I’m Scott Robohn with Solutional, your moderator for today. We have an excellent panel with us today to discuss Lumen’s journey toward safe autonomous networking, and we’ll cover what happens when evidence-based AIOps is paired with disciplined, governed orchestration. Before we dive in, let me introduce our excellent panelists. We have Greg Freeman, Lumen VP of Network and Customer Transformation; Varija Sriram, Selector VP of Data Engineering; John Capobianco, Itential Head of AI and Developer Relations; and Karan Munalingal, Itential SVP of AI Strategy and Innovation.

    Scott Robohn • 00:49

    Here’s how we’ll proceed today. We’ll take some time for Greg to summarize what he and his team have done over the past several years within the Lumen network. Then we’ll take some time for updates on what’s happened in the six to nine months since the production of the case study on Lumen’s transformation journey to date. That’s going to lead us into what the future holds for Lumen, and for Selector and Itential, both as technology partners of Lumen and as broader players in the ecosystem of skyrocketing AIOps opportunities and activity. So, all that being said, Greg, welcome. Why don’t you start off and tell us a little bit about Lumen Technologies? You get to brag about your network infrastructure.

    Scott Robohn • 01:38

    Don’t hold back.

    Greg Freeman • 01:39

    Don’t hold back. No, I think, Scott, appreciate it. And just so good to be with you and this distinguished panel today. So, I am here at Lumen Technologies, and Lumen is a technology company with deep roots in telecom. And so, with that, we started out about six years ago, in 2020, on a journey. We wanted to pivot to more automation and more machine-to-machine interaction with the Lumen network. Lumen has a number of networks.

    Greg Freeman • 02:04

    We have a number of acquisitions, and there’s quite a bit of complexity in the system. But Lumen is the number one connected autonomous system on the planet. You can Google what’s the world’s largest ASN, and you’ll see that AS3356 is the most interconnected. We have several hundred thousand long-haul fiber miles in the ground, and we’ve committed to putting in millions more; if you count all the fiber in the metro and long-haul, we have millions of miles of fiber. We have several hundred thousand buildings that are on net for commercial customers to develop. And we still have a good amount of residential customers on the CenturyLink side.

    Greg Freeman • 02:43

    So, Lumen is a large-scale network. There is some complexity there. We’ve been working diligently to reduce it. And it’s just one of the fundamental connectors of telecommunications across the planet. So very proud of the network, very large network, and really happy to be here at Lumen.

    Scott Robohn • 03:04

    Your comment on reducing complexity, you know, you’ve had multiple acquisitions, networks that have been merged together. I’m sure that presents some challenges. Let me just ask you to give a little preview on how the complexity provides barriers to actually driving automation and orchestration, if you would.

    Greg Freeman • 03:21

    Yeah, no, I appreciate that. We really do have four primary networks that we care for and feed. So we still own AS1, although we don’t do a lot with it today. Love throwing that out because it’s kind of neat. AS1 is number one. Number one, that’s right. 3356 is the most interconnected.

    Greg Freeman • 03:40

    3549 is our VPN network. AS209 is our residential network. And we still have some assets from our legacy Savvis network, AS3561. And then we have some of our TW Telecom assets, AS4323. And so when you think about all of those various autonomous systems, all of those went in with a different set of standards. And we have collapsed some of those standards. But if I just log into some of those nodes, I can typically tell which ecosystem they came from.

    Greg Freeman • 04:11

    There are still some differences. So to your point, as we begin to code and we begin to automate those things, those differences do create some additional work for us. And so what we typically do is we don’t try to boil the ocean all at once; we want to take a CI/CD, a continuous improvement, approach and just have a standard where we bring in automation maybe for 3356. We get that working well, and then we’ll apply it to 3549. We get that working well, and we’ll apply it to 209, and on and on. And so we confront reality, we start where we are, and we edit and code based off of that.

    Scott Robohn • 04:52

    So early on, you know, you figured out you needed to make this all work more automatically. Why don’t you take us back to the beginning and give us a short overview of what you saw and how you got this effort started?

    Greg Freeman • 05:08

    Yeah, thanks. So as we look back to 2020, we were looking at our networks and all the complexity and the legacy mergers and acquisitions. And when we looked at our reliability, the number of human errors that we had in the network, and just how we were going to be able to scale with the number of people, we knew we had to change things. And so what we did was we said, we’re going to start with our people and we’re going to pivot our culture. And effectively, if you think automation first, you have a place here. We want you to be a part of this.

    Greg Freeman • 05:39

    And so, what we were doing is we were pivoting our employee base. We may have called it, maybe not intentionally at the time, NetDevOps, but it was effectively network engineers or technicians with some IT fluency. As long as they knew a little bit about those IT systems and they had this aspiration, we wanted them to be part of our movement. And so, we started reskilling our people to empower them with different skill sets, a lot of automation and orchestration. At the time it was machine learning primarily, but we also had a stream for AI. And then we started just automating to get to that 80% machine-to-machine by 2025. And so, that started us on the journey that we’re on today.

    Scott Robohn • 06:23

    There’s one term that really hit me as we pulled the case study together. Yeah. Citizen developer.

    Greg Freeman • 06:29

    Citizen developer.

    Scott Robohn • 06:31

    Talk about that concept and how you took network engineers who were not automation interested, let alone automation literate, and brought them into a community with folks that were more automation and more software development literate.

    Greg Freeman • 06:45

    Yeah, no, I appreciate that. The citizen developer was a key concept for us because we couldn’t just go out and hire all new people to try to code our network. That doesn’t work. We need people with domain expertise, people who are in the business, who see problems that we have on the network, see the problems our customers have. And we wanted to embrace them. And so we did start out with people who might have the attitude or the aptitude to do citizen development. And what we wanted to do was create a safe mechanism that they could come into the program.

    Greg Freeman • 07:19

    We would train them in some of these IT fluent type of programs. We were setting up a platform as a service, an ecosystem that they could all work in. We have some of the partners of that ecosystem with us here today. And then we continuously improved their skill set and we recognized and we changed that culture. When we would have employee meetings, we would always talk about here’s some of the great work that we’re doing in our transformation in this machine to machine. And so that citizen development nurturing, critical for the program.

    Scott Robohn • 07:55

    I’m going to ask Karan to comment on this. Of the other panelists, I know you’ve been involved in this the longest. What’s your commentary on some of these big changes that you saw from your vantage point? I know I’m putting you on the spot, but tell me, is Greg making all this up, or are you nodding your head, or do you have some different takes on it?

    Karan Munalingal • 08:17

    No, 100%. So from my perspective, working with Greg and his organization in service assurance, I think the term citizen developer, the approach that he took to bring the rest of his organization on this journey around automation first, was very different than some of the other customers that I’ve worked with. Namely, the first thing he mentioned to all his folks is: hey, I don’t care how you automate. Just understand that the work that you’re doing manually today needs to be done by a machine with a lot of guardrails in place. So bring whichever technology you want to the table, and see what difference it makes. And with respect to that, I know Greg mentioned the platform approach.

    Karan Munalingal • 09:03

    He mentioned platform as a service specifically. It’s because instead of having siloed efforts across various different teams, I think his mission was: if we provide you with the platform and we also enable you to use any strategy, whether it’s Ansible, Python, low-code, high-code, it’s a combination of what an individual feels comfortable with. But at least have the automation-first mindset. So I think that itself was a game changer, not only to get one person on the boat, or two, but I believe, Greg, it’s what, 250-plus folks at Lumen who are actually automating and orchestrating today. Right. So, Scott, what I saw as an approach was transformational: not, hey, we need to move from no automation to automation, but you’re bringing along the folks who will see the value and will actually contribute more.

    Scott Robohn • 10:02

    That takes me to doing the hard work of workflow identification and taking things a step at a time. Greg, can you just speak to that? And use the numbers, number of workflows identified, how you’ve made progress in getting started, but still have a ways to go, right?

    Greg Freeman • 10:22

    We do. Yeah, so when we started, I was thinking more of cultural wins right out of the gate for the first few months. So when we brought our engineers and these citizen developers in, we said, what is some pain? What is some toil that you have every day? What would you like to automate, orchestrate? And so we celebrated those, and it was primarily for a cultural win. And so by the end of our first year, we had just 16 workflows in production.

    Greg Freeman • 10:49

    And a workflow can be a number of automations that we put together to complete a business task. We had 16 of them. And we didn’t exactly know what key performance indicators we were going to be using when we started. We were going to be looking from a management side at what’s our return on investment, but that was really never talked about in that first year, probably first couple of years, with our engineers. It was more about what’s some toil you want to eliminate. So today, as of this recording, we have 371 workflows in production. Those 371 workflows now run at a pace of a little over 5 million times per year.

    Greg Freeman • 11:29

    So if you do the math, almost 10 workflows are launching every minute somewhere in the Lumen ecosystem, across the various Lumen networks. It doesn’t even have to be IP networks. It can be our transport network, our metro network, our IP or VoIP, all these various entities. We’ve been increasing exponentially. It was 16 after that first year. The next year, we had a little over 32. We effectively doubled it.

    Greg Freeman • 11:54

    And then that third year, we had 100. And so there was an inflection point that we finally hit where things really began to gel with the team.
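As an editorial aside, the run-rate arithmetic Greg quotes is easy to sanity-check. This is a quick illustrative sketch using only the figures from the transcript:

```python
# Sanity-check the quoted run rate: a little over 5 million
# workflow runs per year across the Lumen ecosystem.
RUNS_PER_YEAR = 5_000_000
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

runs_per_minute = RUNS_PER_YEAR / MINUTES_PER_YEAR
print(f"{runs_per_minute:.1f} runs per minute")  # ~9.5, i.e. "almost 10"
```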

    Scott Robohn • 12:04

    Okay, cool. I want to talk about the idea of safe autonomy here. You’re a service provider network. You have responsibility for reporting outages to governmental entities. You have federal government customers. So you’ve got to be very, very careful on how stuff gets implemented. Talk about your approach to determining, okay, this is ready to be turned on in an autonomous way in the network.

    Scott Robohn • 12:33

    And I want to bring in the roles that Selector and Itential have played on this in particular. So talk about safe autonomy, and then let’s talk about the swim lanes that your technology partners provide.

    Greg Freeman • 12:45

    Okay, yeah, that’s a great call out. So you think about our ecosystem. We have 911 riding on our network. And when that’s down, people’s lives are at stake. And so we take that seriously. And so when we think about network automation and orchestration, what we wanted to do was start with small atomic workflows. And so we didn’t boil the ocean.

    Greg Freeman • 13:09

    We didn’t set out to say we want this one workflow to work across every single network, as I mentioned earlier. And so we started out in the early days with one configuration setting that we were looking at that was causing us outages: our min-links setting, for those of you in the network world. With min-links, we would oftentimes erroneously have outages because of an incorrect setting, or maybe it was changed at one point, or the business changed the logic on us. So we used our orchestrators to change that. The system would look for a change that was out of standard. Once it was deemed out of standard, the platform could trigger off an alarm, and the orchestrator would go out to the network and just change it.

    Greg Freeman • 13:57

    We ran that. We had an engineer in the loop, a human in the loop, until we ran that well over 100 times. And we wanted to have zero defects for that. And once we saw zero defects out of those manual runs, it was actually a little more than 100 times that we ran it, then we just turned it on autonomously. So, why do we need a human to click the button? So, that’s the approach: start where you’re at, have the human in the loop, small, composable, then make it autonomous. For the providers, we think of Selector as the brains, if you will, helping show us signals or triggering mechanisms that would show us something that’s incorrect in the network.

    Greg Freeman • 14:41

    And then, Itential is the hands, the arms, the body that goes out and actually logs into the network to do that.
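The earned-autonomy gate Greg describes, supervised runs until a zero-defect track record, then machine-to-machine, can be sketched in a few lines. The 100-run threshold comes from the transcript; the reset-on-defect policy and all class and method names are illustrative assumptions, not Lumen’s actual implementation:

```python
AUTONOMY_THRESHOLD = 100  # zero-defect supervised runs required (per the transcript)

class RemediationWorkflow:
    """Hypothetical sketch of a workflow that earns autonomy over time."""

    def __init__(self, name):
        self.name = name
        self.supervised_successes = 0
        self.autonomous = False

    def requires_human_approval(self):
        return not self.autonomous

    def record_run(self, defect_free):
        if self.autonomous:
            return
        if defect_free:
            self.supervised_successes += 1
        else:
            # Assumed policy: any defect resets the evidence counter.
            self.supervised_successes = 0
        if self.supervised_successes >= AUTONOMY_THRESHOLD:
            self.autonomous = True

wf = RemediationWorkflow("min-links-remediation")
for _ in range(100):
    wf.record_run(defect_free=True)
assert not wf.requires_human_approval()  # autonomy earned after 100 clean runs
```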

    Scott Robohn • 14:49

    Not one being more important than the other. If you want to get anywhere, you need both, right? We could throw feet into the mix too, but I’ve already ruined the analogy for us there. But let’s ask Varija: you know, there are challenges in plowing through data and alerting here. So, Varija, why don’t you expand on that a little bit from your perspective and some of the issues that you saw along the way?

    Varija Sriram • 15:16

    Absolutely, Scott. And I think, as Greg has mentioned, Lumen is such a vast, complex network. Having to simplify observability on such a vast network, and, for the example that Greg mentioned, finding the non-compliant min-links configuration and then deciding and detecting when to trigger that action, right? That is really what Selector’s core value proposition is. We are able to collect all the network indicators, detect such anomalous conditions, and trigger the action. Definitely, there are certain challenges in this process, Scott, and it’s not as simple as it may sound, because the decision making greatly depends on data quality and integrity. Good data results in automating workflows; bad data results in automating mistakes.

    Varija Sriram • 16:10

    So, we do not want to make decisions on inaccurate data. That is also where, as Greg mentioned with the citizen developers, we have worked very closely with Lumen’s team to ensure that the data collection, the data accuracy, and the data integrity are maintained with frequent updates, making sure there are no data gaps, and making the right decisions. And I would also like to mention that it is not just the AI and automation that enable the closed-loop remediation and automation workflow. It is a strategic partnership, and the vision that Greg had for Lumen’s network observability and workflow automation. It is a strategic partnership between the teams that is able to identify, decide, detect, and take action. Scott.

    Scott Robohn • 17:05

    Yeah, that came through very clearly in many of the case study interviews and discussions. You know, Greg, you defined the roles that you need, those swim lanes, and then Itential and Selector came along and cooperated within those swim lanes. I didn’t hear one person complain about it in any way, shape, or form. So that’s pretty awesome to see. Greg, I want you to comment, as we kind of close out this part, on what has happened. You had an approach to looking at ROI for workflows. Could you just lay that out for people too?

    Scott Robohn • 17:44

    That’s a really important concept that we can go beyond just hours saved.

    Greg Freeman • 17:50

    Yeah. Probably in year two is when we started really looking and being more thoughtful there, after we had some of the cultural wins. So two different approaches we came up with. One was just OpEx avoidance: how much operational expense are we avoiding by having this automation? So what we did inside of our platform as a service is we started every workflow with a stamp and a dollar value for that individual workflow.

    Greg Freeman • 18:17

    We have 371 of them. Not every workflow costs the same. Some are super complex and save a lot of money when they run. Others are very atomic and not as much. So we put a dollar value on that. And then the system automatically calculates: if we ran this workflow 100 times and it maybe saved us $100 each run, then it’s just a straight 100 times $100 in savings. So it calculates that as OpEx avoidance, something we would have done that this automation is now doing for us.

    Greg Freeman • 18:46

    But if you think about it, when we’re making the network more reliable, when we’re avoiding these human-caused outages, when we’re detecting things, maybe with our machine learning, that we wouldn’t have seen before, when we’re being proactive, now we’re creating tremendous value in the network. So the second metric we use is called value creation. And value creation uses that same workflow example. Now that I have a platform that can run it every day, all the time, maybe I run it a million times a year. Well, I don’t get $100 times a million for operational avoidance, but I do create some value. And so we have that calculation as well: here’s how much value we’re creating in the network with all these various entities.

    Greg Freeman • 19:38

    So OpEx avoidance and value creation.
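A minimal sketch of the two metrics Greg lays out, OpEx avoidance and value creation. The $100 per-run stamp and the million-run count come from his example; the split between runs a human would actually have performed and the discount applied to the extra runs are purely illustrative assumptions, not Lumen’s formula:

```python
def workflow_roi(per_run_value, total_runs, baseline_human_runs):
    """Return (opex_avoidance, value_creation) in dollars for one workflow.

    per_run_value       -- the dollar "stamp" on the workflow
    baseline_human_runs -- runs a human would have done anyway (assumption)
    """
    opex_runs = min(total_runs, baseline_human_runs)
    extra_runs = total_runs - opex_runs
    opex_avoidance = opex_runs * per_run_value
    # Extra runs create value, but not at the full avoided-labor rate;
    # the 0.10 discount factor here is purely illustrative.
    value_creation = extra_runs * per_run_value * 0.10
    return opex_avoidance, value_creation

# A workflow stamped at $100/run, run a million times a year, where
# humans would only ever have run it 100 times:
avoided, created = workflow_roi(100, 1_000_000, 100)
print(avoided, created)  # 10000 9999000.0
```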

    Scott Robohn • 19:42

    Those automations become enablers. When I could only do it once or twice a day before with a human operator, now I could do it 100 times a day. I have brand new capabilities, right? That’s like a clear win there. And that’s got to be a consideration for ROI for sure.

    Greg Freeman • 19:59

    That’s right.

    Scott Robohn • 20:01

    So, people can go read more on the case study. There’s so much that we’ve just touched the tips of the waves here. But in the last six to nine months, let’s just say that AI has entered the room and there are places to go that we’re all looking at both individually and together. Greg, I want to start with you on this, where, okay, you did things the old-fashioned way, you know, as it were. No disrespect to the tools that were available at the time when you started out. But now you’ve got new tools available mid-project, mid-multi-year project. Where are you going?

    Scott Robohn • 20:44

    Where do you want to take this with the tooling that’s coming? I know you have some fun examples that you’ll talk about. And don’t worry, Itential and Selector, you’re going to be next for, okay, where do you think you’re going on this as well? So, Greg, you start us here.

    Greg Freeman • 20:58

    Yeah. Realistically, we’ve been on the AI journey for a while, but it was classical machine learning. And so, now with the generative and agentic AI that’s come on the scene, that’s opened the door to a number of interesting capabilities for us. So, what we’ve been doing is we’ve been taking some of our cycles, and we still want to continue our orchestration practice, and we are looking at some AI tooling there, with some of the partners here, to help speed up those workflow deployments. However, we also want to use AI to help us talk to our tasks, to have an almost GPT-like experience where we can now interact with our network, with our tool sets, with all the support stack that we want.

    Greg Freeman • 21:45

    So, we have been investing heavily with our development, with our citizen developers in this space. We’ve created a number of different tools that help us internally. And more recently, we’ve been applying that same internal logic to customer-facing tool sets with the AI, to be able to look at our network, maybe get forensics, get updates for some of those features. And we’ve got some interesting things coming with our external customers as well for AI.

    Scott Robohn • 22:16

    Yeah, the ease of interaction, you know, for the end user, you know, if I can just talk to something that’s really going to understand me or put it in an email, that’s a great ease of use mechanism there for sure. But, you know, John, you haven’t had a chance to chime in just yet. So I’m going to say, okay, you’re out there experimenting and doing some really interesting things here, and you’ve been involved in this project in multiple roles. Where do you see the huge opportunities here for moving things forward with AI tooling beyond machine learning?

    John Capobianco • 22:55

    Well, I think it rhymes with the automation journey. I think it really does.

    Scott Robohn • 22:59

    It’s a great way of putting it, by the way. I love that.

    John Capobianco • 23:02

    Well, I think we’re going to start with low-risk human in the loop, humans approving the agent’s actions, moving into human on the loop, and ultimately to full autonomy, right? I really love the use case and the joint work between Selector and Itential and Lumen. We identified it quite early, I would suggest maybe four months after MCP was actually released as a protocol by Anthropic. I think all three of us really looked at each other to say, what can we do with this protocol? How can this help all three of us succeed? How can we use this to augment, and not throw away, existing work?

    John Capobianco • 23:45

    And that’s sort of the journey that I think all three of us have been on together. And the fact that Selector and Itential work together, I think initially it was through webhooks, Varija, but we quickly refactored that to be MCP, to be a little bit more flexible, a lot more natural language driven. So I think we’ll move in a similar fashion. And we started with a relatively low-risk case: let’s shut a port or no-shut a port based on Selector identifying, with a high degree of certainty, that that port being shut has introduced a problem, to kick off a deterministic workflow in Itential to go and deal with that interface and all of the surrounding ticketing and alerting and notification and pre- and post-testing, and feeding this through a loop a few times so that Itential actually feeds back to Selector to say, thank you for letting us know there was a problem, we believe we’ve resolved it, let’s reinterrogate the problem, right?

    John Capobianco • 24:48

    So, there’s a lot of guardrails involved here to be truly autonomous and out of the loop. And I think in this era of agents, we’re all trying to figure it out. We’re all trying to see where it applies, where the value of reasoning and tool calling can complement something like a deterministic workflow, but without having Greg rewrite 360 workflows from scratch to be agentic, right? There has to be a smooth path forward.
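The loop John walks through, high-confidence detection, deterministic remediation with ticketing and pre/post checks, then re-interrogation of the original signal, might look roughly like this. Every name, threshold, and callback below is a hypothetical stand-in, not a Selector or Itential API:

```python
CONFIDENCE_THRESHOLD = 0.95  # illustrative "high degree of certainty" bar

def close_the_loop(event, detect_confidence, remediate, reinterrogate,
                   max_iterations=3):
    """Run remediation until the observability layer confirms the fix."""
    if detect_confidence(event) < CONFIDENCE_THRESHOLD:
        return "escalate-to-human"   # not confident enough to act alone
    for _ in range(max_iterations):
        remediate(event)             # deterministic workflow + ticketing
        if reinterrogate(event):     # observability re-checks the signal
            return "resolved"
    return "escalate-to-human"       # guardrail: bounded retries

# Toy usage: a port-down event that the first remediation pass fixes.
state = {"port_up": False}
result = close_the_loop(
    event="GigabitEthernet0/0 down",
    detect_confidence=lambda e: 0.99,
    remediate=lambda e: state.update(port_up=True),
    reinterrogate=lambda e: state["port_up"],
)
assert result == "resolved"
```

The design point mirrored here is that the agentic layer only decides whether and when to invoke a deterministic workflow; the workflow itself stays unchanged.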

    Scott Robohn • 25:22

    Well, let me ask Varija on this front. So, that very rapid transition from webhook integration to MCP and some of the data collection, data quality issues, what do you think the challenges are in the near term and beyond from an observability platform perspective? And tell us some of the things that you’re bringing to address those.

    Varija Sriram • 25:46

    Absolutely, Scott. And one thing where I completely resonate with Greg and John is the fact that it has to be agentic. It has to be an MCP way of invoking these workflows. Going to a dashboard and viewing things is getting archaic. And I think, as Greg mentioned, the agentic development that’s happening at Lumen means they’re able to make machine-to-machine calls into Selector or Itential to perform some of these automations. So, coming to your question, Scott, about the challenges: just getting the observability aspect right, with all the complexity involved in some of this decision making, is not as easy as it sounds, right?

    Varija Sriram • 26:37

    Data integrity is involved. Being able to identify the right detection point is very important. So, for the challenges we see, I’ll take the example of a chronic flap issue, one of the workflows that we have automated at Lumen. The workflow is that when a critical interface is flapping and Selector detects that it has been ongoing, that is where we trigger a diagnosis workflow and a remediation workflow toward Itential. Now, if the quality of data is not good, and there are data gaps happening, and Selector misinterprets that data gap, and we go ahead and give a directive to reset a critical port, that is actually going to cause further damage rather than remediate the issue. So, I think those are some of the challenges where we have to be absolutely sure of the right indicators, the right identification, before we even trigger some of these workflows.

    Varija Sriram • 27:47

    And that is where Selector is investing more into reason-based AI and a copilot. So, yes, Selector has identified a critical issue or a correlation and has triggered a workflow. But what is the reason behind it? And the end user, in this case the Lumen team, can actually question it and say, was this the right action? Was this the right remediation? And have a closed-loop validation: yes, the action was taken, but did that really correct the outage we had, and did that close the loop on the automation? So I think a reason-based copilot is definitely the way forward in this, Scott.

    Varija Sriram • 28:32

    And going back to our fundamentals: the right network indicators, the right anomaly detections, the right correlations are something that we absolutely cannot forget. That’s our fundamental layer. That’s the fundamentals of observability. But beyond that, understanding and validating the workflow comes in handy to make sure that we are making the right calls, and that the brain is acting the right way and not going off the guardrails.

    Scott Robohn • 29:02

    For sure. You also expanded a little bit on observability information on the workflows themselves and correlations between multiple workflows that might fail. Can you talk about that a little bit as well?

    Varija Sriram • 29:15

    I would love to, Scott. So that’s another thing that we have worked on after the case study went out: Selector is now adding an observability layer on the workflows that have been automated between Lumen and Itential. That is going to give us a view of all the workflows that are running and what the latency of these workflows is, whether there is a choke point or a bottleneck where these workflows are waiting to interact with another MCP tool, or they are stuck at a certain point, or they’re not able to get into the network or log into certain devices. So, we are able to identify where the workflows are getting paused, or even if there are failures. Of course, they’ve been automated multiple times; we don’t expect to see failures, but if there are failures, is there a correlation between the jobs that are failing?

    Varija Sriram • 30:10

    That will help us improve the validation of the workflows and basically improve the performance of the automation.

    Scott Robohn • 30:20

    I do want to move on to what’s happening with the Itential platform as well. But, Greg, any commentary on how you see yourself consuming and employing these advances in observability, on workflow observability in particular?

    Greg Freeman • 30:38

    Yeah, that’s one we’ve been working on for a bit. Because if you think about 5 million runs on the network, and it’s growing, yeah. We’ve been measuring our error rate. We have it at less than 2%. But having that causation, not just correlation, I mean, we can correlate a lot of different events that happen, but really, what’s the underlying cause? That’s what we’re working on. And so we want to continue to make all of our workflows even more resilient. An error rate of less than 2%.

    Greg Freeman • 31:08

    It’s good. That’s what we set out for. We had it much higher initially. In some months, it was higher, 6%, 8%. We’re down below 2%. So those are exactly the things that we have to do to maintain high availability. Very good.

    Scott Robohn • 31:25

    Well, let's pull on the agentic thread, if you will. And let me set this piece of the conversation up this way. I think I've hinted at it before. You know, that workflow definition, it's not exciting work. But Greg, in his extensive leadership capabilities, got lots of people to identify and write down details of workflows, extract them from MOPs, and figure out what they do on a day-to-day basis. I know you went through multiple tools as well to try and say what would make the most sense. But now there are some new interesting capabilities that can actually accelerate workflow definition, automation, and orchestration.

    Scott Robohn • 32:10

    And I don't know who to, you know, tag first, Karan or John, on this, but I want to hear what Itential is doing in this space and how this is going to accelerate and speed up this part of automating and orchestrating my network. I'll leave it up to you two.

    Karan Munalingal • 32:29

    Yeah, I'll take the first step. So, with the advent of AI, right? Like, as John, Greg, and Varija mentioned, I think AI came into everyone's life super swiftly in the last 18 months. But before I go into what providers like Itential and Selector are doing, what has not changed for Lumen and Greg is: you can still break the network. The resiliency is still a requirement. So, I think this brings us back to that point: you can break the network. It is in the realm of possibility, but you should not.

    Karan Munalingal • 33:04

    And that is a hardcore requirement. So, with that in mind, Scott, I think as the platform provider, and Greg referred to us as the arms and the legs that actually make changes and update things across their infrastructure, it becomes very critical for us to securely leverage our integration points into their technology. But how do we also layer in reasoning, as John mentioned, right? The power of reasoning comes into the picture where you have to strike the right balance: are you leveraging the deterministic automations and execution capability within the network? But at the same time, can I now enable the 250, but more so the thousands of people at Lumen, to automatically enrich tickets, right, based on what is actually happening in the network? And this is where, you know, partnership with

    Karan Munalingal • 34:03

    someone like Selector comes into play, because they are also improving reasoning to better provide RCA and signals. So, once we get that signal, as a platform provider whose focus is automation and orchestration, how are we providing that deterministic layer, in combination with the reasoning involved, to accelerate how you can do this across multiple different vendors, multiple different environments, right? Because I think where AI is going to play a huge part, not only for us as software providers but also for someone like Lumen, is: can they take full advantage of multiplying the effort? Because they already built 370 orchestrations. And as Greg mentioned, our first goal is to do that against certain vendors. Can we now leverage agentic AI to accelerate that across all vendors? Because that's what reasoning brings to the picture.

    Karan Munalingal • 35:01

    So, I think from our standpoint, we're doing quite a bit within the platform to adopt reasoning at the highest level, because foundational models will help you with that, to say, can I potentially eliminate some of the business logic? It can reason through that error handling, reason through all of that. But when it comes time to make the change, we're still going to rely on the combination of automation assets, whether it's low-code workflows or Python scripts or direct APIs. So, we see this as an opportunity to strike the right balance and move forward with introducing AI, not only in our own software but in customers' environments, so we can continue upholding the autonomy, the safe autonomy that you mentioned early on, Scott. That is going to become even more critical because agents can now think for themselves when they're reasoning. So, the guardrails now become ultra-important, and human in the loop becomes ultra-important moving forward, across every technology that Lumen is interacting with.

    Scott Robohn • 36:10

    Could I ask you or John to maybe take us through an illustrative example of the effort associated with automating a workflow the old-fashioned way and now employing agents from Selector and from Itential that will reason together to really reduce the cycle time to turn a workflow into an automated workflow? How would you address that?

    John Capobianco • 36:39

    Well, we're very much offering the same experience that our own developers went through to incorporate MCPs ourselves. It's just in the form of an agent builder, right? So, what I think is interesting is striking the balance of determinism and reusing existing work. We can build an agent. It comes with the system identifier, you know, the persona of this agent, and then the user identifier, what its tasks are going to be. And then we can attach MCPs like Selector. We could, from this agent, directly tap into the Selector RCAs and give the agent the ability to query Selector through natural language.

    John Capobianco • 37:20

    That agent can just speak to Selector without any Swagger documents or special Python. That's the power of the reasoning, right? With the combination of the Selector tool and the reasoning, that agent will figure out how to interface with Selector. Now, if we want to layer on, I think where it gets really powerful is when you sort of create a web of these tools. So, include your ITSM, maybe ServiceNow, as an MCP attached to this same agent. The agent can run Greg's deterministic Ansible flow that he's invested a lot into and has perfected. That gets attached to the agent, along with the Selector MCP, the ServiceNow MCP, a Slack MCP, an email MCP. Well, now suddenly you have a digital coworker.
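
For readers who want to picture the shape of this, here is a toy sketch of the pattern John describes: an agent defined by a persona and a task, with tools attached to it. It is not Itential's agent builder or a real MCP client; the tool names and return strings are invented, and a real agent would let the LLM choose tools and arguments from natural language rather than dispatch directly.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    persona: str   # the "system identifier": the agent's role prompt
    task: str      # the "user identifier": its standing task
    tools: dict[str, Callable[..., str]] = field(default_factory=dict)

    def attach(self, name: str, fn: Callable[..., str]) -> None:
        """Register a tool; an MCP server would expose these over the wire."""
        self.tools[name] = fn

    def call(self, name: str, **kwargs) -> str:
        # A real agent would reason over which tool to use and how;
        # here we dispatch directly to keep the sketch small.
        return self.tools[name](**kwargs)

# Hypothetical stand-ins for the Selector RCA and ServiceNow MCP tools.
def selector_rca(device: str) -> str:
    return f"RCA: interface flap on {device}"

def servicenow_ticket(summary: str) -> str:
    return f"INC0001 opened: {summary}"

agent = Agent(persona="network operations digital coworker",
              task="investigate alerts and file tickets")
agent.attach("selector.rca", selector_rca)
agent.attach("servicenow.create", servicenow_ticket)

rca = agent.call("selector.rca", device="edge-router-7")
print(agent.call("servicenow.create", summary=rca))
```

The "web of tools" effect comes from attaching more tools (Slack, email, a deterministic Ansible flow) to the same agent, so one reasoning loop can chain them.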

    John Capobianco • 38:12

    It goes much beyond just a playbook that's running, where you have to Jinja template every message, or where there's a lot of, you know, human in the loop required for filling out tickets, filling out the message that goes through Slack, Jinja templating and JSON files and YAML. All of that friction is smoothed away with the new agents. And I think all of us on this call have been paying very close attention. The capabilities of models like Codex 5.3, Claude 4.6, Gemini 3.0, these are radically different from ChatGPT 3.5. Okay. We've made probably 30 years of progress, it feels like, in the last six months, in terms of the actual capabilities that these agents can have.

    John Capobianco • 39:01

    So that, that, generally speaking, increases trust, it increases accuracy, it increases the quality of these outcomes. And we’re still just kind of very early in this, right? We’re all sort of figuring it out, but the promise is there, right? I mean, Greg is not going to be able to scale biological beings to fulfill demand, right? So that previously was how can we do this through workflows? I think more and more it’s going to be how can we do this agentically, right?

    Scott Robohn • 39:35

    Well, I’ll throw that back at Greg. Or sorry, Karan, did you want to comment on that?

    Karan Munalingal • 39:39

    Yeah, I just wanted to add one more thing, right? So if you look at a citizen developer that went through their training of, hey, I'm doing things manually, to now I'm writing Python scripts or developing low-code orchestrated workflows, to now the agentic land. So if you look at the before and after, the development methodology changes slightly, right? Before, you had to make sure that certain things were within your workflow, within your process, namely, and I mentioned this early on, things like error handling, if-else. So, for the longest time, because we did not want to break the network, and we don't want to break the network ever, there are a lot of situations that you have to handle when you're building out these orchestrations. If this happens, you do this.

    Karan Munalingal • 40:31

    If that happens, then you do this. With the advent of agents and reasoning coming into the picture, as a citizen developer, now I get to shift to language and words and intent. So, I can come in and say, I still want you to go do this specifically in the network, but now you try it three times before you write an incident or create a ticket and assign it to somebody, right? So, this is an adaptive mechanism to make sure that if business processes change over time, you now have your pair programmer, which is your agent, to help you with that. And I just wanted to add that because, as a citizen developer, that is a before and after for me if I'm doing this work.
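
Karan's "try it three times before you create a ticket" intent maps onto a simple retry-with-fallback pattern. This is an illustrative sketch, not Itential's implementation; the workflow and ticket functions are hypothetical stand-ins for a deterministic automation and an ITSM tool.

```python
def run_with_retries(action, max_attempts=3, open_ticket=None):
    """Try a deterministic workflow up to max_attempts times; if every
    attempt fails, fall back to opening a ticket instead of crashing."""
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return action()
        except Exception as err:
            last_error = err
    if open_ticket is not None:
        open_ticket(f"workflow failed after {max_attempts} attempts: {last_error}")
    return None

# Hypothetical flaky workflow: fails twice, then succeeds on the third try.
calls = {"n": 0}
def flaky_workflow():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("device login timed out")
    return "config pushed"

result = run_with_retries(flaky_workflow)
print(result)  # -> config pushed
```

The point of the agentic version is that this policy is expressed in plain language ("try it three times, then open a ticket") rather than hand-coded into every workflow's branching logic.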

    Scott Robohn • 41:18

    That pair programming comment that you made, I've seen that as a very successful approach in many networks driving automation. And now your doppelganger on the development side can become an agent. That's super valuable. So, Greg, you're the convergence point here. And let me ask you to think about it this way and respond. You talked about, I can't remember whether it was year two or year three, where you saw an inflection point, when you really started to see a ramp on getting workflows automated.

    Scott Robohn • 42:03

    And I know you’ve had to allocate resources differently to try and absorb everything that’s changing here so quickly. Do you see another inflection point coming with all this? I know I shouldn’t ask you to predict the future. Your crystal ball is not on screen, so I won’t out you on that. But where do you need to take this? And how are you going to use the tools that are being provided here?

    Greg Freeman • 42:29

    Yeah, and I appreciate that. If I think about it, we've still got 370 deterministic workflows. And so, what we've been doing over the last few months, with some of the MCP and some of the other AI constructs, is using that personally to learn, experiment, and get more familiar with it, but also to add more value and get our employees thinking in that mindset, to allow that AI to call those deterministic workflows. Now, as the AI continues to get better, there will be an inflection point where we allow it to touch the network. I do see, similar to the orchestration journey, there will be an inflection point coming. And so, this year, we've said we want to spend a disproportionate amount of time now on AI.

    Greg Freeman • 43:15

    So, over half of all of our cycles. Before, it was probably 80% orchestration, orchestration, orchestration. This year, we said we want over half, well over half, to be in that AI arena, and then the smaller share on our orchestration. So, when's that going to hit? I suspect it's probably at least another 12 months out for us to really hit that inflection point. But I would say we're making really good progress. And whether it's AI that's calling our deterministic workflows or AI that's helping with some of these administrative tasks, the toil that network operators have to deal with, we talked about ticketing. One of the ones that we use is reason for outage.

    Greg Freeman • 44:00

    That's a very simple example. When a customer has a ticket or a problem on the network, they often ask, okay, send me a summary of what happened. I want it in a reason for outage. That's a great generative AI use case. And when we pair that with some others, we can even make it agentic. So we've been working on that. And so, yeah, I'd say it's probably at least 12 months before we're really, really going.

    Greg Freeman • 44:26

    Things are getting better all the time, to John’s point.

    Scott Robohn • 44:29

    There's a Beatles song about that, if I recall correctly. Sorry, I had to work a music reference in there at some point.

    Greg Freeman • 44:35

    I was thinking, let it be. So there you go.

    Scott Robohn • 44:37

    No, you’re not letting anything be. You’re going to drive, drive, drive this, right?

    Greg Freeman • 44:41

    That’s right.

    Scott Robohn • 44:43

    So we've had some interesting questions come in, and let's touch on a few of them while we still have about 10 minutes left for discussion here. You know, you've all touched on determinism at some level here. And one of our participants asked, how do you trust AI to be deterministic enough? And Greg, I'll ask you to talk about that first, and I'd love to hear the other panelists respond.

    Greg Freeman • 45:08

    How do you let it be enough? I’m here in Phoenix, and I often go to the airport. I ride in a Waymo. Waymos are the self-driving cars. That’s all AI. I trust my life with it. And so, yes, there was a lot of modeling that was done, a lot of experimentation, but over time it’s gotten better.

    Greg Freeman • 45:30

    And so, when I think of applying that to the network, it's very similar to our prior journey. I don't want to unleash it in its current state, right where it's at. I need to continue to experiment, build that trust with it, take small bites at a time. And then, as the models continue to get better, we'll allow it to be unleashed on the network, regaining that trust through every iteration we do. So, one step at a time.

    Scott Robohn • 46:00

    You have to gain trust in any new technology. You don’t download and then click the install wizard without running it through some paces, right? None of this has an install wizard, by the way. I’m dated in making that comment. But for the rest of you, determinism. How have you seen things change? I think we’ve all felt it in our interaction with different models.

    Scott Robohn • 46:25

    What do y’all think?

    Varija Sriram • 46:27

    So I echo what Greg was mentioning. There is no Hollywood AI, Scott. And unleashing the AI on the network is something, because mission-critical services are running on the network, for example, as Greg said, 911 depends on Lumen's network. So you can't just make decisions and unleash the AI on the network. So there's definitely a lot of, I think, models, a lot of learning, machine learning, and then vetting out the system, understanding and validating the outcomes, the root cause analysis, the correlations. And that is where we make it deterministic. So it's not out of the box that it comes.

    Varija Sriram • 47:15

    There's a lot of behind-the-scenes work to make it deterministic, but we are getting there. It is getting there. And one day, yes, just like how Greg trusts Waymo, we will trust the AI in networking to do what it's supposed to do.

    Karan Munalingal • 47:31

    Scott, just one comment from me. I think by nature, you know, when you interact with LLMs for reasoning purposes, it's a non-deterministic communication. Correct. I think everyone understands that. It goes based off of probability and inference, et cetera. But where do we see our customers getting that assurance that it's okay to trust part of that? It's when you have a foundation that is strong enough, secure enough, deterministic enough; then you can start giving some leash

    Karan Munalingal • 48:06

    to the power of reasoning, right? I think someone can think through it, like, yep, I'm just going to reason through the entire thing one day. But essentially, these are critical services that operate the business. So, from our perspective, instead of asking how deterministic AI is, we ask: how deterministic can we make our foundation to support non-deterministic reasoning up top? I think that's going to be the challenge that everyone will have to come forward with, instead of making AI deterministic in itself.

    Karan Munalingal • 48:39

    And this is where, for the agents that John had mentioned, this is where context comes into the picture. What can the agent actually do in the network? You have the guardrails. You have full control to actually do that. I don't know, John, if you have any comments on that.

    John Capobianco • 48:54

    Well, I think it's an interesting question. Two years ago, right, it was, I don't trust AI at all. We think of Will Smith eating the spaghetti, that sort of AI slop era, right? But I think we're closer now, where people like Greg are saying, well, I trust it enough to do low-risk, at least read-only activities, and in some cases, configuration management activities, given the right parameters and given the right inputs and context. I think we're going to be asking that same question of humans in the next year or so, probably. How much longer do we trust a human to be the one operating the network over an artificial intelligence that has vast amounts of data, like in the Selector platform, all of the telemetry, retrieval-augmented generation techniques, MCP techniques? Humans have put a lot of effort into augmenting what is a pretty remarkable technology as a foundation model.

    John Capobianco • 49:54

    Well, now, when we give it external access to other systems, and now that it has its own reflective capabilities to reason, I don't want to suggest that it's better than your average network engineer today, but I think we have to face the reality that these agents will have parity with human-like capabilities probably in the next three years, right, at the pace we're moving.

    Scott Robohn • 50:20

    Yeah. And, you know, just to throw it out there, even if you have a highly certified network engineer, you know, think about CCIEs, JNCIEs, you know, Arista level what, a seven, I believe. No two people certified at that same level are going to know everything in the same level of detail. And there's still some randomness, or gaps in knowledge there, whereas I can build models that are the sum of comprehensive knowledge for any one of those vendor certs, and across different infrastructure providers. And not just IP, but optical information and security information and application performance information. I really feel like we're on the threshold of some really interesting silo- and boundary-spanning here, where we've been limited by human cognitive load. I'm now getting away from the questions and engaging in commentary.

    John Capobianco • 51:19

    Scott, those people are still going to be very much required, in that we need them to build the agents, right? Like, that is ultimately what we want these highly experienced people to be focusing their efforts on now. Yes, still grooming junior technicians and helping human beings. But I think the most optimal thing for someone who's been doing this for 25 years, who has those certifications, is to try to help influence agents and artificial intelligence to reflect your own capabilities, right?

    Greg Freeman • 51:52

    Yeah, I think it's skills. I mean, that's effectively what it is. And really, anything, even what we were doing, we were just giving people a different tool set, different skill sets. So instead of having to craft specific documentation, you're just learning how to have a system craft the documentation. The human is still very critical here.

    Scott Robohn • 52:16

    For sure. And, you know, we talk about agent-boss skills, right? And I think that's a part of what we're talking about here, learning what it means to manage and QA-check results from agents. But I would also go a level higher and say there's an opportunity for us to think more like architects, and to understand what it means to correlate multiple domains of info together and kind of zoom out and have that big picture, still grounded in the BERT tests I used to run on T1s, Greg, and other knowledge accumulated over years. We are about to wind this down. I want to give everybody a last opportunity to say anything that you think is important that we didn't get to, and what you're looking forward to in the future. John, I'm going to put that to you, and then Karan, and then Varija, and Greg, we're going to end with you.

    John Capobianco • 53:18

    Again, I just don't think that you can kind of sit this one out much longer if you haven't started your journey. And that journey, for most of us, does start with, you know, just chatting with an LLM. But I wouldn't delay your personal journey into building agents, into Model Context Protocol, maybe even into RAG systems. I would hate to see a decade go by in networking, much like automation, right, before it becomes mainstream or before it becomes popular. I don't think we can wait. I don't think enterprises are going to wait. I don't think Greg is going to wait, right?

    John Capobianco • 54:04

    So I think we've already crossed the Rubicon here, and we need to collectively see how we can help each other out. See if there are communities we can get involved with. See if there's training or materials or frameworks out there. Just try to get involved in this space, because, and maybe rightfully so, a year ago you were skeptical about it, right? But I think a lot has changed with the new models and the new capacities and the new protocols. It's not the same thing as it was three years ago or even two years ago. Karan, what would you add?

    Karan Munalingal • 54:42

    All right. So, what I would say, you know, Scott, based on working with active customers and all the large customers, is: think about AI as just another tool, which it is. It is just another tool in your tool belt to facilitate progress and potential velocity, because you're still going to have your scripts and flows and all the other systems of record. That's all going to be there. It's just, effectively: can you take the next step forward to adopt what the world is throwing at you and use it wisely to actually move forward? Which is exactly what Greg and team are doing, right? Like, they are part of the innovation program with us with the whole flow AI concept.

    Karan Munalingal • 55:29

    They actually want to challenge themselves internally to say: what we did before with workflows, can we do some of that with agents, to adopt the value of reasoning? And this is the time to actually prove that out, because, as John mentioned, six to 12 months from now, it's going to start becoming a necessity for you to go that route, right? I think we always said doing more with less. Agents, AI, will accelerate that motion of doing more with less, because, Scott, you mentioned they have so much more knowledge. So, to the previous point, if you have clarity, if you are the SME, if you know the intent, you can strike right now. Like, this is your opportunity to actually build the right things and make that leap forward, instead of saying, I'm just going from no automation to some automation.

    Karan Munalingal • 56:22

    I think agentic AI and the entire framework enables you to leap forward and accelerate automation development, orchestration development with a lot of intelligence built in. So that’s what I would say.

    Scott Robohn • 56:35

    Agreed. Varija, from your perspective?

    Varija Sriram • 56:39

    Yes, Scott, to me, three things stand out: volume, velocity, and trust. Trust is in the reason-based approach to correlation, causation, and detection, right? And that is something that all the providers, Itential, Selector, and Lumen, are heavily investing in. So, a reason-based co-pilot. That takes care of the trust. Volume: we have to get going on many more workflows, automation first, right? So, volume is definitely key.

    Varija Sriram • 57:13

    And velocity is where the agents come into the picture: it's not just limited by human ability to be able to get this out. It is where the agents come into the picture, the agentic interaction, and the co-pilot. So, trust, volume, and velocity. Those are my three.

    Scott Robohn • 57:31

    Thank you. Thank you for making that easy for my small brain to comprehend. Greg, you get to take us home. What does around the corner look like on Lumen's journey to safe autonomy?

    Greg Freeman • 57:47

    Yeah, so as we think about it, again, we're going to build on a lot of our deterministic workflows in the near term to leverage these AI models. There are just tremendous efficiencies in where it's going. That's going to become table stakes for us. And so we've got a number of things on the drawing board that we're going to pursue. We've already got some working internally, and then we have some external-facing ones. So our customers will be able to interact with our AI agents and get the information to make our business as frictionless for them as possible. We've got to rethink how we've historically connected with our customer base.

    Greg Freeman • 58:25

    We've got to think beyond "please call, open up a ticket, log into a portal." There are better ways that we can do things going forward. And so we appreciate the partnership we have with all of our partners here, the collaboration that we have. That just helps us get more intelligent and push our business results forward. So we're going to use AI to help further that end and keep our network and the internet running well.

    Scott Robohn • 58:53

    Well, between that and your 12-months-out inflection point, we've got lots of good reasons to do a follow-up on this a year from now, or maybe less. But I just want to thank you all, corporately and personally. The panelists today provided a great conversation. And to all your colleagues who contributed to the many, many interviews in the case study, my thanks go out to them too. It was an incredibly enjoyable and super interesting project at this time, with automation and AI driving things further and faster. So thank you all. Greg, best of luck to you on your continued journey to see where AI tooling takes you and your networks.

    Scott Robohn • 59:38

    And to Varija, John, and Karan, keep up the excellent work. I'm sure we're going to see interesting things from all of these vectors over the next 12 months and sooner. Well, at Solutional, we love trumpeting success stories like this in networking and IT operations. If you're interested in more case studies and stories like this, or interested in how to advance AI in NetOps and across your IT infrastructure, you can send an email to hello@solutional.com or visit us at solutional.com . Any other questions that you have for our participants here, we'll make sure that they get routed to the right people on the right team. Thanks for joining us, and have a great day.