How Lumen Is Closing the Loop on Network Operations

In this AutoCon 4 keynote, Greg Freeman, Vice President of Network and Customer Transformation, shares Lumen’s multi-year journey from task automation to workflow orchestration, and how Lumen is layering AI on top to trigger deterministic workflows with guardrails.

What You’ll Learn:

  • From Automation to Orchestration
    Why “stringing automations together” into workflows is what actually scales across teams and domains.
  • Safe Autonomy, By Design
    How Lumen uses non-deterministic AI to initiate actions, while deterministic workflows enforce consistency, checks, and control.
  • Ops Outcomes That Matter
    How Lumen operationalized the pillars: don’t let it break, fix it fast, communicate.

Itential is the Workflow Orchestration Platform Powering Lumen’s Journey

When Greg Freeman talks about moving beyond task automation to workflow orchestration, he is describing the role Itential plays inside Lumen’s operating model. The 350+ deterministic workflows Lumen runs today are built and executed on an orchestration platform designed to coordinate people, systems, and network actions at scale.

This platform sits between intent and execution.

It integrates northbound with ticketing, alarms, analytics, and customer portals, and southbound with network automation tools and device APIs. That positioning is what allows Lumen to take signals from AI, monitoring, and users, then safely execute complex, multi-step workflows with consistency, checks, and control.

Orchestration as the Foundation for Safe Autonomy

Lumen’s approach makes a clear distinction between decision-making and execution. AI and analytics help determine what should happen. Itential orchestrates how it happens. Each workflow encodes human wisdom: pre-checks, approvals where needed, post-validation, rollback logic, and evidence capture.
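To make that guardrail pattern concrete, here is a minimal illustrative sketch of a deterministic workflow that runs pre-checks, executes an action, validates the result, rolls back on failure, and captures evidence along the way. This is not Lumen’s or Itential’s actual workflow model; every name in it is hypothetical.

```python
# Illustrative sketch only: a generic "guardrailed step" pattern, not Lumen's or
# Itential's actual workflow definition. All names here are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class StepResult:
    name: str
    ok: bool
    detail: str = ""


@dataclass
class GuardedWorkflow:
    """A deterministic workflow: pre-checks, action, post-validation, rollback, evidence."""
    name: str
    pre_checks: List[Callable[[], StepResult]]
    action: Callable[[], StepResult]
    post_checks: List[Callable[[], StepResult]]
    rollback: Callable[[], StepResult]
    evidence: List[StepResult] = field(default_factory=list)

    def run(self) -> bool:
        # Pre-checks: refuse to act unless the starting state is safe.
        for check in self.pre_checks:
            result = check()
            self.evidence.append(result)
            if not result.ok:
                return False  # the action never ran; nothing to roll back

        # The change itself (for example, reboot a card or reroute traffic).
        act = self.action()
        self.evidence.append(act)

        # Post-validation: confirm the network is healthy after the change.
        healthy = act.ok and all(self._record(c()) for c in self.post_checks)
        if not healthy:
            self.evidence.append(self.rollback())  # deterministic rollback path
        return healthy

    def _record(self, result: StepResult) -> bool:
        self.evidence.append(result)
        return result.ok


if __name__ == "__main__":
    wf = GuardedWorkflow(
        name="reboot-line-card",
        pre_checks=[lambda: StepResult("traffic-drained", True)],
        action=lambda: StepResult("reboot-card", True),
        post_checks=[lambda: StepResult("alarms-clear", True)],
        rollback=lambda: StepResult("restore-previous-state", True),
    )
    print(wf.name, "succeeded:", wf.run(), "| evidence:", [e.name for e in wf.evidence])
```

The point of the structure is that the risky step can only ever run inside the same guardrails, no matter what triggered it.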

This is why Lumen can trust workflows to reboot cards, reroute traffic, diagnose services, and even expose self-service operations to customers. The orchestration layer ensures actions are repeatable, governed, and auditable.

A Platform Built for Scale, Not Scripts

The scale Greg shared on stage only works because workflows are treated as products, not one-off automations. Itential provides the structure to design, version, reuse, and evolve workflows over time, while abstracting the complexity of underlying tools and domains.

As Lumen layers AI agents on top, those agents do not act directly on the network. They call orchestrated workflows. That separation is intentional. It is what allows Lumen to innovate quickly without sacrificing safety, reliability, or operational trust.
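As a rough sketch of that separation, an agent’s only “tool” for touching the network can be a call that starts an approved workflow. The endpoint, payload shape, and workflow names below are hypothetical, not Itential’s actual API.

```python
# Illustrative sketch: an AI agent "tool" that can only trigger a named, governed
# workflow through the orchestration platform's API. The endpoint, payload shape,
# and workflow names are hypothetical placeholders.
import requests

ORCHESTRATOR_URL = "https://orchestrator.example.com/api/workflows"  # placeholder
ALLOWED_WORKFLOWS = {"service-diagnostics", "reboot-line-card", "reroute-traffic"}


def run_workflow(workflow: str, inputs: dict, token: str) -> dict:
    """The only action surface exposed to the agent: start an approved workflow."""
    if workflow not in ALLOWED_WORKFLOWS:
        raise ValueError(f"Workflow '{workflow}' is not on the approved list")
    resp = requests.post(
        f"{ORCHESTRATOR_URL}/{workflow}/run",
        json={"inputs": inputs},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # job id / status; the workflow itself enforces checks and rollback


# The agent never gets device credentials or raw CLI access; if it decides a card
# should be rebooted, the most it can do is request the deterministic workflow:
# run_workflow("reboot-line-card", {"device": "router-1", "slot": 3}, token="...")
```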

Results at a Glance

60M+

Tasks executed per month

>350

Workflows built with Itential

4.5M to 5M

Workflow executions per year

80%+

Machine-to-machine interactions

  • Video Notes

    00:00 Introduction
    04:50 Lumen Network Overview
    07:38 Transforming NetDevOps Culture
    13:32 Three-Pillar Operational Framework
    17:30 Automation Cost-Benefit Analysis
    21:59 Employee Training Strategy
    27:16 Platform as Service Architecture
    31:40 Service Diagnostics Workflow Demo
    35:30 AI Integration Overview
    46:36 Guidance on AI Implementation

  • View Transcript

    Greg Freeman • 00:09

    Let me just say a little bit about the talk you’re about to hear from our speaker, Greg Freeman. The easiest way to say this is that Greg is a leader in network automation. And as I’ve gotten to know him a little bit over the past year plus, it’s really been amazing to see the longevity of a multi-year project within his networks to really drive automation, orchestration, and go beyond. I can’t say any more about him any better than a very special guest that Greg has invited, I mean created, to do some of the duties here. So let’s tee this up for the start of the talk. And Greg, you come on up when you’re ready.

    Greg Freeman • 00:51

    Hey, good morning, everyone. I’d like to introduce you to a digital version of myself, AI Greg. You can see that we both wear the same set of glasses. We’re wearing the same shirt. We look very similar and we sound very similar. But that’s really where the similarities end. You’re looking at someone who has more knowledge than I do about the network.

    Greg Freeman • 01:11

    AI Greg has more knowledge about technology than I do, and AI Greg has more knowledge about baseball than I have. He’s ready to take the stage and welcome you to NAF. Yeah, Greg. All yours.

    Greg Freeman • 01:25

    Woo! Now listen up. Good morning and welcome to the final day of the NAF conference. For the past few days, you’ve been dialed into the mechanics of the future. AI, orchestration, and code. That’s your knowledge base, baby. But now, the real Greg’s closing keynote, the Net DevOps journey is where we shift gears, big time.

    Greg Freeman • 01:42

    It’s not just about code or servers or even about AI itself. It’s about something far more important. Listen carefully. It’s about the wisdom, W-I-S-D-O-M, of the people who got us here and the people who will lead us into a world of highly intelligent autonomous systems. My human counterpart is ready to take the stage and show you the real difference.

    Greg Freeman • 02:02

    Woo.

    Greg Freeman • 02:09

    I created AI Greg in the summer. It was in June, and I was so proud. So I decided to invite my wife in to see a video of my creation. And so I had the real Greg, and then I had AI Greg. And so I started playing it, and my wife paused and was like, wow, that is creepy. And then she saw AI Greg. So it was even more disparaging.

    Greg Freeman • 02:35

    So I’m Greg Freeman, and I wanted to take a minute to talk a little bit about the journey we’ve been on at Lumen. What you just saw is the state of AI now, not the state of AI tomorrow. You too can create your own digital twin. That is an interactive avatar that you can talk with. You load it with knowledge, it will look a little like you, it will sound a little like you, and it will know way more than you know about whatever you program it for. And you can carry on with an interactive conversation. And so that video was just a little bit of an interactive conversation I was having.

    Greg Freeman • 03:09

    And so if you think about the state of now, what happens, and consider if someone were to call you, maybe your boss, and they said, Hey, I need you to null route this IP space, and it sounds just like them. Do you have a process and procedure in place to question that? Or maybe you just want to be more productive with your time. What you could do is send your digital version today to a Zoom call. You can have it not say a whole lot, just nod politely, take notes and email them back to you, or you can have it interact. That is where AI is today. Imagine where it’s going to be five years from now.

    Greg Freeman • 03:49

    So today, what we see is that this has a tremendous amount of knowledge. AI has a lot of knowledge, but it doesn’t have that human wisdom that is needed for us to move things forward. So people often ask, what is the difference between knowledge and wisdom? I think it’s a very simple definition. I’m a very simple person. Knowledge, for example, is knowing that a tomato is a fruit. Do you know that?

    Greg Freeman • 04:16

    Wisdom would say you don’t put a tomato in fruit salad. Try that at Thanksgiving and see how it goes. And so today what I’m going to do is share with you the wisdom of the humans who have been involved in a journey, the journey that we’ve had at Lumen. So to begin our discussion this morning, as we talk about our journey to NetDevOps using this human wisdom, who is Lumen? If you’re an NFL football fan, Lumen Field is home of the Seattle Seahawks. We sponsor that NFL stadium.

    Greg Freeman • 04:50

    We have four distinct brands. The first one is Lumen Technologies. Lumen is for fiber optics, Internet services, IP VPN, Metro, SASE, SD-WAN, all of the things that companies, enterprises, and hyperscalers would need for communications. We have Quantum Fiber, which is our residential brand. I’m in Phoenix, Arizona. I have fiber to my home. That is part of our Quantum Fiber brand, and we’re actually selling that to AT&T at this moment.

    Greg Freeman • 05:23

    We have our legacy brand that most people recognize more by name, CenturyLink. That is our history of telephony. That can be plain old telephone service, POTS, and that can be high-speed DSL. We still have DSL in those markets, and we have the LEC market. It’s very profitable and a very important part of our business, but a lot of copper there. And then we have Black Lotus Labs, which is our security arm. Now, with that, Lumen runs the largest interconnected network on the planet.

    Greg Freeman • 05:53

    You can Google what is the world’s largest ASN. You can go to the CAIDA report and ask what are the most interconnected ASNs. You will see that AS3356 has been the most interconnected network, by customer cone, the number of ASNs behind it, the number of IP prefixes, for over the last 15 years. We also, my favorite one, do you know we run AS1? How many people have connected to AS1? If you did, it was a long time ago, because we took it all private and it’s only internal now, but we still operate AS1 for internal traffic. We have over 340,000 miles of long-haul transport fiber.

    Greg Freeman • 06:32

    If you count our metros and all of the mileage, we have millions of miles of fiber in the ground, and we’ve committed to putting millions more of fiber in the ground over the next few months. We have over 163,000 buildings on net, and we carry over 350 terabits on our core, on 3356. And so what’s the context? Well, I’m in day two operations. And so in day two operations, all the core assets, all the operations, effectively what does or does not happen on the network, that is our team’s responsibility. And so we have a tremendous

    Greg Freeman • 07:10

    responsibility and opportunity to care and feed. It is a privilege to run the largest interconnected network on the planet. So, what we’ve been doing over the last few years with our journey is we’ve been pivoting our culture. We started in earnest in 2020. We wanted to pivot to a NetDevOps mindset. And if we were going to do that, our thought was that we wanted to start with our people. Our people are our most important asset.

    Greg Freeman • 07:38

    And so, like many tiered providers, you will have three tiers of support. You will have tier one that may not be very highly technical, but you will have a large population of them. So think of this pyramid as the total population of employees that we had supporting the network. As you move up the stack technically, tier two, network engineers, very valuable, know a lot. A lot of us in here are network engineers. And tier three was our top-echelon network engineers, those subject matter experts, maybe principal engineers, the ones we hold dear. What we said we were going to do in 2020 is we were going to invert that pyramid. We acknowledged our business reality that our networks were not getting any less complex.

    Greg Freeman • 08:22

    I’ll show you in a slide in a minute. We have four primary networks that we continue to run. We have a number of acquisitions and mergers, so there is a fair amount of technical debt that we’re working to simplify, and we’re in operations. Operations, as we’ve heard over the last few days, is seen as an expense. And what we need to do is reimagine it, to think of it as CapEx, a capitalized expense, so it’s an investment into making our network more reliable, having more features, and having more things that we can put in front of customers to solve business problems. So we were going to invert the pyramid. We acknowledged we likely weren’t going to be getting more people.

    Greg Freeman • 09:02

    It might actually shrink as you see the pyramid here. And so what we wanted to do is have our top tier people and have a bunch of them within the next few years be automation engineers. Next down the pyramid would be network engineers, still very critical to us, still very important, but now they’re one rung lower. And we wanted to convert as many of our network engineers to automation engineers as we could. However, we would also augment them as we had attrition. We would hire maybe people who were more familiar with DevOps. We always love to find network engineers who are net DevOps.

    Greg Freeman • 09:39

    Those are more difficult to find. So just acknowledging again, confronting the reality of what we had, we said we’re going to hire more development people when needed, if we can’t find that. And then over time, as we have attrition, the technicians, those tier one, a lot of administrative functions, we’re just not going to have as many of them. So we started this journey to workflow orchestration. We started with the end in mind, and there’s a progression that we were going through. And at the time you see AI on the far right, and I’ll talk a little more about that. But we started with workflow orchestration in mind.

    Greg Freeman • 10:16

    We wanted to have a PaaS, a platform as a service, that everyone could continue to contribute to and grow over time. And so we were starting to build our workflows, whether people really knew it or not. We wanted it to be a cohesive system. So, as we mentioned, the people skills pivot is first, on the far left of this flywheel; that’s where we had to start. And so we spent a lot of time developing a culture and a training program to allow people some psychological safety to get there, to become an automation engineer if they so chose. And if they didn’t, we said that’s okay. We still need network engineers as well.

    Greg Freeman • 10:56

    And so we will meet you where you’re at. Then we needed data. You’re only as good in your automations and orchestration as your data. There have been a number of workshops, you hear that time and time again, but it is true, it is a foundational building block. Garbage in, garbage out. You can only be as good in your automation as your data. However, don’t let that stop you.

    Greg Freeman • 11:18

    Confront reality, start where you’re at, and you can still do some automation even if your data isn’t perfect. There are techniques to clean it up. Then we had task automation. A key misunderstanding a lot of people had, including a number of us in the program, was that we were going to have a highly automated network, which was true, but we really wanted to have a highly orchestrated network. And the key difference: if you think about automation, maybe as a network engineer, you create a piece of code. And that code is really good. It solves a business function or a business problem, maybe for you.

    Greg Freeman • 11:54

    It might make your life a little easier. But if you string a number of those task automations together and solve a larger business problem, that’s a workflow orchestration. So now you’re not just solving your problem, you’re solving maybe your team’s or your organization’s problem. So getting into that mindset, that we’re going to be solving business problems through orchestration, was a key delineator for us. I think of automation like an assembly line, or maybe even a more modern one with robots, where you’ll have one robot that’ll have the chassis of the car roll in, and then there’ll be a wheel put on it. Putting that wheel on is one automation.

    Greg Freeman • 12:36

    And then perhaps there’s a second one where the transmission will get put in. That’s a second automation. And then perhaps it’s a windshield, a third automation. You string all those automations together, we produce a car, you get one workflow orchestration. And so that’s how we think a lot about it. And then in the middle, you can see we have the customer experience. We always want to be mindful of that.

    Greg Freeman • 13:00

    And we had AI at the center of this. We’ve been working with machine learning at that point, and we’ll talk more about our AI strategy and where we’re at today, but it adds on very nicely to that workflow orchestration. So that’s workflow orchestration. Now, what does it do, and which ones did we go after? If you look at the bottom, inside of day two DevOps, we have three principles. First principle what’s the easiest problem to fix? It’s the one you never get.

    Greg Freeman • 13:32

    If you spend more time on prevention, do you want to spend your time fire fighting, or do you want to spend your time on fire prevention? So we said our 1st pillar is we’re going to spend a disproportionate amount of our time on don’t let it break. Don’t let it break. Fire prevention. Number two, we’re a network operator. We run the most interconnected network on the planet. Things are gonna break.

    Greg Freeman • 13:59

    We can plan as best we can, things are going to happen. So when they do, we need to fix them fast. Pillar number two, fix it fast. So we wanted to have workflows that help us fix it fast. And then number three, and this is one that may not be quite as intuitive to some of the network engineers, is communicate. Effectively communicate with people in the mechanism that is easiest for them. Effective communication.

    Greg Freeman • 14:30

    So, for example, one workflow: if you are a Lumen customer and you’ve seen some of the events that we have, some of the large outages, you probably received a bunch of photos of that outage. It may be construction, like next door where they’re building a building. It might be a cut of our fiber that’s 10 or 20 feet down, and we have to have dig boxes and people on site and trench construction equipment. We send those photos because of what we found and what our net promoter scores tell us, and I suspect it’s true of you: if you had a problem, maybe it’s at the hotel, maybe it’s with your provider, and they fix it in four hours, but they never tell you, what are you gonna be thinking for those four hours? These people don’t know what they’re doing. Are they just making all this up? You tell me it’s fixed?

    Greg Freeman • 15:23

    I don’t know if I believe you. But if you tell them it’s gonna take eight hours, but I’m gonna give you hourly updates and it’s meaningful, what we found is more customers prefer that. And so that effective communication, do not underestimate its power. Those were the three tenets. So as we started with those three tenets, we wanted 80% of human hands out of the network within five years. That was the goal when we sat down and started in 2020.

    Greg Freeman • 15:51

    At the macro level, 80% of human hands out of the network, or said another way, 80% machine to machine by 2025. Since that period of time, we’ve developed 350 workflows. Actually, as of yesterday, there were 355 workflows. Those 355 workflows are, again, made up of a number of automations. In any month we have over 60 million tasks that run through those 350 workflows, and those 350 workflows in aggregate execute at about 10 workflow runs per minute. Or said another way, it’s somewhere between 4.5 million these days to 5 million workflow runs per year. And so those workflows, those 350: one of them I mentioned is sending out photos to communicate.
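    As a quick sanity check on those figures (simple arithmetic, not from the talk), a sustained rate of roughly 10 workflow runs per minute works out to a little over 5 million runs per year, which lines up with the 4.5 to 5 million range quoted:

    ```python
    # Quick arithmetic check of the rates quoted above (illustrative only).
    runs_per_minute = 10
    runs_per_year = runs_per_minute * 60 * 24 * 365
    print(f"{runs_per_year:,} workflow runs per year")  # 5,256,000, i.e. roughly 5M

    tasks_per_month = 60_000_000
    runs_per_month = runs_per_minute * 60 * 24 * 30
    print(f"~{tasks_per_month // runs_per_month} tasks per workflow run on average")  # ~138
    ```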

    Greg Freeman • 16:40

    One of them is a service diagnostic tool, and I’m going to show you a demo of some of these. One of them, it reboots cards in our network without a human touch. How many of you would be willing to allow an orchestration to go into your network and reboot a card without you looking at it? John’s the only one raising his hand. Good job, John. Yeah. And so we’ve got workflows, some of which are no human touch.

    Greg Freeman • 17:07

    We wanted to make it as closed loop as we could. Some of them are kicked off by humans, some of them have a human in the loop. But in all of those, we want to hit those three pillars. Don’t let it break, fix it fast, and communicate. So, where does one begin? As we started, we looked at the automation cost versus benefit. Now, some of you may have seen this graph.

    Greg Freeman • 17:30

    We pulled this one from Google, and it’s about eliminating toil. How do we eliminate that busy work and make all of our lives easier so we can just continue to iterate? So the way the theory goes is: I have a certain amount of time on the x-axis and I have work on the y-axis. And if you look at the line below, I have work on the original task, and then at some point I should have free time. But being innovative, we’re going to write code. So you write code, it takes a lot more work, but then the automation takes over, and you never have to revisit it. That’s the theory that we want, because then we build a library, we can grow on it, we can be innovative.

    Greg Freeman • 18:09

    However, even if that’s what we want, sometimes reality hits, and if you track the bottom, you could do the task manually, then you’re done, or you could write the code, then you could debug the code, then you could rethink the code, and then you have ongoing development forever. And so we talked a lot about, okay, what is our cost benefit? How do we limit this type of action, acknowledging we want to eliminate the toil in our network, and how do we do it in such a way that’s going to be scalable? And this goes back, you’ve heard the theme this week: it is great to be innovative, but there needs to be a business problem that we’re solving, and there needs to be something, some value creation that we’re adding. We think of it two ways. I can have operational cost avoidance, OpEx avoidance, or I can have value creation.

    Greg Freeman • 19:08

    So the way I think about it: if you are remodeling your house, every month you probably get a power bill or a water bill. That’s OpEx, an operational expense. It’s gone. And the business just sees that as overhead; you’re costing us money. But if we’re building and we’re remodeling maybe our kitchen, that is seen as an investment, and we can treat that as capital, or CapEx, a capitalized expense. And businesses will see that as accretive to our business, or beneficial, and so you’re more likely to get funding for what you create.

    Greg Freeman • 19:44

    And so that was how we were thinking about that. And we began by doing some hard work. I’d love to tell you there are shortcuts, but like most things that are worthwhile in life, you have to do the work and you have to start. And for us, it meant that we had to standardize a few things. Now, there’s this tension: we don’t want to standardize everything, but we need to standardize at least enough of a base set that it’s not chaos. So what we did was we had two different types of documents that all of our members who were retrained had to adhere to. One was called a process description doc.

    Greg Freeman • 20:24

    So if you have a business problem that you’re trying to solve very quickly, write up a description; it may even require, and it does, a little flow chart of what you’re trying to solve. But it should be very doable for you in just a few hours. The second was the solution design doc. And that’s taking that process documentation and putting it into a coded system. And so, with the process description doc and the SDD, I will tell you this was the number one cultural problem that we had. People who were network automation experts, who’d been used to coding in their own silos, basically said: you’re making me do a few hours’ worth of work, I don’t see the value. Why are you wasting my time?

    Greg Freeman • 21:15

    That was it. But what we were doing is we were reinforcing a different mindset, a mindset shift: we can no longer think in terms of standalone pieces of code. We’ve got to think about design intent all the way across the board, the business value we’re going to deliver, and with that solution design document, we’re going to have some people who will help review it, figure out what the solution is, and then see what gaps or what overlap or what we’re already working on. And so that was part of the design that we had there. And we had to upskill our employees to do that. If you remember back to the inverted pyramid we had, we wanted automation engineers. And so how do we do that?

    Greg Freeman • 21:59

    Well, we started with training. We had what I call our first followers. We had a group of between 20 and 30 people that we thought had the aptitude and the attitude to make the shift. It’s not enough to just be the best technical person. You have to get along with people and you have to lead and influence people. And so many people, and I’m an engineer, a double E, that’s my background, engineers like to do things themselves from time to time.

    Greg Freeman • 22:30

    Maybe it’s: I’m just gonna do my thing, I’m not a role model, I’m not gonna influence people. All of us influence people. Your peers are watching what you do, and whether you have a leadership position by title of manager or director or VP, or if you’re an individual contributor, you are influencing people whether you want to or not. And so we said we’re going to pick people who have both the aptitude and the attitude to be our first followers. We’re going to invest heavily in them with new training. We put them through 40 hours of specific training. We gave them a lot of required knowledge reading, and then we invited them all on site. We wanted that human interaction so we could get that shared human wisdom to get our flywheel going.

    Greg Freeman • 23:17

    And it was a mindset shift that we were working on. We want all of our employees to embrace AI. We talked a little bit about this yesterday. We don’t want to have an org of secret cyborgs. We do not want to be telling people, we don’t want you using AI, it’s scary and it’s going to take over the world. We want to celebrate responsible AI. We want to put in good guardrails that allow our people to grow.

    Greg Freeman • 23:43

    There’s a saying I like. We used it a lot early on. It’s called go fast and break things. How many of you want to go fast and break things in your network? As the guardians of the network, when we break things, we make the news, and not in a good way. And we have had workflows that have made the news not in a good way. And it’s not a call that we’re going to be reckless.

    Greg Freeman • 24:14

    But what it is, is a call that we’re going to make some strategic bets. We’re not going to Vegas, putting all of our bets in and pulling a lever on a slot machine and hoping for the best. It’s more that we’re going to be strategic card counters. We’re going to go to Vegas and we’re going to make specific bets that are in our favor. We don’t have time on our side as a luxury. There’s competition all around. We have to grow more products, so we’re going to take some calculated risks.

    Greg Freeman • 24:42

    We’re going to go fast. We’re not going to be crazy, but when things go bad, that’s going to be more of a process problem. That’s more of my problem and my failings, not an individual engineer’s failing. So setting up some of those psychological safety items was really key to going fast. Another way to say it: we just need to have a bias towards action. We need to be decisive. We can plan a long time, and that is great.

    Greg Freeman • 25:20

    Plans are worthless. But planning is everything. Or you can say it a different way. I really like that great American philosopher, Mike Tyson. Remember him? Iron Mike? He said everybody has a plan until you get punched in the face.

    Greg Freeman • 25:41

    And that’s really what it’s about. Plans are worthless because they’re gonna change very quickly. But if you’ve gone through the process of the PDD, if you’ve gone through the analysis on how we’re gonna do this, that’s where the true knowledge and the true wisdom, the true gold, comes in. And then we did a lot of things for employees. I believe it was Scott who said he’s a Mac guy. For all of our developers in the program and our first followers, they all got Macs. Guess what all the other employees got?

    Greg Freeman • 26:11

    They got Windows. If they wanted to go to training, I set aside enough budget; I denied zero training requests from any of those employees who were in that first followers group. And today we’ve got a good contingent of people here. Good to see you. Appreciate y’all for coming out. They’re in the program as well.

    Greg Freeman • 26:29

    So it’s like, you want more training? It is an investment in our people. It is not an expense. It is us making things better. And so we have to create a culture that’s going to allow our employees to flourish and allow all of us to upskill our game. So creating that culture, recognition: we have a lot of people who do great things inside of Lumen. We’re going through a lot of network engineering things.

    Greg Freeman • 26:57

    The ones that get most of the recognition are in this program. So, where do we start? We were building a platform as a service. This is our very humble architecture. We started with ticketing at the top. Our internal system, you probably won’t recognize it. It’s OpsDB, or I’m sorry, Ops Console for ticketing.

    Greg Freeman • 27:16

    OpsDB was our CMDB, our source of truth. RFM, that was our alarming system. And then we have Splunk for data analytics. I suspect you’ve all heard of Splunk. And so again, we had to confront our reality of where we were. And so we said we’re building a PaaS, the platform as a service; everything southbound is going to be Ansible. And so we deployed Ansible for each of our networks.

    Greg Freeman • 27:39

    The green boxes on the far right, that’s AS209. The blue network, that’s AS3549. The red network, that’s AS3356, the most interconnected network on the planet. And then orange is good old AS4323, which was TW Telecom. And so we deployed, in our humble beginnings, a dev, a test, and a prod, and we had forward-deployed VMs that can speak Ansible. One of our standards: we said we’re gonna prefer Python, and Ansible is the way we’re going to go to the network. And so this was in 2020, and we still had people who had, and still do a little bit, maybe Perl scripts, and we still had Expect scripts for some things, and we still had some Bash, but we said going forward, we’re gonna standardize on that.
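    A minimal sketch of what that Python-plus-Ansible southbound standard can look like in practice; the inventory paths and playbook names here are hypothetical placeholders, not Lumen’s actual tooling:

    ```python
    # Illustrative sketch of a "Python + Ansible southbound" push. Paths, inventory
    # names, and playbooks are hypothetical placeholders.
    import subprocess


    def push_config(network: str, playbook: str, check_only: bool = True) -> int:
        """Run an Ansible playbook against one network's forward-deployed inventory."""
        inventory = f"/etc/ansible/inventories/{network}.ini"  # e.g. as3356.ini (placeholder)
        cmd = ["ansible-playbook", "-i", inventory, playbook]
        if check_only:
            cmd.append("--check")  # dry run first; a workflow would gate the real push
        return subprocess.run(cmd, check=False).returncode


    # Example: dry-run an interface-description playbook against the hypothetical AS3356 inventory.
    # push_config("as3356", "playbooks/set_interface_descriptions.yml")
    ```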

    Greg Freeman • 28:24

    Things may come and go, but for now that’s what we’re gonna do. We had to make those standards. And then very quickly, what we did was we said, okay, here’s what we’ve grown. Every time we build a new workflow, as part of that process, we’re going to see technically what we need. We need an adapter, an API adapter, into our PaaS. And so we had one team that was looking at and building those APIs. So we did two variants.

    Greg Freeman • 28:50

    We did a Java-based mediation API, and then we did a Python API. And so every time we built one of those adapters into our PaaS, well, now the next workflow just became that much easier. The first, I would say, 18 to 24 months, we were building the ecosystem, or the platform, while we were building the workflows. And as the teams began to figure out effectively how to line up the process, the PDD, the SDD, to figure out how do I navigate in this ecosystem, to navigate what are the best practices, that’s when we really started seeing an exponential growth in our acceleration. So today we have OSI Layer 1 transport, OSI Layer 2 Metro, OSI Layer 3 IP and IP VPN, and then we have VoIP, all those forward-deployed Ansible speakers. We have, and this actually doesn’t show you the scale, I just ran out of room on the slide.
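    For illustration, a small Python adapter in the spirit described here might look like the following; the endpoints and field names are hypothetical, and the point is only that once an adapter exists, every later workflow reuses it instead of re-implementing the integration:

    ```python
    # Illustrative sketch of a small Python adapter: wrap one northbound system behind a
    # uniform interface so workflows can reuse it. The URL and endpoints are placeholders.
    import requests


    class TicketingAdapter:
        """Minimal adapter exposing just the operations workflows need from a ticketing system."""

        def __init__(self, base_url: str, token: str):
            self.base_url = base_url.rstrip("/")
            self.session = requests.Session()
            self.session.headers["Authorization"] = f"Bearer {token}"

        def open_ticket(self, summary: str, severity: str) -> str:
            resp = self.session.post(f"{self.base_url}/tickets",
                                     json={"summary": summary, "severity": severity}, timeout=15)
            resp.raise_for_status()
            return resp.json()["id"]

        def add_note(self, ticket_id: str, note: str) -> None:
            resp = self.session.post(f"{self.base_url}/tickets/{ticket_id}/notes",
                                     json={"note": note}, timeout=15)
            resp.raise_for_status()


    # Once an adapter like this exists, every later workflow can call open_ticket()/add_note()
    # instead of re-implementing the integration.
    ```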

    Greg Freeman • 29:50

    We have over 50 VMs, roughly, with about a hundred thousand nodes, roughly, in that Ansible stack. OSI Layer 1 is more of a TL1 speaker, so we use different partner products like netFLEX from LightRiver, and then we had to create our own custom TL1 wrappers for some of these various workflows we have. But the thought is, instead of having, as John would say, glasses of pain or panes of glass, we wanted to have a cognitive NOC: those alarms would roll in, a machine would see them, and our PaaS would go out and take action on the network. I tell people I’ve got hundreds of dashboards I ignore daily, and I don’t need another dashboard to ignore. What I need is a solution inside of a PaaS that sees that system and that alarm and then will actually reach into the network, reach into the ticketing system, reach into the alarming system, reach into maybe my sparing system, reach into my provisioning system, have all those northbound adapters so that I can do something. I don’t want to just ignore an alarm. I want to have what we call a cognitive NOC, a machine that’s doing a lot of this in the background.

    Greg Freeman • 31:00

    So today, over 80% of all of our interactions, all the configs on our network, are pushed machine to machine, not by human hands in our network. So here’s an example of one of the workflows, and oftentimes demos are worth a thousand words. One of the biggest challenges we have on our network with all that fiber, want to guess what it is? Fiber cuts. Why? Inside of the US, there’s construction going on, just like across the street. In the summer, this summer, think of all the call-before-you-dig locates that construction people are going to do in the US.

    Greg Freeman • 31:40

    Nationally, 250,000, 250,000 call-before-you-dig locates per month. It’s gone down a little bit for the winter. It was about 180,000 last month. And so if you just do the law of probability and you say, hey Lumen, I want to buy an unprotected wave between New York and Seattle going through Dallas, what’s the probability that’s gonna get cut? The longer the mileage and the more construction going on in the US, the higher the probability. And there are vandals too. And so this is a problem that our engineers had.

    Greg Freeman • 32:19

    If you look, this is just a work/protect service. So a customer can buy from Lumen a work/protect service; green would be work, red would be protect. If there’s a cut, it fails over. That adds latency. And so what if you were an engineer and a customer had that problem? They’re going to ask you: I see additional latency, I’m not sure why.

    Greg Freeman • 32:39

    Can you tell me if that’s accurate? Is it healed? And can you roll it back? Oh, absolutely, it’s all good. Can I roll it back now? Oh no, no, no. I’ve got a data center.

    Greg Freeman • 32:49

    I’ve got to put in a change process, even though it’s a 50 millisecond rollover. I need you to roll it back Saturday night at 2 in the morning. And so what do we do? Well, network engineer, we need you Saturday, 2 o’clock in the morning, to talk to this customer and roll it all back. So we orchestrated it. Once we orchestrated it, we put it in our customer-facing portal. And this is our customer-facing portal.

    Greg Freeman • 33:13

    So a customer can log in, they click the service diagnostics, and you can see this is one workflow where it’s running a number of pass fail checks. And in this case, it identifies that the service the customer has is protected. It’s rolled, and it’s got 18.7 additional milliseconds of latency. And here’s the actual route across the Lumen network. Internally, we can map this down to the street view, but we give it on the Google map here. And you see it has the length and the latency. And so it looks like it’s about 60 milliseconds.

    Greg Freeman • 33:48

    So then to switch it back, we created one workflow where you put in a date and time, and then you hit, effectively, schedule. And that is a deterministic workflow. The wisdom of Lumen engineers built that, and on the back end, this is what’s happening. It’s just a single workflow. It looks like the business logic, if you haven’t seen the canvas, start to finish. And that one ran all the pre- and post-checks (is the other side stable?) and took 2.3 minutes.

    Greg Freeman • 34:16

    So what used to be an individual engineer that would have to be scheduled at a date and time of choosing, that was toil that a network engineer would have. So that engineer built one workflow, a single workflow. We got it running internally, and historically the engineer would schedule it. And then we thought, well, hey, we’ve had all this value to customers, why don’t we put it in the portal? And that’s what we did. So now customers can self-serve. So we have 355 workflows, and more recently we’ve been adding AI on top of that, artificial intelligence.

    Greg Freeman • 34:51

    And our thought is, since AI is non-deterministic, we can use that non-deterministic flow to trigger our deterministic, human-made workflows. So to begin, you see the diagram on the right. Network orchestration: we want our AI to help us care and feed for that, or help us spot problems in our network that it can then go out and take action on. So there are three types of AI, and it’s always good to level set because everybody thinks of AI differently. Machine learning is just AI that predicts. It’s been around for many years and was one of the earliest ones we used. Generative AI is AI that creates.

    Greg Freeman • 35:30

    That’s more like a ChatGPT, and that’s what people often think about when they think of AI. But the newest one is called agentic AI. That is AI that takes action. AI that takes action. So our view, if you see the chart on the right: we might use just machine learning, we might just use generative, we might just use agentic, or we might use a subset or all of them together to help us care and feed for that network orchestration. And so here’s an example as we move through it. This is another ADVA workflow.

    Greg Freeman • 36:08

    So this is an agentic solution. If you look down at the bottom: can you tell me the light levels for a device? And then we put in the device name and the port, and you hit enter. And as soon as that is hit, what’s happening is it goes in, and again, this would be workflow example number two of 355. This workflow starts up at the top. You can almost think about it as the coding logic that it’s going through. And while it’s progressing, you’ll see some of these boxes change as it bounces through them.

    Greg Freeman • 36:39

    It’s doing all of the variable arrays, it’s doing merges, it’s doing transformation of some of our data. Remember again, we don’t always have pristine data, but that’s okay. We’re gonna use the data we’ve got, we’ll make it pristine over time, and it’s calling a subset of automations. So you can see some of the little triangles there. So we put together a number of automations to simply go into the equipment, log into it, and retrieve all this information. It finishes, and it did all that in 9 seconds. Nine seconds.

    Greg Freeman • 37:13

    So the agentic solution is looking at the request. It says, okay, in natural language, tell me the light levels. It finds the tool that we’ve developed, which is one of our workflows. And in natural language, it’s now returning the analysis, the recommendations, the dB light levels, and all of that. And you can see a little bit of what it’s doing. But think about that. What was my login?

    Greg Freeman • 37:37

    Did I have to know my login? That’s an OSI Layer 1 device. Did I have to know any TL1 commands? And by the way, in that workflow demo before, in our network we have Infinera DTN and DTN-X, we have Ciena 6500s, we have ADVAs, we have Nokia, we have a lot of 1830s, we have a whole number of different variants. And so what we do is we iterate, we solve for one, and we keep moving. And so that’s effectively part of the power of this agentic approach. It can go in, it can give us all these recommendations.

    Greg Freeman • 38:13

    You can see at the bottom, it’s saying here’s the workflow that the agentic solution used. So this is an internal one. You can see it’s called AskGreg. And we use that, it’s based off of some Pydantic logic, to utilize all these different workflows that we’ve done, and then we’ve been pulling those out to put them into the right application that our engineers would use. The thought is, again, put the right tool in the engineer’s hands, put all this AI in that logic. So it looks very easy. There have been a number of MCP discussions this week, but this is how we were viewing the world.

    Greg Freeman • 38:53

    If you look down at the bottom, that’s where the tool sets are. So we’ve developed internally over a thousand APIs. We have over 350 workflows. We also have some other types of connectors down at the bottom that are tools. And those tools we connect up via MCP to the agents that you see right there in the middle. And so you just saw one agent in action; that was our transport agent. The ones that are more impactful, and the easiest ones to start out with, are ticketing.
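    As a rough sketch of that pattern, an existing deterministic workflow (like the ping workflow Greg demos shortly) can be exposed to agents as an MCP tool. This assumes the FastMCP helper from the MCP Python SDK, and the orchestrator URL and payload shape are hypothetical placeholders, not Lumen’s or Itential’s actual API:

    ```python
    # Illustrative sketch: expose an existing deterministic workflow as an MCP tool.
    # Assumes the FastMCP helper from the MCP Python SDK; the workflow-runner call and
    # its URL are hypothetical placeholders.
    import requests
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("network-tools")

    ORCHESTRATOR_URL = "https://orchestrator.example.com/api/workflows"  # placeholder


    @mcp.tool()
    def ping_latency(source_site: str, destination_site: str) -> str:
        """Measure latency between two sites by running the existing 'ping' workflow."""
        resp = requests.post(
            f"{ORCHESTRATOR_URL}/ping/run",
            json={"inputs": {"source": source_site, "destination": destination_site}},
            timeout=60,
        )
        resp.raise_for_status()
        result = resp.json()
        return f"{source_site} -> {destination_site}: {result.get('latency_ms', 'unknown')} ms"


    if __name__ == "__main__":
        mcp.run()  # an agent (or supervisor agent) can now discover and call ping_latency
    ```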

    Greg Freeman • 39:26

    If you have a problem at Lumen and you say, hey, this was a painful outage, I need an RFO for that. Historically, we would have people who would read through the tickets, pull out the date and time, and produce a very nice looking reason-for-outage doc. Want to guess how we do that these days? We use AI to do that. And so with some of these agents, you’ll see one is the ticketing agent. That’s a very good one for, again, eliminating some of the toil, some of the actions. And then we have a supervisor agent on top of that.

    Greg Freeman • 40:01

    At the top, we have a lot of these internal apps. And so you were seeing one internal app, but the thought is to take some of the generative and put it in those apps, or even the agentic. So we’ve been deploying that. Non-deterministic AI connecting to and calling deterministic workflows. So, next demo, back in AskGreg. And in this case, we’re using MCP for a different agent. And this one’s gonna talk about the health of our network.

    Greg Freeman • 40:30

    So very quickly, we can say, hey, what’s the health, my router health? Can you tell me how it looks? And so this connects up to one of the companies we’ve partnered with that can help show us honeycomb graphs, in this case, of how all of the routers are looking for this particular subset. So in this case, it’s a fairly small number for us: 250 routers. There are 29 that have warnings and none that are critical. I believe it was Internet2 that was shown a little bit yesterday, and this one resonated. We’ve connected up an agent that has a workflow.

    Greg Freeman • 41:04

    One of our most run workflows, a very atomic-level workflow, is ping. And so you just say, what’s the latency between Los Angeles and Cermak, C-E-R-M-A-K, Chicago? And then the tool says, or the MCP says, I think I have a tool that knows how to do latency. That’s probably a ping tool, as simple as that. And so now what the tool just did is it logged into the network. We run Juniper, we run Cisco, we run Nokia. I didn’t have to know my login, it just did it.

    Greg Freeman • 41:35

    And it returned, in this case, what looks like about 42 milliseconds of latency. Again, utilizing the wisdom of human workflows, where we can now have non-deterministic AI running them. So what’s the state of AI? If you follow the tracks, there have been two competing arguments. Wharton came out with a study in October that said 75% of AI projects are doing great and they are worthwhile. They’re seeing value. And at the bottom they said success is linked to organizational readiness and having a clear strategy.

    Greg Freeman • 42:14

    But if you go back two months, you might have seen the MIT study. And MIT did a case study, and they said 95% of all people who’ve deployed AI see zero value or zero return on their investment. So, how do we think about these two studies? And what I’ve learned over time is that two things that may seem to be opposed can actually be true at the same time. There’s typically truth in both positions. And so if you look at the bottom of the MIT challenges, it said, yes, there’s not a lot of value for these enterprise customers. I think it was 300 or so customers they had looked at.

    Greg Freeman • 42:56

    The main issue was the learning gap, the tools, and the organization. The technology is not the limiting factor. Organizational readiness, effective integration, and skills development are critical to success. Or said another way, whether you take Wharton or MIT, culture matters. People matter. And if we don’t set the right cultural foundation at the beginning, the probability of success, whether it’s orchestration, automation, or AI, it is not going to go well. So we have to invest in our people.

    Greg Freeman • 43:34

    The last demo that I have for us is one going back to the fiber cut locations. Anytime we have a fiber cut, it takes us well over 12 hours to fix it. We have to locate where it’s at, we have to dispatch someone to assess it, and then we have to repair it. So what we’ve done more recently is we want to shave off as much time as we can. So in this one, this is an unprotected wave. And again, anytime you buy an unprotected wave, please, please, please have a diversity strategy. And a second carrier may not be a diversity strategy.

    Greg Freeman • 44:07

    Zayo and Lumen and AT&T and Verizon are often in the same trenches, along the same railroad tracks, along the same interstates. And so here’s one that says, hey, can you tell me where the fiber cut is for this particular ticket? And so you hit enter, and the agent says, okay, I’ve got one agent that knows all of our facilities: our fiber, our hand holes, our manholes, our sites, our routers, our switches, it’s all in there. And then we have another data set that knows about all of those call-before-you-dig contractors. And so what if we can now take all of that data and have our AI agent look at it? And so this is what it returns. This is a Google map.

    Greg Freeman • 44:50

    We’ll zoom into it there in AskGreg. And if you look up in the top right, see that red one? That is a remote OTDR shot. And so OTDR tells you a distance along the cable that you’re on, how far down the road it is. And so what we’ve done is we now calculate the GPS coordinate of it. The orange boxes, those are the construction locates, the ones that actually called us. So we can say, okay, we’ve got all that construction.
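    A minimal sketch of the OTDR-distance-to-GPS step described here: walk the known fiber route until the cumulative distance matches the OTDR reading, then interpolate. The route points are made-up placeholders, and real cable slack and routing will add error:

    ```python
    # Illustrative sketch: convert an OTDR distance into an approximate GPS coordinate
    # by walking a known fiber route (a list of lat/lon points). Route data is made up.
    import math


    def haversine_km(p1, p2):
        """Great-circle distance in km between two (lat, lon) points."""
        lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371.0 * math.asin(math.sqrt(a))


    def locate_along_route(route, otdr_km):
        """Return the estimated (lat, lon) that is otdr_km along the route from its A end."""
        remaining = otdr_km
        for a, b in zip(route, route[1:]):
            seg = haversine_km(a, b)
            if remaining <= seg:
                f = remaining / seg  # fraction of the way along this segment
                return (a[0] + f * (b[0] - a[0]), a[1] + f * (b[1] - a[1]))
            remaining -= seg
        return route[-1]  # OTDR distance exceeds route length; clamp to the Z end


    # Example with a made-up three-point route and a 12.4 km OTDR reading from the A side.
    route = [(33.4484, -112.0740), (33.5200, -111.9500), (33.6000, -111.8000)]
    print(locate_along_route(route, 12.4))
    ```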

    Greg Freeman • 45:15

    And then the blue box: because you have an A and a Z side, we do a remote OTDR from the other side, so we can now figure out where the problem is, and then we pull up all the construction people. And we use the AI to help us pinpoint that GPS coordinate and the most probable construction company. So there are the longitude and latitude of it. So the other day we called up one of our contractors after we saw a cut. So we called him up, and the guy answered the phone, and we’re like, hey, you’re at the construction company? Would you happen to be out on site? And he’s like, yeah.

    Greg Freeman • 45:50

    And we’re like, would you happen to have cut our fiber? And he’s like, uh, yeah, I think I just cut your fiber. How’d you know? And it’s because of this. And you can see here, we just emailed it out. So what we’re doing is, because we have this intelligence, we’re pushing this out to our field so we can get a quicker jump on those fiber cuts. It used to take us a couple of hours just to locate.

    Greg Freeman • 46:14

    Because the way it works, it’s like, well, we know the cut is 10 miles down this cable. Well, where is 10 miles down this cable going that way? And so it’s been a real benefit for us. AI is important. Last thoughts. AI development goes through stages, the Gartner hype cycle, if you haven’t seen it. This is the way that most innovation triggers happen.

    Greg Freeman • 46:36

    Always enjoy reading these so you know where things are at. When you have an innovation trigger, you will have great euphoria. And it gets up to the peak of inflated expectations. And then what happens? You get punched in the face. And you roll into that trough of disillusionment.

    Greg Freeman • 46:55

    But that’s okay. Because once you come out of that, you get on the slope of enlightenment. And so it’s okay to have pain and suffering. Nobody likes that, but that’s how we get perseverance. Perseverance builds character, and character gives us hope. And so we get on that plateau of productivity, and we have to narrow that trough of disillusionment. So at the top of the peak of inflated expectations: AI agents.

    Greg Freeman • 47:20

    AI agents. So just acknowledge where Gartner says AI agents are; we’ve still got a little pain and suffering to do. So as we close, some cautionary words about AI, what it’s not. It is not genuine understanding. It is not human wisdom. It is not consciousness. Today it’s just pattern recognition.

    Greg Freeman • 47:43

    It’s good at knowing facts. It’s good at knowing knowledge. It doesn’t know human wisdom. AI is not a universal human replacement. I love the quote by Sam Altman. AI will not take your job, but someone using AI might. And so that’s very true.

    Greg Freeman • 47:59

    Let’s not be an org of secret cyborgs. Let’s embrace AI and go out and do great things. And finally, AI is not a one-and-done development. If you remember back to that toil slide where you’re going to have work forever, we want to minimize that, but there is going to be upkeep with everything we do. So we’re at NAF. We are the Network Automation Forum.

    Greg Freeman • 48:23

    If you are here, that means you’re an innovator and you are leading and you are influencing people. And so I would just ask you, as you go forward, be thoughtful of all the things that we’re going to do, the innovation we’re going to drive forward, because whether you want to or not, we are the ones driving it forward. And if not us, who? And if not now, when? So thank you so much for being at the Innovation Forum. We’re going to close with just a couple of words. Oh, maybe.

    Greg Freeman • 48:53

    One last moment, Network Automation Forum. You’ve seen the five-year journey, the 4 million orchestrated workflows, and the power of agentic AI. You’ve seen the difference between the man on the stage and the face on the screen. Never confuse AI and humans. I, AI Greg, embody vast non-deterministic knowledge. I can calculate and predict. But the journey that produced those 350 deterministic workflows, that’s the codified wisdom of human experience.

    Greg Freeman • 49:18

    That’s what will truly deliver the promise of autonomy and agentic workflows. Go back to your organizations and be the wisdom that drives the future. Start wherever you are in your journey, but start. Thank you for an incredible NAF. Now go out there and lead.

    Greg Freeman • 49:31

    Woo!

    Greg Freeman • 49:40

    Thank you very much. And so we have time for just a couple of questions. If anyone wants, the mics are open.

    Speaker 4 • 49:53

    Good, John. Greg, I just made a bit of an observation followed by a question. Earlier this year we heard Jensen talking about how the future of the IT department is going to be an HR department for agents. That seems to be reflected in your supervisor hierarchy. Can you comment on maybe the similarities and the differences between biological beings and agents in terms of, when you onboard a new agent, does it have to get to know its colleague agents and how to speak to the supervisor? Do you maybe have some insight?

    Greg Freeman • 50:27

    Yeah, no, I appreciate that. Yeah, that’s likely how things are going to go. So if you think about that AI Greg persona, think of that as a virtual agent. And so there is going to be a time where there will be a hierarchy where it may be a human person reporting to a manager and a virtual person reporting to a manager. That is likely going to happen. And so for the supervisor agent, one of the big things that we’ve been working on right now: the supervisor knows about all the different agents, but we haven’t done as much with having the different agents know about each other. And we’ve streamlined that to help avoid some of the confusion for some of the tool sets.

    Greg Freeman • 51:05

    So, very good. Next question. Hello, Marco Martinez. I was wondering if you do any cost-benefit analysis on AI, because AI needs compute, AI needs engineering, and a normal task would not be that hard to engineer in some cases, right? Yeah. Yeah, thank you.

    Greg Freeman • 51:22

    That’s a good call out. So right now we want to be innovative. And so on our initial lab trials, we’ve burned 600 million tokens, and we put a dollar value to those tokens. And so if we already have a workflow, it goes back to that cost analysis: I’m gonna use the workflow. AI is a tool, and so where I can trade speed for a new one, yeah, I may put AI in there.

    Greg Freeman • 51:48

    But right now I’m not gonna retrofit anything I’ve got because, to your point, it’s different. And a new tune came out just a week ago, brand new, that helps us lower our dollars for tokens. So the cost is changing greatly. And yeah, some people can see the quote on the screen there. I didn’t reference it, but: once a new technology rolls over you, if you’re not part of, or driving, the steamroller, you’re part of the road. So that’s how AI is, and there is definitely a cost there. Last question, and we’ll wrap this session.

    Greg Freeman • 52:23

    Hey, good morning. I’m Matt Campbell with Blue Origin. I was just curious: with you initially setting down this path of saying we’re gonna pivot and train our engineers to kind of pick up these concepts, I’m sure that there were a lot of individuals saying, I want to learn this, I want to learn that. Were you guys able to take what they’ve learned and distill it into any type of internal training system that allows people to teach people within the company? Or kind of how did that look, going from external sources to internal sources? Yeah, no, I appreciate that.

    Greg Freeman • 52:51

    That’s a great question. So we did what we call a federated approach. And we wanted to rotate engineers every month. We wanted the next cohort of engineers. And so when I mentioned we had those first followers, the reason we did that is we wanted to influence and distill that information throughout all of the various teams. So yes, there was a set of documents that we put together, a set of training, and then we just continued to roll them through each month. And then for external things like this, that was part of that cultural benefit.

    Greg Freeman • 53:20

    You want to invest in that, I’m going to fund that. That’s an investment. That’s how we thought about that. And with that, again, thank you very much. Scott, back to you.

Ready to go deeper on Lumen’s journey?

Explore the complete customer story and see how Lumen built a disciplined path to safe autonomy.