What Lumen’s Network Automation Journey Reveals About Building Safe, Scalable AI Operations

Karan Munalingal

SVP of AI Strategy & Innovation ‐ Itential

January 22, 2026
You Don’t Leap to Autonomy. You Earn It.

For the last few years, the industry conversation around network operations has been dominated by a single idea: autonomy. Agentic AI. Closed-loop operations. Self-healing networks. The promise is compelling, and the pressure to move fast is real.

But many organizations are starting in the wrong place.

They are asking how to add AI to their operations before they can clearly answer a more important question: what would they actually trust to run without human oversight today?

Autonomy is not a technology milestone you deploy. It is an operational outcome you earn. And the path to it is shaped less by algorithms than by discipline, governance, and proof.

Lumen’s network automation journey offers a rare, real-world example of what it looks like to take that path deliberately and successfully.

Autonomy Is Earned, Not Deployed

At its core, autonomy depends on trust. Trust that systems behave predictably. Trust that actions are reversible. Trust that failures are visible, explainable, and contained. No amount of AI sophistication can compensate for an environment where basic operational controls are missing.

Lumen understood this early. Rather than treating autonomy as a switch to flip, the team treated it as a spectrum. Every step toward reducing human involvement had to be justified with evidence. Every automated action had to be observable, auditable, and safe before it could ever be considered for headless execution.

This mindset shaped everything that followed. The result was not a rush to AI, but a steady progression from manual work to orchestrated automation, and eventually to AI-assisted autonomy that operates within clearly defined guardrails.

Starting From Reality, Not Greenfield

Lumen did not begin this journey from a clean slate. The company operates one of the most interconnected networks in the world, built over time through mergers, acquisitions, and organic growth. The infrastructure spans IP and optical domains, legacy and modern systems, and a wide range of vendors and interfaces.

Like many large operators, Lumen already had automation, but it lived in fragments. Scripts written to solve local pain points. Cron jobs and one-off tools that worked until something changed. Manual change processes that created bottlenecks and slowed delivery.

At Lumen’s scale, those limitations carried real risk. A small mistake could have an outsized blast radius. That reality made one thing clear: any move toward autonomy had to start with control, not speed.

From Scripts to Orchestrated Automation

The first major shift was architectural, not algorithmic.

Lumen moved away from treating automation as a collection of scripts and began treating it as a platform. Workflows were decomposed into atomic actions. Those actions were then composed into orchestrations that included pre-checks, post-checks, rollback logic, and full auditability by design.

Crucially, these orchestrations were built as if no human would be present. They were expected to validate state, detect failure conditions, and unwind safely on their own. Repetition and consistency became the foundation of trust.

This step is often underestimated. But without this level of rigor, autonomy is impossible. You cannot safely remove humans from the loop if the system itself cannot explain what it did, why it did it, and what happened as a result.
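The pattern described above, atomic actions composed into orchestrations with pre-checks, post-checks, rollback logic, and auditability by design, can be sketched in a few lines. This is an illustrative sketch only; the class and field names are hypothetical, not Lumen's or Itential's actual APIs.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Action:
    """One atomic, reversible unit of work."""
    name: str
    pre_check: Callable[[], bool]    # validate state before acting
    execute: Callable[[], None]      # the change itself
    post_check: Callable[[], bool]   # validate state after acting
    rollback: Callable[[], None]     # unwind the change safely

@dataclass
class Workflow:
    """Composes actions so they can run as if no human were present."""
    actions: List[Action]
    audit_log: List[str] = field(default_factory=list)

    def run(self) -> bool:
        done: List[Action] = []
        for action in self.actions:
            if not action.pre_check():
                self.audit_log.append(f"{action.name}: pre-check failed, aborting")
                self._unwind(done)
                return False
            action.execute()
            done.append(action)
            self.audit_log.append(f"{action.name}: executed")
            if not action.post_check():
                self.audit_log.append(f"{action.name}: post-check failed, rolling back")
                self._unwind(done)
                return False
        return True

    def _unwind(self, done: List[Action]) -> None:
        # Roll back completed actions in reverse order, logging each step,
        # so the workflow can explain what it did and what happened as a result.
        for action in reversed(done):
            action.rollback()
            self.audit_log.append(f"{action.name}: rolled back")
```

The key design choice is that failure handling and the audit trail live in the orchestration itself, not in the operator's head.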

Clear Swim Lanes Between Thinking and Doing

As automation scaled, Lumen made another deliberate design choice that would prove essential: separating decision-making from execution.

Observability and AI systems were responsible for detection, correlation, and confidence. They became very good at answering the question “what is happening, and how sure are we?” Orchestration systems, on the other hand, were responsible for executing actions safely, under policy, with full governance.

In simple terms, Lumen made the AI the brain and orchestration the hands.

This separation reduced tool overlap, clarified accountability, and created a clean control plane for automation. It also made closed-loop operations possible without sacrificing safety. AI could propose or trigger actions, but execution always flowed through a governed path.
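The brain/hands split can be made concrete with a minimal sketch: the detection side only emits proposals with a confidence score, and the orchestration side decides, under policy, whether a proposal may execute. All names and thresholds here are hypothetical, chosen only to illustrate the separation.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    """What the AI 'brain' produces: a suggested action, never an execution."""
    action: str          # e.g. "bounce_interface" (illustrative)
    target: str          # device or port identifier
    confidence: float    # how sure the detection side is, 0.0 to 1.0

class Orchestrator:
    """The 'hands': every proposal flows through this governed path."""

    def __init__(self, min_confidence: float, approved_actions: set):
        self.min_confidence = min_confidence
        # Actions that have earned the right to run headless.
        self.approved_actions = approved_actions

    def handle(self, proposal: Proposal) -> str:
        # The AI never executes directly; policy decides.
        if proposal.action not in self.approved_actions:
            return "queued_for_human_approval"
        if proposal.confidence < self.min_confidence:
            return "queued_for_human_approval"
        return "executed"
```

In use, a high-confidence proposal for a pre-approved action executes, while anything outside policy is routed to a human, regardless of how confident the model is.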

As autonomy increases, this distinction becomes more important, not less.

Measurement as the Gatekeeper of Trust

Trust at Lumen was never based on confidence or intuition. It was based on data.

Every workflow was treated as a unit of value. The team tracked how often it ran, how often it succeeded, and what tangible impact it had on the business. Two metrics mattered most: operational expense saved and value created through improved reliability and risk reduction.

Reliability was treated as a prerequisite for speed. Workflows that demonstrated consistent, error-free behavior over time earned reduced human oversight. Those that touched revenue or customer traffic retained approvals until the evidence justified a change.

This approach removed emotion from autonomy decisions. The data decided when something was ready to run on its own.
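That gating logic is simple enough to sketch: a workflow earns reduced oversight only after enough runs with a high enough success rate. The thresholds below are illustrative assumptions, not Lumen's actual policy.

```python
def oversight_level(run_results: list,
                    min_runs: int = 50,
                    min_success_rate: float = 0.999) -> str:
    """Decide how much human oversight a workflow still requires.

    run_results: one boolean per historical execution (True = success).
    Thresholds are hypothetical; the point is that data, not intuition,
    decides when something is ready to run on its own.
    """
    if len(run_results) < min_runs:
        return "human_approval_required"   # not enough evidence yet
    success_rate = sum(run_results) / len(run_results)
    if success_rate < min_success_rate:
        return "human_approval_required"   # the data says it is not ready
    return "autonomous"                    # trust earned through evidence
```

A workflow touching revenue or customer traffic would simply carry stricter thresholds, keeping approvals in place until the evidence justifies a change.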

Introducing AI, Carefully and Intentionally

Only after this foundation was in place did AI become a meaningful accelerator.

At Lumen, AI helped reduce noise, collapse millions of raw alerts into actionable signals, and provide explainable context for why a remediation should occur. It shortened the loop between detection and action, but it did not bypass governance.

AI proposed actions. Orchestration enforced policy.

Human approvals remained where the blast radius warranted them. Over time, as workflows proved safe in specific contexts, autonomy expanded. The loop closed gradually, backed by evidence rather than optimism.

This is what AI-assisted autonomy looks like in practice. Faster decisions, fewer manual steps, and tighter feedback loops, without surrendering control.

Baselines & Culture Go Hand in Hand

During a recent webinar with William Collins, we also discussed the relationship between ROI and culture. Baselines reinforce healthy culture by providing clarity and fostering accountability. They encourage transparency and help the team stay aligned with the larger goals of the organization.

Teams that understand their baselines make more informed decisions. They work more collaboratively. They have clearer conversations with stakeholders. They build stronger trust. They also create a culture that values data-driven improvement rather than intuition alone.

The Broader Lesson

Lumen’s journey is not a template to copy line by line. But the principles are repeatable.

Culture comes before tooling. Discipline comes before speed. Orchestration comes before autonomy. Measurement comes before trust. AI accelerates what is already well designed, but it cannot fix what is fundamentally fragile.

Autonomy is not about removing humans from operations. It is about freeing them to focus on higher-value work, while machines handle the repeatable, well-understood tasks they have earned the right to perform.

You do not leap to autonomy. You earn it.

Learn More

To dive deeper into Lumen’s full network automation journey, including architecture, metrics, and lessons learned, read the complete customer story here.

You can also hear directly from me, Selector, and the Lumen team behind this transformation by registering for the upcoming webinar:

Close the Loop: Lumen’s Journey to Safe Autonomous Network Operations
Register here →

Both go deeper into how disciplined orchestration, measurable outcomes, and explainable AI came together to create a safe path to autonomy.

Karan Munalingal

SVP of AI Strategy & Innovation ‐ Itential

Karan Munalingal is the SVP of AI Strategy & Innovation at Itential. Previously, Karan ran systems engineering at Ciena, focusing on Carrier Ethernet and core switching platforms. At Itential, Karan drives AI strategy, enabling global customers to adopt AI-driven automation journeys that modernize and scale network and infrastructure operations.