AI & AIOps

Building Trust in Non-Deterministic Systems: A Framework for Responsible AI Operations

Kristen H. Rachels

Chief Marketing Officer – Itential

November 13, 2025

Every infrastructure leader today is under pressure to move faster – automate more, deliver instantly, and now, infuse AI everywhere. The tools are powerful. The opportunities are real. But the question that keeps surfacing in every customer conversation is simpler and far more human: can we trust it?

The New Trust Equation

AI introduces a different kind of uncertainty. Traditional automation is deterministic; it executes the same action every time in exactly the same way. AI, by design, is non-deterministic. It learns, adapts, and generates outcomes that may not always be predictable.

For infrastructure and network operations teams, that unpredictability is both exciting and unnerving. Automating repetitive work is one thing; allowing AI to take action inside live production environments is another.

That’s why at Itential, we believe the path to responsible AI operations starts with orchestration – the connective tissue that provides context, control, and confidence.

Why Determinism Still Matters

Before you can safely deploy AI across your operations, you need a foundation of deterministic systems – workflows that are consistent, auditable, and explainable. These systems serve as the “governed surface area” on which AI can operate.

We often describe this balance as “AI on rails.” The AI can observe, analyze, and even decide, but orchestration ensures that every action follows a verified, measurable path. This isn’t about slowing innovation; it’s about accelerating it responsibly.

One of our customers, Lumen Technologies, demonstrates this balance clearly. Over five years, their team built more than 350 orchestrations that now run millions of times each year. Those workflows form the deterministic backbone that makes their adoption of agentic AI both safe and effective.

Their lesson was simple but profound: AI isn’t here to replace deterministic systems; it depends on them.

The Role of Orchestration in AI Trust

At its core, orchestration defines how automations, data, and AI interact. It answers critical questions:

  • Which systems can AI access?
  • What actions are allowed without human review?
  • When must approvals or rollbacks be triggered?

This governance is what builds trust. When every AI action passes through a consistent orchestration framework, you gain the ability to measure, trace, and control it – the same standards operations teams already apply to automation.
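The governance questions above can be sketched in code. The following is a minimal, illustrative policy gate, not an Itential API: every name here (Policy, evaluate, the system and action strings) is a hypothetical stand-in. The idea it shows is the one described above: an AI-proposed action is checked against an explicit policy before it runs, and every decision is recorded for audit.

```python
# Hypothetical sketch of an orchestration guardrail: every AI-proposed
# action passes through an explicit policy check before execution.
# All names are illustrative, not a real Itential interface.
from dataclasses import dataclass, field


@dataclass
class Policy:
    allowed_systems: set          # which systems the AI may access at all
    auto_approved_actions: set    # actions allowed without human review
    audit_log: list = field(default_factory=list)

    def evaluate(self, system: str, action: str) -> str:
        """Return 'execute', 'needs_approval', or 'deny', and log the decision."""
        if system not in self.allowed_systems:
            decision = "deny"              # outside the governed surface area
        elif action in self.auto_approved_actions:
            decision = "execute"           # pre-approved, runs on rails
        else:
            decision = "needs_approval"    # triggers a human approval step
        self.audit_log.append((system, action, decision))
        return decision


policy = Policy(
    allowed_systems={"lab-router", "staging-firewall"},
    auto_approved_actions={"read_config", "validate_config"},
)

print(policy.evaluate("lab-router", "read_config"))   # read is pre-approved
print(policy.evaluate("lab-router", "push_config"))   # a write needs a human
print(policy.evaluate("core-router", "read_config"))  # ungoverned system
```

Because each decision lands in the audit log, the same gate that controls actions also makes them measurable and traceable, which is the point of the framework described above.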

That’s why we see orchestration not just as an enabler of automation but as the trust framework for AI.

Building the Human Layer

Even as technology evolves, trust will always be human. The most forward-thinking organizations are investing as much in people as they are in platforms. They’re reskilling engineers into NetDevOps roles, giving them the tools to design and manage orchestrations that interface with AI.

These engineers are no longer writing endless scripts; they’re architecting intelligent systems that learn from human logic. It’s a shift from “doers” to “designers” – from manual troubleshooting to continuous improvement.

This cultural transformation is what allows organizations to innovate confidently instead of cautiously.

Lessons from the Field

Across our customer base, three consistent patterns emerge among teams that are successfully operationalizing AI:

  1. They treat orchestration as a product, not a project. It has owners, roadmaps, and lifecycle management, ensuring continuous improvement and visibility.
  2. They define their AI governance model early. What data AI can access, how actions are approved, and how outputs are validated are decided before deployment.
  3. They measure trust, not just performance. They track accuracy, stability, and operator confidence – metrics that matter as much as uptime and latency.
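The third pattern, measuring trust, can be made concrete with a small sketch. The function and field names below are assumptions for illustration, not a published Itential metric set; they simply show how AI-action outcomes could be tracked alongside uptime and latency.

```python
# Illustrative sketch of "measuring trust": alongside uptime and latency,
# track how often AI-recommended actions were correct and how often they
# had to be rolled back. Field names are hypothetical.
def trust_metrics(outcomes):
    """outcomes: list of dicts like {"correct": bool, "rolled_back": bool}."""
    total = len(outcomes)
    if total == 0:
        return {"accuracy": None, "rollback_rate": None}
    accuracy = sum(o["correct"] for o in outcomes) / total
    rollback_rate = sum(o["rolled_back"] for o in outcomes) / total
    return {"accuracy": accuracy, "rollback_rate": rollback_rate}


history = [
    {"correct": True,  "rolled_back": False},
    {"correct": True,  "rolled_back": False},
    {"correct": False, "rolled_back": True},
    {"correct": True,  "rolled_back": False},
]
print(trust_metrics(history))  # {'accuracy': 0.75, 'rollback_rate': 0.25}
```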

Customer Spotlight: Lumen Technologies

Few companies illustrate this evolution more clearly than Lumen Technologies. As one of the most connected networks in the world, Lumen faced a familiar challenge – scaling operations faster than human capacity.

Under the leadership of Greg Freeman, Vice President of Network & Customer Transformation, Lumen made the decision to “invert the pyramid,” aiming for 80% of network touches to be machine-to-machine. The team invested in deterministic workflows, reskilled engineers, and built an orchestration model that connected automation, data, and now AI.

Today, those 350+ orchestrations run millions of times each year – adding, removing, and validating configurations, rebooting hardware, and communicating with customers automatically. AI acts as a feedback loop within that framework, helping to predict issues, recommend actions, and even execute them under governed conditions.

As Greg shared during Selector’s AI Summit for Network Leaders, Lumen’s success wasn’t about chasing AI for its own sake. It was about building the operational maturity to support it.

We use AI’s non-deterministic power to launch deterministic workflows engineered by humans – because those workflows are rock solid.

– Greg Freeman, Vice President of Network & Customer Transformation, Lumen

That mindset – AI extending trusted orchestration rather than replacing it – is exactly what defines responsible adoption.

The Responsible Path Forward

AI will continue to reshape infrastructure, but the organizations that succeed won’t be those that automate the fastest. They’ll be the ones that automate the smartest – with orchestration at the center.

At Itential, we see orchestration as the mechanism that turns AI potential into business value. It ensures that innovation doesn’t come at the expense of reliability, and that every automated action – whether triggered by a script or an AI agent – aligns with operational intent.

AI may be non-deterministic, but your outcomes don’t have to be.

If you’re building the next phase of automation in your organization, start by building the framework of trust that makes AI safe, scalable, and explainable.

Explore More from Lumen’s Journey with Itential

Watch below how Lumen applied this approach to evolve from automated workflows to agentic operations – and what that means for the future of intelligent infrastructure.

Heading to AutoCon 4? Make sure to catch Greg’s closing keynote and stop by the Itential booth to see how we’re helping enterprises safely scale AI-powered operations.

Read the full Lumen customer story →

Kristen H. Rachels

Chief Marketing Officer – Itential

Kristen serves as Chief Marketing Officer for Itential, leading their go-to-market strategy and execution to accelerate the adoption and expansion of the company’s products and services.
