AI is everywhere: your feed, your tools, your inbox. Everyone’s “AI-powered” now. But if you’ve ever let a model “help” with configs, you know the gap between hype and reality is about the size of a /8. So how do we move from flashy demos to something that ships, scales, and doesn’t melt your production network?
In this episode of Network Automagic, hosts Steinn Bjarnarson and Urs Baumann sit down with automation legend Peter Sprygada, Chief Architect at Itential, to cut through the noise and talk about what AI in networking really looks like. Peter’s built automation platforms from the ground up and watched the AI wave crash into NetOps.
His take? Most people are building MCP completely wrong – and making the same mistakes automation teams did a decade ago.
Expect laughs, some spicy takes on vendor shortcuts, and a blueprint for building the AI stack you’ll actually trust in production.
Why You Should Listen
- Hear Peter’s candid take on why most MCP implementations are broken – and how to fix them.
- Learn how to design MCP tools that expose safe, meaningful operations, not entire APIs.
- Understand why determinism isn’t boring – it’s what makes AI useful and safe.
- Get practical examples of reasoning + governance in action for compliance and drift detection.
- Walk away with a mental framework for building AI integrations that your ops team won’t roll their eyes at.
Catch The Full Conversation
Everyone’s doing MCP wrong. If all you’re doing is turning a REST API into an MCP server, you’ve completely missed the point.
— Peter Sprygada, Chief Architect at Itential
What You'll Hear
(So you can skip ahead, if you want.)
00:00 – Cold open and vibe check
00:49 – Guest intro and early automation mindset
04:30 – Shift to AI, hype vs. reality, and the missing question of “should”
12:08 – MCP lightbulb moment and why it changed his view
15:00 – Tools over raw endpoints: what MCP actually exposes
18:20 – Vendor pitfalls and auto-generated chaos
22:00 – Determinism vs. reasoning in the new automation stack
27:00 – Safe use cases: drift, compliance, and troubleshooting
34:00 – Why AI won’t replace engineers — yet
45:00 – Peter's bold takes on the future of AI in networking
Transcript
0:06
Well, hey there listeners, welcome to episode 6 of Network Automagic. We have a special guest for you today. Peter insert… I’m gonna play the American and not be able to pronounce his last name. So, Peter Sprygad. Peter Skepragad? Peter Sprigber.
0:25
Peter Spaghetti is what I’m gonna call him until he’s gonna say his own name and then I’m gonna learn how to say it. Because, you know, nobody can say my last name and he should suffer the same fate as me. And oops, I mean, Broman? Bruderman? Nobody knows. So, Peter, welcome to Network Automagic with VNERS. Would you tell the people a little bit about yourself that don’t know you?
0:46
Yeah, absolutely. Thanks for having me. So let's start with my name, because obviously that's what everyone's going to be curious about. So it is pronounced Sprygada. And no, it's not Italian though. See, that's the thing.
0:58
Everyone thinks it’s Italian. It’s actually, it’s a Polish last name. It’s changed spelling. The proper pronunciation actually is Srigata.
1:05
So it has to be Polish, so it has to be a little depressed because it’s surrounded by enemies throughout the last three years. So it becomes like Sprigato.
1:15
There you go. There you go. So, but yeah, no, it’s great to be here. Good lord, I’ve been doing this for far longer than I care to admit. You can tell by the gray hair. I’m only 26. In all seriousness, you know, I’ve been doing network automation since before we really called it network automation.
1:38
It was just, you know, I'm lazy and I need to write scripts to do things on the box because, you know, after you've configured your 436,000th interface, you realize, hey, there's got to be a better way to do it than you typing that, period. But yeah, I'm a traditional network engineer by trade. That's where I really got my start. I go all the way back to the ATM, Frame Relay days, believe it or not. And came up through a lot of the service provider technology track. I did core IP and MPLS and pseudowire and all the good fun stuff. Went data center for a while, was with Arista.
2:10
And then I said, you know what? I want to do something new. So I joined this tiny little startup. I was the seventh software engineer to join the company. It was a little company called Ansible. And yes, as I’ve told many, many people, if you hate Ansible, especially if you hate Ansible networking, I am the person you can blame for that. It’s okay.
2:26
I got big, broad shoulders. I can take it. It's all right. But yeah, so I was with Ansible for quite a while. Went through Red Hat and IBM and said, you know, I want to go back into the startup world. IBM just wasn't really doing it for me. So I jumped over to Itential and now I serve as the chief architect at Itential.
2:44
That’s a beautiful summary, dude.
2:46
Like, you know, it’s way more than you ever wanted to know about me.
2:49
Oh, no, I’m sure. I’m sure somebody cares. I mean, I bet your mom’s listening. I’m going to send her the episode. Don’t worry, bro. So, the question is, of course, we could make fun of him for a lot of things. We could go, hey, Grandpa, tell us about cell sizes.
3:02
Like, oh, what, 33% utilization on the frame really? Huh? Wow, wow. Bet you guys were happy back then. No, we’re not. We don’t do ageism on this show. What we’re going to make fun of him for is the Ansible part.
3:16
Well, now that it’s an IBM, it’s actually the best. Now that we have Terraform and Ansible, we get the best of both. So it’s terrible. So it’s terrible. Yeah. It’s terrible. It’s terrible.
3:27
It’s terrible. Terrible.
3:30
IBM, please. No, we would like your money as well. Please, I hear you’re handing out money. I would also like money. We’ll say anything. We’ll do anything. We take that back.
3:38
We take that back. Please edit. Cut that. Cut that. Cut that. So, introductions are done. Me and Urs, we've been regular guests at your booth at AutoCon.
3:50
That’s probably where we met in person. We’re going to be in Texas, right? We’re going to Texas. Yay! Lone Star State. I am not going to make any Texas jokes because I need to clear customs and border patrol. So I have nothing but admiration for the Lone Star State.
4:10
I am looking forward to trying biscuit. That’s all I’m going to say about that. Love those guys, though. Yee-haw. I need to get a belt buckle. We’re going to get Urs. We’re going to get belt buckles.
4:20
There you go.
4:20
There you go.
4:21
I at least need a hat.
4:22
Yeah, we need that hat. I can’t afford that. So, what we wanted to talk about today, Urs, is.
4:30
AI, what else? Right? So nowadays, if we watch our LinkedIn feed, AI is the solution. And I'm still thinking about the question, but at least I know the solution. And I think in Prague, you announced the MCP, right? And I'm sure you had some very interesting learnings on the way. Yeah, that is why we were thinking this could be a fun episode, to talk to someone who was in the automation business before it was cool and now sees the transformation to AI.
5:08
And yeah, so. For people out there, what is your take on what is hype, what is real on AI?
5:16
Oh, man.
5:20
Mute the mic and walks away.
5:22
We just got the gate, right out of the gate. I love it. I love it. I mean, what isn’t hype with AI right now, right? Literally, I mean, you can attach anything to AI. For heaven’s sakes, I can attach, you know, feeding my dog to AI. It’s, it’s, you know, it’s just, it’s, it’s that crazy.
5:39
The, you know, I think the, and that’s the problem, right? Is you talk to individuals and they say, AI, I want AI. Give me AI. It’s like, okay, well, what do you really want? You know, and we can’t articulate it anymore than that. It’s like saying, I want a network. Okay, well, that’s a good start, but what kind of network do you want?
5:57
Do you want a wireless network? Do you want a data center network? Do you want an SD-WAN network? Do you want a frame relay network?
6:03
Do you want a secure network?
6:06
Exactly. So, you know, I think it’s all hype right now. And we’re just now starting to get to a place where we’re starting to realize what are the actual applications that can be done with AI, but we still haven’t answered the question of should it be done with AI, right? Right now, it’s a lot of, it’s can and not a lot of should. And that’s kind of where we’re at with it right now. So, you know, to really be able to separate the hype from reality, that’s anyone’s best guess. I think it really starts with the same problem that’s always suffered from an automation standpoint, right?
6:43
So I can boil this down for you. You know, I've been doing automation for a long time. You think about why people struggle with automation? Well, one of the first reasons why people struggle with automation is they can't concisely define what it is they're trying to do, right? They just do stuff. And I think we're seeing the same thing with AI, right?
6:59
We’re seeing the scenario where it’s not that the technology is inherently bad or that the technology doesn’t work or that doesn’t provide value. It’s what do you want to do with it? And if you can’t define that, no technology is going to save you. And it all sighted that way.
7:12
So, for me, it feels like the script kiddies we had when hacking was cool, right? So, like, the tools are around and everyone plays around with them in the lab, but I don't really see so much talk about what people are actually doing in production. So, I assume many people are coming now to Itential and they ask for AI readiness, or what are the criteria, or what are they asking for?
7:41
Yeah, so, excuse me, shameless plug here, but we actually do have customers that are doing some real things with AI. And it kind of goes back to that same point I was just driving at: this idea that if you can concisely define what it is you want to do, you can definitely achieve some benefit from AI. There is no question about it. You know, the problem is being able to actually define that and trying to get through that. But that being said, you know, we are starting to see some real use cases show up, mostly on the operations side of the house. I don't think there's anybody out there using AI to configure a device. I know I sure as hell am not about to use AI to configure.
8:22
We know some people who are doing that in the lab.
8:27
No, no, but we love them.
8:29
We embrace them. We wish we were that all-in on the bets, but there are people, like the new Cisco. Like, there was a lot of, I'm not going to, you know, I'm not going to be the guy sitting in Iceland going, I am the first word, but a lot of people in the developing world see someone like Cisco push out an LLM, for example, their DevNet one. There's like, there's an internal Cisco SKU one. I can embrace that. Love it, you know. If I'm going through the Cisco SKU sheet, if I could get like a pre-thing or some suggestions that I can follow up on, amazing, right? I have this many customers, which box might be okay.
9:04
Great use of an LLM chatbot, right? If you're in that role. Then there's the DevNet one where they're like, hey, Mr. Steinsy, I see you have a CCIE. Would you come over here and help us train it by going thumbs up, thumbs down on a bunch of input? And I'm like, no, thank you.
9:19
So you have a lot of people, though, that don't have the history with receiving bad products, especially juniors in Iceland and anywhere else. A lot of people in the learning world maybe don't get the same meta. We're very lucky. We get like a meta newspaper. When I say meta, I mean like, just: here's the lowdown, here's what the cool kids are doing. A newspaper that's not really being shared very well, especially in India, China, that sort of thing.
9:46
And they see an LLM. And I was talking to one of the juniors in one of those other Slack channels that we hang out in. Originally, he'd been using it for a few weeks and he'd had good results. And, you know, this guy was having good results, right? He wasn't doing anything crazy. It was mostly just, you know, putting in existing configs, getting explanations. Great use.
10:06
ISIS poor boarded. But when he started using it for changes, that's when things started to get a little weird. And the whole, should I be doing this? Instead of that, the AI goes, oh, this is a great idea, you should definitely reconfigure the core router to be your personal loopback ping test machine or whatever.
10:24
I could see it happening now. You're chatting with Feem, your favorite chatbot, right? And you say, hey, packets aren't flowing through this interface, fix it. And so the LLM reasons, sure, ip permit any any, done.
10:34
Done, right? Which was the first thing we did, like, in 2022 or 2023, you know, that Christmas. That was like the usual access list joke, right? People would test access lists and then they'd get out nonsense. They'd be like, oh, he forgot the la-la port. Ha ha, this will never replace my job role. And we're on this technology curve, right? Like every year, the VC money will flow in.
11:00
We will get better stuff until the VC money dries up. But until then, the bubble will keep flowing.
11:06
It will. It absolutely will. It absolutely will. But. Fun. It’s fun stuff.
11:12
Where’s your differentiation? Do you see a line in the sand between my friend in Asia and us highly sophisticated European fellows going?
11:25
Well, I think here’s the thing, right? And we started with this idea that AI, right? But AI is this big, broad, massive beast of a thing. If we start to break this down into its discrete parts, its discrete technology parts, and we start looking at the discrete technology part and say, where can we start to achieve value? Where can we start to embrace some of this technology and understand it such that we can start to develop value-added use cases? Because the bottom line is, if you’re not adding value, what’s the point of doing, right? Just to do it?
11:55
Well, I got, you know, I got plenty of other stuff to do right now. I don’t need to do it just to do it. I want to do it because there’s a reason to do it. And I think that, you know, late last year, late last year? Yeah, late last year with the announcement of MCP, it really was the, it was a game changer for me personally. I’ll be honest with you. You know, I didn’t buy into a lot of the AI hype.
12:15
In fact, I dismissed it for a very, very long time. And I’m like, you know what? To hell with this AI stuff. It’s never coming into the network domain. We are way too stodgy. We are way too conservative. We are way too distrusting of the world.
12:27
It’s never coming. And then when I saw what happened with MCP, it really changed a lot of my viewpoints. And I started to realize, hey, there’s something here. There’s something good here. And that’s really kind of what started my AI journey. I really started to look and see, you know, how can we really, really start to achieve some value now?
12:45
So, what happened with MCP? Like, for someone who's coming onto the bus right now, who's just been, you know, under a rock, right? Under a rock. What did MCP do? Like, I got REST APIs for my LLMs. Like, what did we get? Like, because.
13:06
Absolutely. Absolutely. So, what did MCP do? MCP did something extremely important and it created a standardized language, a protocol, as our audience would know. Most people won’t, but in our parlance, right, they created a protocol. It created a way for LLMs to speak to pretty much anything out there, whether it’s a file system, whether it’s an application, whether it’s a network device, right? We now have a standard language by which we can start to talk back and forth.
13:35
Okay, well, that’s correct. Fine. I’m still not ready to let my LLM go crazy and start configuring my router. I get it. I totally get it. But as we start to develop MCP tools, and MCP is, again, it’s a protocol. It’s a messaging system and it exposes three things: tools, resources, and prompts.
13:50
Never mind the last two because there's no use case for that right now. But for tooling, there is a real use case because now I can start to put some guardrails around this thing a little bit. I can start to expose tooling in a way that I can tell the LLM, hey, you can do these few things on me. You can't do everything, but you can do a couple of things, right? And that was really the light bulb moment for me: how we can start to leverage MCP. Here's the problem with REST interfaces.
14:19
There’s two big problems with Rust interfaces. Rust interfaces are great. I use them every day. Two big problems with them. One, you got to write copious amounts of code to do anything with them. Last time I checked, no one’s using curl to interface with an application for programming network devices. Can you do it?
14:34
Sure. Two, REST interfaces are all based on software-defined programmatic constructs, meaning that you got a whole bunch of objects out there that you tie together with different IDs. Well, what do humans not do very well? They don’t relate very well to IDs or identifiers, right? They relate to English names and English words. So with MCP, we now have the ability to abstract that in such a way that I can start to tie together all of these disseparate API endpoints so that they actually make sense in a tool. So it now produces a functional thing that adds value.
15:11
And that’s really the light bulb moment was for me for MC Pain.
15:15
All right. And, you know, just to throw a little wrench into that storyline. So, like, we've been seeing a couple of people come in with the, you know, now is the golden age of REST API plumbing, thanks to, you know, AI coding tools and that sort of thing, right? Being able to actually have, instead of 10 monkeys, one monkey, and hey, I'm one of those monkeys, churning out basically, you know, both handling the communication of the business process side, which is usually where things go wrong. And then if we were doing standard automation, usually the pre-existing business flow wasn't conducive to automation. Therefore, any automation attempt is going to need a reworking of a business process that had a lot of meat parts in between. Like you have to talk to John, and John isn't in, so Tim is going to do it differently this time.
16:05
Those types of things are all around in a lot of business organizations, especially when you start looking around. And that means that you can actually have one guy start to churn out a lot of code and spend most of his time working on the business processes stuff and finding the breakpoints. And I’ve been seeing some people in Germany, a couple of German people do it where they’re like, you know, they have one or two people just dedicated to a business process and then churning out basically high, very high-level descriptors, and that gets handed off to the LLM prompt monkeys and they just go next, next finish, and you end up with, you know, you get VLAN or whatever your process is, right? Do you see that as a challenge, as an interesting option?
16:49
So it is. It's, you know, but here's the, here's the key, right? And here's where I feel like everyone's going wrong with MCP. If all you're doing is applying the concept of a REST API to MCP, you've completely missed the point of MCP. And that's what everyone's doing. All right. I'm generalizing here, but here's what I see when I look out across the industry, right?
17:08
I see all of these vendors and all of these tools and all of these REST APIs out there, right? And everyone's taking that REST API and they're saying, hey, I just turned it into an MCP server, go do something with it. It's like, no, no, no, no, no, no, no. You missed the point. You missed the point if that's how you're thinking about MCP. MCP is designed so that you can actually put real use cases with guardrails in place so that your LLM can't go freaking crazy, right? And that's the whole point, right?
17:36
So if I can constrain my LLM to some degree and only give them certain things to do and don't give them, you know, 864 API endpoints that it's going to try and rationalize through and do all kinds of crazy crap with. If I can constrain that to two or three or four tools that maybe call 10 or 15 or 20 API endpoints on the back end, now I've got something that I can actually start to work with. And it starts to allow me to remove a lot of that very rigid and brittle orchestration of those API endpoints because now I can start to get some fairly common usage patterns out of natural language.
18:16
Which is excellent, but I'm going to cut off Urs here because he has Ansible pedigree. So we're going to draw an analogy here. I mean, we used to see some stuff happening in the Ansible space where people would, you know, vendor A would take a REST API spec, right? And they'd auto-generate their Ansible code, creating a one-for-one, not creating really any value in the Ansible module that they're presenting. We saw the same in Terraform, right? You know, they'll pump out a one-for-one, basically, this is the get VLAN. Then you'll have to call this one.
18:43
Instead of getting like a single thing that's been thought out, like, hey, this is the create VLAN thing. You just call that. Have a nice day. We got you, buddy. They make you call every one of the five, all five endpoints that you need to hit up. Right? I'm looking at you, Fortinet.
18:58
So. Again, this kind of echoes that where they just go, hey, we're just going to punch out an MCP thing, you know, basically from the REST API spec or whatever it is we're doing. And we're not really helping the end user consolidate and create the business workflows. It's just, here are the endpoints that we already had. We're not going to put any work into this. Here's the MCP. And of course, that's going to crash and burn, right?
19:25
We’re going to end up in the exact same place we did in all of those tools ecosystems with 642 million different ways of doing things and nothing that can figure out a sane way to put them together to actually add value. And so what do we end up doing is instead of reducing our work, we actually add to our workload.
19:43
So, if I understand you correctly, I cannot use the OpenAPI spec and just generate my MCP, right? So, I need to think about things.
19:55
But you have three operations, maybe, right? Like, if it’s a three-endpoint thing, it might be okay.
20:01
If we think about it like, the LLM tool thingy that I get from the MCP, it's more like a service catalog, right? And not like endpoints where I can configure resources. Is that correct?
20:19
Yeah, that’s, I mean, I think that’s a great way to think about it because you’re absolutely right. You know, can you take an open API spec and turn it in an MC server? Absolutely. Is that like the dumbest thing in the world to do? Absolutely. It’s exactly what you’re saying, right? It’s start thinking about what you want to do and then create MCP in support of that.
20:40
And yeah, this is, this is, God, this has plagued our industry for so long. I just, I could run my head into a wall, right? And this predates MCP, it predates Ansible. I mean, we could take this all the way back to YANG data models and everything. It's like, we just don't think about what it is we're trying to do. We just do sometimes. And I think it's, in a lot of respects, it's that CLI mentality we keep seeing manifest itself in software constructs.
21:02
It’s like, wait a minute. No, this is supposed to make our life better. It’s supposed to make our life easier. Now just give me another way to call 649 different ways to configure a VLAN.
21:12
Right. Which, you know, but we did, I mean, to be nice to everyone, we did get some. Like, we always come back to Uncle Russ White going, leaky abstractions, right? You know, with the YANG data model, okay, maybe we can abstract something. Oh, wait, the abstraction starts leaking immediately, and it turns out worse than the beginning element that we ended up with. Same thing with the Ansible module. Like, somebody has to perform the abstraction. Oh, wait, it doesn't work for that one user over there.
21:39
It works for 99% of users, but not that guy over there. Oh, let's start up the leaky abstraction machine, and then we end up breaking out of any effort to consolidate or abstract or, you know, get nice things. I like nice things. It seems like every time we try to get nice things, we end up in the same mess. And somebody starts again, I'm looking at you, Fortinet, tries to automate, you know, the generation of their code. Actually, everybody, a lot of them do it. Oh, God.
22:07
Cisco, oh, God, oh, God. So, like, in the Cisco ecosystem, it depends on which product you're on, like, which endpoint it's using. Oh, God, gRPC on this box, and they're using, you know, RESTCONF on this one. And you're going, like, why, why, why, God, make it stop. So, like, are we going to, three years from now, am I going to be going, oh, God, oh, God, with, let's pick on someone else. Let's pick on Juniper.
22:31
With the Juniper MCP model that's, you know, 4,000 question endpoints, and it keeps selecting the wrong things. And I'm starting to scream in the chat, threatening to blow up its virtual kids, right? You know what I mean?
22:47
But you bring up a good point because, you know, this has been a problem with the relationship between the community and the vendors, the networking vendors, for so very long, right? You know, the networking vendors for so long want to punt the operational issues to the end users, and the end users and the customers would be like, wait a minute, now I need help putting together a sane operational model. I would love to sit here and say, three years from now, this is all going to be cleared up because MCP is going to solve all these problems, but it’s not. I mean, I’ve been doing this for far too long. Gray hair. Far too long. I know the realities of it, but if we can make incremental improvements, I think that that’s a valid win.
23:26
The challenge is going to be, and you already see it even in the AI communities, right? MCP isn't the only communications protocol out there that allows you to attach something to an LLM, right? There are a number of other ones out there, and it's just like, here we go again, right? It's like, good heavens. But nevertheless, I think we have to try to do the best we can to make those incremental improvements where we can. And I think it starts with understanding, what we keep coming back to, really understanding what it is you're trying to do. I love people who just do, but sometimes just doers create an awful lot of work for the rest of us who are actually trying to do things the right way or to bring added value to what we do.
24:07
What do you mean? Telling people’s managers that, you know, this magic box over here solves everything and, you know, there will be no headaches. That creates no work and no pain inside the industry at all. We should all be hype men. It’s all right. It’s okay. But just to wind it back a little bit, I mean, in comparison to the other tooling that we got, that stuff was deterministic.
24:27
I put in X, I get X plus one. As soon as I have these LLM components, especially when they're connected to, these are products. These are not technologies. These are products that we're using most of the time, unless it's some in-house stuff, in which case you're able to maybe have a little bit more control of it. But let's be honest. I don't want to see the bill from most of the network AI startups for ChatGPT token bills. Okay.
24:50
I’m sure they’re very large. Everybody’s running some version of it under the hood because the underlying infrastructure for this stuff just, you know, it doesn’t scale as an individual. You have to go with the cloud giants. So. We did abandon, we’re abandoning determinism by jumping on this particular boat, right? Versus Ansible, Terraform, all these other tools that we can use in conjunction with the MCP and the LLMs and all this good stuff, right? But those are deterministic.
25:16
Do you see any problem? You can see why the old networking nerd in you goes like, oh, I like determinism. I want, you know, I want repeatability. Give.
25:26
For sure. For sure. Absolutely. And, you know, I think this is, you know, if I’m, if I’m completely honest, you know, this is this is the one area where, you know, my technology geekdom, you know, interfaces with my need to have a paycheck. And in that particular case, right, this is where, you know, I can take something like MCP and I can still give an LLM its ability to reason through and figure out what it thinks it needs to do, but I can still keep determinism in place. Because in my case, I’m building it on top of, you know, Itential and I’ve still got static defined workflows that are allowing me to keep some of that deterministic behavior in place. Right.
26:06
And that’s really kind of the big advantage that I’ve got and kind of how I’m building it out and looking at it. I realize that doesn’t work for everybody. But I think that’s where you can start to see some of the best of both worlds. And I think that that’s really how I see it continuing to unfold is we’re going to see this whole new layer in the operational stack that is going to create a, this is my deterministic side. This is my rationalization side. And putting those two together is ultimately what it’s going to take to be successful with this technology.
26:42
That kind of makes sense now. If we’re going to extract some free consulting out of you, and we’re going to allow our listeners to enjoy this. So, let’s say that you were stuck in Iceland, for example, and you had five, let’s say there’s five links, fiber links going from the island. Very simple, right? You know, you have five BGP peerings. You know, you get to ignore all the boxes and all that. You have five links.
27:03
Very simple scenario. There’s a maintenance window on one of these links. You have to drain it, right? Let’s say that the budding young engineer comes to you and says, Hey, there’s an alert email here that’s not standardized. We have two options. We can talk to the people and standardize the message that comes from the maintenance people, and then we can drain our links during the drain period, watching out for time zone differences because time zones are a thing, especially when you sit in the middle of the Atlantic. It’s fun like that.
27:32
We’re always going to do it three hours before the thing. That way, no time zone issues ever if anybody screws up. Great. You solved that issue. Wonderful. You didn’t write that in blood. Awesome.
27:42
Now you have these five links. You’re going to get an email and you need to schedule this change. Do you put the LLM in between and then have it like give the LLM little knobs to work on? You know, hey, drain this link and then, you know, make it impossible for him to drain all the links at the same time. What are your thoughts, Peter? How would you implement it?
28:04
I think, yeah, you’re hitting on the exact right tone here. And that is, you know, if we go back to this idea of a deterministic side and a side that is attempting to rationalize what needs to be done. On the deterministic side, you’re building your guide roads. You’re putting your guide roads in place in terms of a set of bespoke operations, whatever they might be, whether they’re itensual workflows or Ansible playbooks or Terraform plans or however you want to build your deterministic operations. And then you let your MCP server expose those so that your LLM can reason across them. But because ultimately everything collapses down to something that is deterministic. That’s where you can put your checks and balances in place and make sure that you don’t allow the LM to go off and do something very, very stupid, like say, oh, okay, I’ll just drain everything because why not?
28:52
Right. Or I’ll drain everything to one link because that always works really well. You know, and I think that that’s how you’ll start to see these types of patterns unfold. I will go back to one point is that kind of where we started in that I don’t foresee us using this technology for doing any type of configuration for at least, at least the next 12 to 24 months minimum. It’s just our culture is too inherently suspicious of tooling for good reasons. Thank you very much, Cisco and CiscoWorks, for allowing it to do things right against infrastructure. I think where we’ll continue to see this is going to be in a lot of the over-the-top type things we do from networking operations standpoint.
29:36
Anything from config compliance to drifts to observability to alarm remediation, et cetera. That’s where we’re going to see, I think, the big win. But here’s one I’ll throw out that is very interesting and something I’ve been playing with a little bit using AI technology in the troubleshooting domain. This one’s a lot of fun. One of the things, if you think about troubleshooting a very large network, right? And I’m thinking very large now. Let’s think massive OSPF network.
30:03
If you start to feed the OSPF database into an LLM, it’s fascinating how quickly not only can it start to rationalize what potentially might be wrong, but start to potentially give you thoughts on how to optimize your networking domain because it’s not looking at routers and interfaces and configs. It’s actually literally looking at the database constructs that make up the LSDB. And that becomes a very interesting exercise where you can start to see some interesting benefits from AI technology as you apply it to networking.
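For the troubleshooting idea, a read-only sketch might look like the following. It assumes Netmiko for pulling the LSDB and the OpenAI Python client for the analysis; the host, credentials, model name, and prompt wording are placeholders, and a real LSDB may need chunking to fit a context window.

```python
# Read-only sketch: pull the OSPF LSDB and ask a model for a first-pass analysis.
# Assumes Netmiko and the OpenAI Python client; host, credentials and model name
# are placeholders. Nothing here writes to the device.
from netmiko import ConnectHandler
from openai import OpenAI

def summarize_lsdb(host: str, username: str, password: str) -> str:
    conn = ConnectHandler(device_type="cisco_ios", host=host,
                          username=username, password=password)
    lsdb = conn.send_command("show ip ospf database")  # read-only command
    conn.disconnect()

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = (
        "You are assisting with OSPF troubleshooting. Given this LSDB output, "
        "list likely inconsistencies (missing adjacencies, asymmetric metrics, "
        "stub vs. transit mismatches) and possible optimizations. "
        "Offer observations only, no configuration changes.\n\n" + lsdb
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```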
30:33
That’s definitely a very strong, strong, yeah, that’s a really strong case for it. Bring us back to the island. Now, let’s say that we’ve implemented it and we got these five links, and two of them happen to be provider one, three of them happen to be provider two. Provider one schedules all their links for an outage, and that happens in a single MCP session. All good, it gets scheduled, we’re all right. And provider two schedules at the same time, and that’s a separate session. And we haven’t given the LLM access to the schedule.
31:02
We’ve given him the ability to schedule, but we haven’t given them the ability to read the scheduler. So he doesn’t see the other session that has been scheduled on the other two links. He only gets the knob that he knows he’s not going to drain both, all five links. He’s just going to drain two, and the other session was going to drain three. Can you see how these sorts of scenarios operationally? This is the barrel of snakes, you know, because we have like five million variations of these, you know, nightmare scenarios. Add VLAN, you know, X.
31:30
Insert "add VLAN X on the trunk port" adventures, right? Do you see how that scenario, like, is there a, so the quick fix might be, you know, hey, always have the LLM also look at the schedule, give him access to read the schedules and make sure there's not an overlap or something like that. But you can see how, with the initial, you know, rollout, you're gonna hit something like that. Where, you know, having a monkey, a human in the loop, of course, would protect you from something like this, versus a standalone session-based LLM. The human would know it because he has access to the schedule and would see it there. So if you give the same access to the LLM, in theory, we should be good, right?
32:09
Should be, should be. You can also build certain checks right into MCP to do some of those checks as well. Saying, you know, you know how to drain it, you should be able to drain it now, but you need to go check these 732 data points before you actually perform that task, right? And that's the other big thing when we at least use software to do this, right? Humans can't do that. They can't look at all of those potential permutations, all those potential variables.
32:34
One of the other interesting things too, as we think about MCP technology, is we can also make use of elicitation to help some of that as well. So in elicitation, it basically allows the MCP to start asking questions back to the user or the agent before it actually starts to do things. Right, so now it could be like, hey, you’re asking me to do this. I’ve checked all of this. I’ve got some concerns. Just kind of like this exact same scenario you just pointed out, but instead of a human doing that, now it’s actually potentially MCP doing that, saying, Hey, I got some concerns. Are you sure this is really okay?
33:07
And we can actually do that with MCP technology.
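A rough sketch of that elicitation pattern, independent of any particular SDK: the server runs its pre-checks and, if anything looks risky, pushes a question back to the requester before acting. ask_user(), run_prechecks(), and schedule_drain() are hypothetical stand-ins; the real elicitation call depends on the MCP SDK you use.

```python
# Sketch of the elicitation pattern: pre-check, then ask before acting.
# ask_user() stands in for the MCP elicitation primitive; the checks and the
# drain step are placeholders for your own deterministic logic.

def run_prechecks(link: str) -> list[str]:
    """Return human-readable concerns; an empty list means all clear."""
    concerns: list[str] = []
    # e.g. look at the maintenance calendar, utilization, other pending drains...
    return concerns

def ask_user(question: str) -> bool:
    """Stand-in for elicitation: surface a question back to the requester."""
    return input(f"{question} [y/N] ").strip().lower() == "y"

def schedule_drain(link: str, window: str) -> str:
    concerns = run_prechecks(link)
    if concerns and not ask_user(
        f"Draining {link} in window {window} raised concerns: {concerns}. Proceed anyway?"
    ):
        return f"Drain of {link} cancelled after review."
    # hand off to the deterministic workflow that actually drains the link
    return f"Drain of {link} scheduled for {window}."
```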
33:09
So, these security guardrails, I'm not sure if I would call them security, but these guardrails, would you put them on the MCP level, or would you do them on the orchestration level, or on both?
33:24
Oh, I would do that. I mean, ultimately, you know, ultimately, security is done in layers. So you're going to do it on both. But I think your primary gating point has got to be at your orchestration layer. It's got to be in your deterministic side of your implementation, right? That has to be the gateway to your infrastructure. If that's not the gateway to your infrastructure, you're going to blow up all over the place very, very quickly.
33:45
It has to be. And if it’s not, you’ve got real problems.
33:49
So, I mean, like the real thing that we, so we've created, basically, let's remove the MCP here and let's put in Bob. We take Bob, we put him in. We're already designing automations for Bob. We know how Bob works. We give Bob buttons. He presses buttons. We replace that with an MCP connection to an LLM.
34:06
So really, the only thing we've solved there is Bob's paycheck. I mean, that's like the trillion-dollar Silicon Valley venture fund thing. Like the problem they're trying to solve is wages at the end of the day, right? Like, let's just, you know, look around, look to your left, look to your right. The only thing, you know, the working class has going for them on this one is that technology is rife with bugs, troubles, and horrors. And that's the only thing you got going for you, kid. You've got $800 billion a year working against you, trying to get rid of you, whatever it is that you're doing.
34:42
In many domains, not just networking, of course. And of course, the robots are coming, right? We’re getting the robot arm. It’s going to be great, right? We’re going to have little, oh, God, this is awful. And we got the killer drones. It all comes back to military contracts and killer drones.
34:56
I’m sorry, everyone. Like, that’s why this stuff gets built. It’s killer drones. So, optimistic side, Pete, like, what are you excited about, Peter? Like, what gets you going about this stuff?
35:08
Man, what am I excited about? So much I’m excited about. I’m so excited about the fact that, you know, the one thing we can start to do is we can start to take a lot of these mundane operations through this technology. We can turn it over to software and agents and say, you know what, just go do this for me. Really stupid, really stupid example, right? But it’s still something that everybody, not everybody, so many organizations struggle with backing up the damn config from the box, right? I could turn that over to an agent and say, go back up the config every 30 minutes or whatever the case is, right?
35:40
And I don’t have to worry about it anymore. It just happens. And I didn’t have to, you know, write a bunch of code to do that. So, you know, it really gives us the ability to start to turn over a lot of this mundane operation. The other thing I’m really excited about is I think there’s another untapped potential here that we haven’t really started to explore yet. And that is we keep talking about MCP and AI technology in the context of working with an operational network, as we should, right? That’s what we have to do on our day-to-day.
36:09
What we haven’t started to do is start to explore how do we use the same technology to start to influence how we design and build networks, never mind operating them, right? How do we start to use a lot of this technology so that we can design better networks, design more efficient networks, design easier to manage networks, and then use that technology to help us flow into how we ultimately deploy them? And that’s a very big, untapped area that I’m really excited to continue to explore. And then, of course, the last part was, you know, this idea around the troubleshooting of I want to feed my Link State database. I want to feed in all my BGP communities and attributes because, you know, no companies let their BGP communities get out of control at all, right? Everyone’s got a very insane, never, not in the least.
36:55
It’s the internet. It’s the internet. It’s illegal to do bad stuff on the internet, dude. I mean, that would be illegal.
37:03
So, these are all areas that have me super excited about how AI technology I think can really influence what we do in networking.
37:11
All right. So, if you zoom out right now and we take the example of the box config backup, right? And we don't even think about like, oh, the vendors will finally design software that doesn't have bugs or, whatever, security exploits. And we're living in la-la land and everything's happy. We just have these old boxes over here. There's a lot of them in the world. We're not going to create e-waste.
37:30
We’re going to unleash the LLMs and we’re going to have like processes for software updates or backups or whatever. Like, you know, it’s the base use case usually. It’s like we want to solve backups, backups, and upgrades. That’s what you know, backups first, and then they do the upgrades, right? And then they start figuring out that, you know, this network can only access with this loopback, and then they create something, something, something, something happens, right? And the story is probably you’ve seen the story a thousand times from your part of the world. And then there’s, you know, there’s other vendors who sell like, you know, pre-made box solutions, like the guys back box.
38:02
And there’s a lot of competition on that, those particular topics. And then it starts to expand to do the rest of the organization. Oh, I want to be able to do ordering and pre-process, you know, like connecting this to the service now. ServiceNow pops its head on and goes, Hey, I heard you want to do some automation. I have tables and stuff. Let’s go, kids, right? And it just keeps getting bigger and bigger.
38:23
But if we’re on the curve right now, you know, let’s say that we’re, you know, not hit the top of the trough of disillusionment and it’s going to keep going, right? I mean, the moment where we can just unleash the agentic mega bot that, you know, you just, hey, go hit up all the boxes and do the needful, right? And it hits up all the boxes. And the cost of doing that is less than, you know, burning 40 tons of oil that’s needed for the data center to actually execute that. I mean, at some point, it’s like, well, we kind of just have the bot interacting directly with like he has SSH keys and he goes through this proxy and he does his thing. He’s a good guy, you know, and he sends a Slack message at the end of the week. Like, he’s a good guy.
39:04
I mean, there is a scenario where things might become stable enough and consistent enough to have that be the solution, perhaps.
39:16
Yeah, you know, I’ve perhaps, sure. I remember the same talk track around, you know, controller-based infrastructures. I remember the same talk track around, oh gosh, yeah, the list goes on and on. Yeah, we’ll get there, sure. We’ll get there.
39:34
Hopefully, I mean, maybe, hopefully. But do you want, then again, that's the company that said, hey, determinism, you know, we're going to leave you, abandon you at the fire station like, you know, an unwanted red-headed child. We're just going to leave you there and we're going to embrace whatever happens. And we've already seen all the horror stories, right? And people who went in really early, like on the coding front, right? You know, startups who lose databases, that sort of nonsense, right? But then again, with a tight enough TACACS control list and he can't reload the box, I mean, what is he going to really do?
40:09
I mean, Cisco archive config on the old IOS 15, right? And IOS 12, they have an archive command. It'll kind of roll back most of the time, right? Kind of, sort of. Depends on what you do, but you know, it can help bring back the link that you turned off, right? So, you know, maybe if the bot, if the LLM knows about it and is able to take care of it. But the problem is, of course, you can't run your own LLM. You're going to be using a product from someone who's going to try and save money on the token count, right?
40:38
And we see this, I mean, we can't really do the hot topic stuff, but there's been some unfortunate incidents with people using LLMs and they've done some very unfortunate things, especially to themselves recently. And we can see the prompt injections that are happening on the product side, Claude, ChatGPT. I mean, it feels like my calculator is asking me to have a mental wellness day every time I'm trying to prompt it in the last three days. It's been, oh, wow, these are really big numbers. Do you really need to multiply them together? Why don't you go for a walk? And it's like, what are you doing? No, I want you to multiply them together, you piece of.
41:13
So, I mean, we have that evolving thing where they keep adding more pre-prompts to these products, right? It doesn't matter if you use the, you know, I want to create a biological weapon. No, no, no, you can't do that. Okay, I'm okay with that guardrail. But when every other prompt is like me getting a suicide hotline number, and I've gotten two suicide hotline numbers, you know, wanting to kill a process. I wanted to kill a process, and it started to talk to me about a suicide hotline. This happened. That's insane, right?
41:44
It is. I mean, you’re absolutely, you’re absolutely correct. And, you know, it really starts to bleed into this idea. And it just drives me absolutely crazy when I hear people talking about how LLM technology is going to evolve to a place. Sam Altman just said this, right? About how, you know, LLMs are going to be smarter than him in a year. And it’s like, my God, I sure as hell hope not.
42:06
You know, the reality is that LLMs are still token-matching systems at the end of the day. They really are. But, you know, when you start thinking about this world of bots and agents and kind of what you were just going through, what's terrifying to me, and I'm going to kind of turn this a different way, but what terrifies me is: what happens when all these old farts like me say, you know what? I'm done with this industry. I'm going to go sit on the beach for the rest of my life, right? And you've got this whole generation of people coming up that have no clue how to actually configure or, God forbid, troubleshoot a network outage, and the bot can't figure it out or the agent can't figure it out.
42:48
It’s like, what are you going to turn to? You know, the reality is, there are going to be outages. We know this and we know this empirically. There’s going to be outages on the network that you just can’t explain. We’ve seen it in the past. We’ll see it again, right? It’s just the nature of working with a distributed ephemeral system, which is what a network is.
43:06
You know, so I think that letting this thing get out of control is the worst thing that could potentially happen. We kind of started to go down that path with the whole scripts thing, when scripts started to get way out of control and we had to bring it all back to sanity. I think, you know, we have an opportunity not to let it happen in this case. And it's going to be very imperative that that is how we proceed going forward.
43:26
What? I can’t keep TCLing myself into oblivion. I mean, come on.
43:30
No, absolutely we can. Absolutely you can.
43:34
All right.
43:35
So I’m waiting for the first day that an agent connects to a device and says, I’m tired of managing my SSH keys, so I’m just going to share it on telnet.
43:42
There you go. I mean, hey, he did the needful. You wanted a machine that makes forks. He destroyed the world, but he has a great fork making apparatus. I mean, it’s all forks now. Congratulations.
43:54
It’s all forks.
43:55
But we’re seeing a lot, at least, you know, some of the smartest people that I follow and some of the smartest people around are ditching the LLM hype train and moving research seems to be moving to other directions. It looks like we’re not, I mean, smart money looks to be on us not getting quote unquote AGI out of this LLM adventure. We’re going to get a magic token fixing machine, which is great. I like it for email. I really like email summaries that sound bland. I love it. So, you know, now we’re going to, and Urs, are you going to be out of a job soon, or like, you know, what are we doing?
44:30
Like, as an industry, as an industry, I believe we will not be out of the job. Because, yeah, as you said, in the worst case, someone has to be able to fix it by hand, right? Because the LLM will not have access, the agent will not have access. And I’m not worried about my generation. But I’m, yeah, I cannot say what I believe the next generation will face.
44:58
Haven’t we been complaining about the youth since the time of Socrates? Like, oh, the radio is going to ruin these books. These are going to ruin these books. They’re not out in the field plowing and then they’re listening to the radio and then they start listening to the TV. We might be suffering from grumpy old man sympathom. I mean, there is a small chance that we might be, you know.
45:18
Oh, I know, quite. I mean, I am a senior curmudgeon. I totally get that. I tell that to people all the time. But, you know, I think the, but that is the, that's the reality, right? That's, that's the point of it, right? Is that, you know, we are going to need people who really still understand this technology for a very, very, very long time, if not forever.
45:39
You know, we’re not going to let LLMs build and red networks. That is for sure.
45:44
I think one thing we are doing is we are always fixing stuff with more complexity, right? So more encapsulation, more overlays.
45:54
Yeah. If you need something inside of EVPN, that’s what you’re saying. You want VXLAN inside of your VXLAN? We can do that. We can definitely do that.
46:02
I want to have more MTU mismatches.
46:05
There we go. I love IPsec.
46:10
So, you know, I think I, well, but I think, you know, at the end of the day, right, it really, and this is, I even used to say this in my days when I was at Ansible, and I say it again now around, you know, am I excited about AI technology? Obviously, you know, it’s fun, it’s new, it’s exciting. I, you know, can do some very unique things with it. But at the end of the day, it’s still just a tool. And that’s the way we’ve got to look at it, right? It’s not some magical mystical beast that’s going to come and change our world to a point where, you know, everything we know is now null and void. That’s not living in reality.
46:41
It’s a tool. And just like every tool that we have as engineers, we have to use it in a way that makes sense for our environment. Right. And I think that that’s where the hype gets way overplayed, right? Because you listen to the hype out there. And it’s like, you know, we’ve said it a few times even on this call, right? That the promise of what AI is going to do for us.
47:00
Oh, it’s going to, you know, it’s going to revolutionize how we build, run, and manage networks. No, it’s not. It’s another tool in the toolbag. And that’s how we have to look at it. That’s how we have to think about it.
47:10
Yeah, I think I’m all for being more efficient, doing less boilerplates, being smarter with doing stuff. I totally see the case with operations, with using knowledge graphs and MCPs with tools for working. But I come back to your first point that there are stuff we probably should not use AI for. And for me, I probably get some hate for this, but network automation, generating config, managing a box is not a hard problem. Making a backpack is not a hard problem. And I don’t think that just because we are too lazy to standardize our network and having a nice data model, we should use AI for as a shortcut to, in the end, will bite us in the ass.
48:05
Yeah, I mean, you’re absolutely right. That just kind of amplifies my point, right? It’s use the right tool for the right job. And the moment you stop doing that, that’s when you get yourself into trouble.
48:17
So, the question on everybody’s mind is: of course, you know, when you say right tool for the right job, I mean, you know, I’ve seen some horror shows in the past two or three years where somebody does an architecture word document, right? And he just blows up the character counts. And I know I’m reading LLM nonsense, and we could have just had a 20-minute conversation about the merits of marrying VXLAN and how we’re going to peer the fabric to blah, blah, blah. But instead, I’m presented with a wall of text, which I’m going to dump in an LLM and get a summary from because I’m not going to read that thing. And we end up basically blowing small idea, big text, big text, back to small idea. And then we lose some information in the translation. You know, we see this a lot in documentation as well.
49:04
I mean, you know, there’s a lot of, as soon as I start seeing a lot of emojis in the README, I go, what the F is this? You know, the worst offender that I’ve seen in the past year, and I actually cyber-bullied this person to oblivion, somebody basically took the Terraform office guide and claimed to have written a Terraform book at an Ansible book in the span of three months. And I was like, hmm, wow, what a prolific person. And then I started to read it and it was like, wow, this is the most bland take of the exact same thing that’s on. Like, he must have dumped just every single page of the manual or the documentation page and then just, you know, made a very terrible example. And that was the book. And it’s like, dude, I’m not angry at that person for doing that.
49:48
Okay, that’s that’s that’s it’s fine. You do you, but In a world where we are competing for finding good resources and making sure this, that, especially authors get paid and that sort of thing, right? The people who are actually writing quality books, putting in the biggest, most expensive business card they will ever make is the book, and they put it all in. And then you have this slop showing up. It kind of makes me go like, oh, dude, you’re filling the pipes. You’re filling the pipes with nonsense.
50:16
Why you do this, bro?
50:19
Indeed. Indeed. I mean, it is unfortunate, but it is the reality, right? And I think what makes it even more challenging is sometimes it’s not that obvious, right? Sometimes it’s not quite so obvious that AI is being used in that form or fashion. You know, I’m waiting for the day. I’ve yet to, I’ve not seen it yet, but I know it’s coming.
50:44
I’m waiting for the day where I go to someone’s blog site and they’re talking about some new way of designing a EVPN fabric, and you start reading through and you realize, wait a minute, this has never been tested. This has never been thought about. This is just some LLM spewing out some router configs and switch configs and saying, here, here you go. It’s coming. It will be if it’s not out there already.
51:04
I’ll send you that link. I’ll send you that blog post. You just got to be low enough on the Reddit, man.
51:12
I’m sure they are already out, right? And one big issue we have with AI or with LLMs is the copyright. Because copyright is defined for each country, right? And internet is.
51:28
Facebook doesn’t just pay the fine and then you’re okay. She just has to be able to pay the fine.
51:33
Yeah. So copyright and AI is a really difficult topic because a machine in most countries cannot own copyright. But I, as the tool user, I just provide a simple prompt. So the hard work was done by someone else. And in the LLM case, probably thousands of people. So who has the copyright now?
52:03
Oh, yeah. No doubt. No doubt.
52:07
The Itential lawyers are telling Peter to shut up. I can hear it in his ear. Don’t comment. Don’t give them a quote. Don’t let them get a quote on you. HR, legal, I mean, PR is on the phone right now.
52:24
Nowadays, you’re in a good spot. You can just say, I never said that. Someone generated that with AI.
52:30
Ladies and gentlemen, Peter didn’t actually show up. Why would he come to our show? Now, this has all been AI, Peter, this entire time.
52:35
I was going to say, you know, this is all AI, right? I’m not real. Exactly. Exactly. It’s all AI.
52:41
I mean, you know, I’ve been making my own podcasts for a while now that are shared internally, and they are very nice and very lovely. I’m sure they’re fantastic. Oh, they are fantastic. And let’s not go down that hole. So, Peter, what’s Itential up to? You got anything big to show us at AutoCon 4? Like, besides, you know, the MCP stuff, right?
53:09
And what else you got?
53:11
Well, yeah, it’ll definitely be an evolution of what we’re already doing with MCP, continuing down the path of: how do we bring AI technology in a sane way, in a consumable way that adds value but doesn’t undermine the platform? We’ve been talking this whole session about having a reasoned side and a deterministic side, and we’ll continue to do that as we go forward, because I’m a big believer that you’ve got to keep that deterministic component in play, and that thing cannot be AI-enabled. What scares the crap out of me is the number of vendors that are shoving AI technology into the core of their product. It’s like, good luck with that. So we’ll continue to advance our deterministic side as a platform you can keep building static workflows with, and we’ll continue to make a number of innovations on the AI side. A lot of that will start to show up at AutoCon and as we roll into next year, so I’m super excited about what’s coming there.
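To make Peter’s “reasoned side plus deterministic side” idea concrete, here’s a minimal sketch of that split in plain Python. This is not Itential’s architecture or API; the allow-list, the intent schema, and the workflow hand-off are all hypothetical stand-ins. The only point it illustrates is that the model proposes a structured intent, and a deterministic layer with a fixed policy decides whether anything actually runs.

```python
# Sketch only: the reasoning side proposes, a deterministic layer decides and executes.
# All names here are illustrative stand-ins, not any vendor's real API.
import json
from dataclasses import dataclass

# Fixed, human-approved allow-list: the part that must stay deterministic.
ALLOWED_ACTIONS = {"run_compliance_report", "collect_drift_snapshot"}

@dataclass
class Intent:
    action: str
    device_group: str

def parse_intent(llm_output: str) -> Intent:
    """Deterministically parse the model's JSON proposal; malformed output raises."""
    data = json.loads(llm_output)
    return Intent(action=data["action"], device_group=data["device_group"])

def authorize(intent: Intent) -> bool:
    """Policy lives outside the model: only pre-approved, read-only actions pass."""
    return intent.action in ALLOWED_ACTIONS

def execute(intent: Intent) -> str:
    """Hand off to a static, tested workflow: the deterministic core Peter describes."""
    # In a real platform this would trigger a versioned workflow, not free-form commands.
    return f"started workflow '{intent.action}' for group '{intent.device_group}'"

def handle(llm_output: str) -> str:
    intent = parse_intent(llm_output)
    if not authorize(intent):
        return f"refused: '{intent.action}' is not an approved operation"
    return execute(intent)

if __name__ == "__main__":
    # The reasoning side produced these proposals; the deterministic side has the final say.
    print(handle('{"action": "run_compliance_report", "device_group": "edge-routers"}'))
    print(handle('{"action": "push_config_change", "device_group": "core"}'))
```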
54:09
Very nice. So, like, what happened to the 30-day trial thing on Itential? Because you’re back to the demo thing. It’s time to roast this guy. Let’s get the grilling going. Why, why did somebody take away the 30-day easy trial, dude? Why do you do this?
54:25
There is still a 30-day easy trial. There’s just no longer a sign-up form for it.
54:32
Yeah, yeah. So there isn’t one. That’s how that works. You take away the button, it’s not a thing anymore, Peter.
54:40
It’s not a thing anymore. It’s not a thing anymore. So no, it is definitely still out there. You know, we’ve been in the process of retooling our cloud quite a bit over the last year or so, and there’s still a lot more to come around it. You know, we’re kind of approaching the 2.0 of our cloud SaaS offering, and it has to be done in phases, obviously, to make it a reality.
55:06
That’s pretty good, but we’re getting to the hardest part of the show, Peter. We’re getting to the controversial opinion part, and I hope you brought one. Oh, I love that part. Oh, I hope you brought one, because, you know, we’ve gotten some mid ones, like, you know, “I think we should all be friends” type stuff. So if you have any controversial opinions, now is the time to share them, Peter.
55:27
Well, so, so, so, gosh, which one to choose from? So many. Which ones don’t get me in trouble? Or do get me in trouble? Maybe that’s the better way to look at it. No, you know, I think I’ve touched on it a little bit already, and I’ll say it in a very blunt way. You know, everyone’s doing MCP wrong.
55:45
I truly believe that. It’s a matter of fact. Everybody’s doing it wrong. Everyone’s approaching it wrong. Everyone’s thinking about it wrong. You know, we need to reset ourselves. We need to think about it a different way.
55:56
We need to think about it in ways that are going to move the industry forward, not continue down this same path of just throwing a REST API up as MCP and calling it a day. We are, by nature, the laziest industry in IT, I think. Hands down. I don’t know. Maybe the systems guys are a little bit lazier.
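To show the gap between wrapping an API and exposing a tool, here’s a minimal sketch, assuming the FastMCP helper from the official Python MCP SDK. Instead of surfacing every REST route, the server exposes one read-only, intent-level operation; get_running_config() and get_intended_config() are hypothetical placeholders for however you’d fetch device state and your source of truth.

```python
# Sketch of "a tool, not a raw endpoint": one safe, meaningful operation with a
# narrow contract, rather than an auto-generated mirror of a whole REST API.
# Assumes the FastMCP helper from the official Python MCP SDK; the two config
# helpers below are hypothetical placeholders.
import difflib

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("network-drift")

def get_running_config(device: str) -> str:
    # Placeholder: in practice this would come from the device or your controller.
    return "hostname edge-01\ninterface Gi0/1\n description uplink"

def get_intended_config(device: str) -> str:
    # Placeholder: in practice this would come from your source of truth.
    return "hostname edge-01\ninterface Gi0/1\n description UPLINK-to-core"

@mcp.tool()
def check_config_drift(device: str) -> str:
    """Report drift between intended and running config for one device (read-only)."""
    diff = difflib.unified_diff(
        get_intended_config(device).splitlines(),
        get_running_config(device).splitlines(),
        fromfile="intended",
        tofile="running",
        lineterm="",
    )
    report = "\n".join(diff)
    return report or "no drift detected"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```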
56:14
The VM guys are lazy, though. So, you know, just saying.
56:17
Yeah, but we’re right there, right? We don’t think about anything. If I had a nickel for every time I watched a network engineer just get onto a CLI, start punching out commands, and they can’t even tell you what the damn commands are doing. Like, all I know is if I type this command, something happens over here and it’s all working magically. Right. Drives me absolutely nuts.
56:37
Hire that guy.
56:38
Hire that guy. That’s amazing. He gets results, Peter. He’s not there writing a dissertation. He’s out there shoveling. He’s a shovel. You need shovels, Peter.
56:50
We do need shovels. We also need everyone. You know, I’m sure it’s coming. Everyone’s all excited to get in line to beta test the CCIE AI, right? It’s got to be coming.
57:00
It is. I mean, I’ve seen the panel. I’ve seen it do a thing. I’ve seen ups and downs. I’m glad somebody’s working on it. I hope it doesn’t get abandoned like a lot of Cisco products where they just, you know, float along.
57:13
I’m looking at you, poor DevNet, where they sliced the headcount to, I think, less than seven people right now. They took a big knife to DevNet. So I like anything they’re going to do on the learning side; they’ve been a golden bastion of learning for the last 20 years, right? Like, with Cisco, if you have a CC-something cert, it’s good. They still have a very solid core. But every year we’re a little worried that the Excel people and the bean counters aren’t seeing the bigger picture.
57:50
So that was a pretty... but that’s a good and popular opinion. And now we’re in the latest and greatest section of the Network Automagic show, which we’ve not told anyone about. It’s called The Hot Seat. Are you ready for the hot seat, Peter? Let’s do it. Let’s do it. We put the Itential links in the show notes.
58:07
So, you know, now we’re allowed to beat them up a little. So, buddy, Torero.dev. Your internal people, you know, as soon as I get your people drunk, some of them will say, like, oh, God, it’s a thing. Steinn, I’m sorry. That’s basically how they start with it. I’m sorry. I’m sorry.
58:26
That’s a little bit rough. Peter, like, what are you doing, buddy? What are you doing? You’re trying to lead a bunch of kids into the, you know, you’re showing up in a van going, hey, kid, it’s free. Don’t worry. Just put it in your stack and we’ll update it four times a year. Pinky promise.
58:39
You don’t get to see the code. Come on, get in here. It’s a rough deal. It’s beautiful on paper. It’s commercial. It’s free enterprise software that looks open source without being open. Because we can give the credit to the NetBox guys for going open core, right?
58:56
And then they do the sprinkle on top and we go, all right, cool. Cloud has this feature. It’s cool. We’re all cool. We’re all friends. When you do this stuff, it looks like you’re just trying to lure some kids into the back of the van, Peter. So like, what’s going on?
59:11
I’m not saying that we got some people on milk cartons because of you, but what’s going on there, Peter?
59:17
Understood. Understood. So, you know, fully open sourcing the code continues to be something we certainly talk about internally. The reality is this: I’ve been doing this for a very, very long time, and one of the biggest challenges I saw in the industry is that we’ve got a bunch of network engineers falling into this trap of, I’m going to become a software developer. I’m going to become a software engineer.
59:41
And really what I’m saying is, I’m going to write a 13-line script that does something. What they’re not going to do is build a full-stack application to be able to run that script. They’re not going to deal with security and logging and authentication and library management and dependency management, et cetera, et cetera. That was really the premise behind Torero, and it continues to be to this day: how can I make it easier for that individual to take that particular script and actually get functional value from it, in a way that you can attach to your production infrastructure? I totally concede your point. You’re right.
1:00:13
There are challenges with the fact that it is not open source today. And I’m looking at different ways we can get it there, with some adjustments in how the code is ultimately delivered. But yeah, to this day it is still a commercially closed code base. What we didn’t want to do, let me say it this way, let me wrap on this point: we didn’t want to do the HashiCorp dance. We didn’t want to open source it and then turn around and say, now that you’re hooked on it, now that you’ve got the crack, right?
1:00:50
Now we’re going to take it away from you, and now you’re going to have to go into rehab. You know, that’s what we didn’t want to do. Maybe we overcorrected a little too hard that way, but that was the original thought behind it.
1:01:02
Do I have to do all the beating up? You have to beat up on him as well. Because if I just do it, I’m just a meanie, you know? So, you know, rough him up about how he’s kind of a distraction in the space and how you could do it with XYZ tools. Make those points, beat him up.
1:01:15
No, I’m just a nice guy, so I cannot beat him up.
1:01:19
God damn it.
1:01:22
You are the man for the show. I will not say that.
1:01:25
No, no, but Peter, like, you know, it’s almost a distraction. The value from this thing is almost at distraction level. Like, you could just point the kids to something like Streamlit, a side project that gets paid for by the LLM data cloud people, right? Anything spun off like that, the open source hippie kids can get a lot of value and money out of, because when those businesses fail, the code is still open source and we’ll be able to do something with it. And of course, with the HashiCorp dance, we got the OpenTofu reaction, right? Which is a valid thing. You can always just fork it and then try your best to murder the company that buys it.
1:02:06
Nobody can kill IBM, though. IBM is gonna be around.
1:02:10
We’ve been trying for years. We’ve been trying for many years to kill IBM.
1:02:14
Nobody got fired for buying IBM. I love IBM, by the way. I love you guys. Love you guys. I want your money. Please. I’m just joking.
1:02:20
You guys are too big, though. You’re almost too big. I’m sorry. We’ll never get anyone from HashiCorp on the show ever. But, you know, the final bit of the ribbing is that it’s almost a distraction. I mean, the thing that you’re doing there for the person, it’s preying.
1:02:40
If you want to be really mean to you guys, and again, listener, listen to what Peter said, it’s a perfectly reasonable thing for a company to have that view. What I’m going to say, representing the Richard Stallman, eats-cheese-from-their-toes type of people like myself sometimes, is that it’s almost a distraction, and you’re almost preying on the ignorance of the engineer: being big at the conference, having the budget to print the t-shirts, and then what you’re delivering is maybe, what, 20,000 lines of code or something that does XYZ from a CLI. There’s the alternative: you could go the Torero route because you saw it at the conference and it looks good, they have the lights, or you could go the other way and get the same value with XYZ tooling, just not marketed as network engineering. And that’s kind of preying on the ignorance of the user, which the Richard Stallman types have a problem with, but the enterprise crowd has no problem with and just goes, yeah, that’s the cost of doing business.
1:03:40
Well, and now you understand, now you understand very much the psychosis I go through in my own head because I have the same problem. I have the same battle. I have the same battle. There is no question about it. Kids got to eat. The dog has to have his toys. I mean, God forbid my dog doesn’t get his toys.
1:04:00
It’s a problem. It’s a real problem. But it is a real thing. When you say it’s a distraction, though, I think that underscores part of the problem we have in the industry, to a degree, right? Because people do things without thinking. No one says go do Torero. No one says you have to use Torero.
1:04:20
It’s there if you want it. If you don’t use it, that’s fine. I’m not saying you have to do it. It’s a concept. It is a…
1:04:31
But that’s also why it’s so separate from everything else Itential does. It was for that very reason, so that it was well understood that it is a separate entity in and of itself. So when are you gonna open source some real good shit? I mean, give us that JSON comparison shit you got there, come on. I want those servers open, I want it all open source, wow, and your REST API docs open, blah blah blah. Give me something. Give me something nice, Peter. I know you’ve got the money. I know your revenue, all right? Your people leak information like a sieve as soon as they’ve had 10 beers, okay, or some pills that I put in their drinks, okay? Like, give me something nice. They definitely have the money, because at the booth they all have the same shoes. I know what you’re paying for those booths, dude. I know what you’re paying for that booth and the shoes, okay? Like, honestly, Freemasons don’t wear matching shoes, okay? Itential wears matching shoes. That’s cult stuff, Peter. You guys are a cult.
1:05:29
Yes, we are. Yes, we are.
1:05:31
And then you have your top-level executives wearing Italian loafers, and the rest of your team is wearing, God knows, all-matching Nikes. It’s a weird setup.
1:05:43
At least you couldn’t see what was going on underneath all of that.
1:05:46
Well, I would like to, and we would also like to know: where do we get our Itential shoes? That’s what we really want to know. That’s what I was after. They look comfy. They look comfy. Let’s just try and get a pair. Spoiler alert.
1:05:59
Spoiler alert. They’re not.
1:06:00
Oh, dear. That makes us want them even more as a collector’s item. Now, a direct quote from one of your employees: Peter, in his heart, he’s OSS slash year-of-the-Linux-desktop. Do you think? Now, that could have been your controversial opinion: year of the Linux desktop. You would have gotten so…
1:06:20
Oh, you mean that’s not just common fact? That’s not an opinion. That’s just common fact.
1:06:23
Right. Right.
1:06:25
I mean.
1:06:26
Right. That’ll be the. Okay. I’m just going to close my MacBook and let’s not do that. What are you running? Oh, that could be a segment. Like, what are you running?
1:06:36
Peter, what are you running? Are you inside of a big org? So you got to be running Microsoft. I mean, just for IT or.
1:06:42
Uh, we do. Yes, the org is Microsoft, of course, but I run no Microsoft apps. I run Fedora. I’ve been running Fedora since Fedora Core.
1:06:50
Holy shit, we got a live one. Urs, we got one of the freaks. Oh, you got a freak on the show. That’s the headline.
1:06:57
Oh, it’s a great world. I actually run the Silverblue spin, which means my operating system is immutable. Once you go immutable on your operating system, you never go back.
1:07:05
Bro, you need to calm down, you hippie. Jesus Christ, dude. That’s what you’re doing. He’s eating his toe fungus. He’s Richard Stallman. I knew it. I freaking knew it.
1:07:16
I was always waiting for, uh, NixOS. So, how is it?
1:07:22
Yeah.
1:07:23
I’m getting it now. Yeah, no, it’s coming in live on YouTube: Richard Stallman is a problematic character, and Steinsi did not know that when he was making those comments, but he has an AI thing that’s saying it. Yeah, yeah, yeah. The AI transcriber just said that. So what I’m going to do now is cut out and then we can cut back in.
1:07:41
So I’m going to have to edit. Dear Steinn-in-editing, I’m so sorry for that. We’re going to have to cut that out because he has some rape allegations against him. This is not good. This is not good. I did not know that.
1:07:52
All right. We’re going to go with the Linus Torvalds treadmill. All right. Put that back in. So he’s a Linus Torvalds treadmiller.
1:08:03
That is sadly true. It is. I’ll take that one. I’ll accept that one. I will own that one. It is absolutely a true statement. I have lived in the Linux world since FC4.
1:08:13
I don’t know how to use Windows. I do not know how to use Mac. I only know how to use Fedora. And I run an immutable operating system. And man, I’m telling you, once you go immutable, you don’t go back. You just don’t.
1:08:25
Peter, I assume you have kids, right? And dog. I do not. Nothing like that. No, we have a dog. All right. Now, imagine that you have like five kids right now, or you’re Steinsi.
1:08:37
You have two kids. What operating system should you be teaching them? Like, I am conflicted right now. I’m trying to pick an OS for them to have on their first computer. Oh, the watch thinks I fell over, I’m throwing that many hand movements. If you had a niece or nephew, or if you were me, do I just put him on the Linux thing? And when he comes to school, he’s going to get beat up because he doesn’t know how to press the little Windows bar button, and he’ll probably give up on computers.
1:09:04
Or do I go with the flow? That’s what the listener really wants to know.
1:09:08
Split the difference and go Chrome OS.
1:09:10
Oh, Jesus Christ. Ladies and gentlemen.
1:09:18
We have derailed.
1:09:19
The call has come. We are completely derailed. Well, Peter, thank you so much for coming on. Did we have any other axes we wanted to grind for the listeners before HR takes you away in handcuffs?
1:09:32
I think we’ve got them all. I think we’ve got them all.
1:09:34
We got them all. Urs, anything you want to leave the listener with?
1:09:38
No, I think we covered a lot. It was interesting.
1:09:42
Thank you. Dear listeners, we appear to have a thousand recurring listeners. I don’t know who you are or what’s wrong with you that you feel a need to listen to this show, but thank you for joining us. We’re going to pack this thing up. Again, Peter, thanks for coming on the show. Anything you want to plug before we leave?
1:10:04
I mean, no, just, you know, thanks for having me first and foremost. It’s been an absolute blast being here. I love having these conversations. You know, if you’re going to make it out to AutoCON, come see me. Come say hi. Introduce yourself if I haven’t met you already. Honestly, I don’t bite.
1:10:18
Well, I don’t bite at first. No, it’s all good. Other than that, feel free to reach out to me anytime. You’ll find me on the interwebs as PrivateIP on X and Mastodon. And of course, Sprygada on LinkedIn.
1:10:33
Perfect. We’ll have all those in the show notes, dear listener. So that concludes this episode. I’m going to have to edit some stuff out of it because I said some no-no words. But other than that, thank you for listening and we’ll see you again soon.