30% of engineers are already building with AI. The other 70% aren’t competing with peers anymore – they’re competing with automation that never sleeps.
If you’ve been asking “okay, but how do I actually start?” – this is the answer. John Capobianco presents live at the inaugural TORNOG 1 event and walks through his full journey: the first clumsy scripts, the early GPT experiments, the agent frameworks that worked (and the ones that didn’t), the security scares, the real production use cases, the honest failures, and the breakthroughs that changed everything.
No theory. No hype. Just a practical adoption path – local-first, CLI-first, and something you can start tonight.
What You’ll Learn:
- A clear mental model of where AI fits in network engineering – from CLI workflows you already know, to local private models, to cloud-scale agents, to MCP-connected orchestration.
- A realistic adoption path that starts with what’s on your laptop tonight – Ollama, LM Studio, Foundry Local. Privacy by default. No approval tickets required to get moving.
- A personal AI roadmap you can start this week – including John’s specific challenge: add the GitHub Spec Kit to Claude Code and run a spec-driven development project against a real network use case.
- A grounded view of agents vs. MCP vs. RAG vs. spec-driven development – what each one actually is, when it was born, and where it fits in a production stack.
- An honest take on NetClaw, OpenClaw, and the open ecosystem – what’s ready, what’s lab-only, and why the gap between hyperscaler models and local open models is narrowing.
Video Notes
(So you can skip ahead, if you want.)
00:00 Introduction
01:42 The train is still at the station
03:32 Start at the CLI – build on existing muscle memory
04:20 Local first: Ollama, LM Studio, Foundry Local
07:41 The pyATS + GPT moment that changed everything
11:39 Agents as digital co-workers
12:48 Paid cloud models – when $20/month makes sense
15:42 Autonomous browsing and self-healing code
17:25 Retrieval Augmented Generation – why RAG isn’t dead
20:24 Model Context Protocol, demystified
23:10 Composable agents with MCP servers
25:06 NetClaw, OpenClaw, and the open community
27:38 Spec-Driven Development – the next evolution
30:37 Q&A and joining the VibeOps Forum
Announcer • 00:08
Okay, our next speaker is a little bit of a celebrity out there in the community, LinkedIn and AutoCon and NANOG. He’s gracing us, hailing from Ottawa. The talk is called From CLI to GPT, a Guide for the AIOps Journey. John Capobianco is well known in our community, and he’s the head of AI and developer relations at Itential. So give him a warm applause and let’s see what he’s got for us today.
John Capobianco • 00:41
All right, we’ll try this. Hi, everyone. Thanks for having me. This is not some new AI language. I don’t know what’s happened here with the font. Don’t worry, that’s not going to be part of the quiz. So, who feels like this?
John Capobianco • 00:54
Every day they log in, and there’s some new AI thing, some new language, some RAG, some new terminology, some new acronym. That’s what today’s going to be about. There’s not going to be any code, there’s not going to be any demos, it’s going to be a very informal conversation. A little bit about myself, and hopefully, to give you an understanding of where I’m coming from: I’ve encountered these things linearly, where some of you are maybe taking them all on together, because your interest has shifted to AI, or because your leadership has told you to start using AI, or you just generally see it and want to use it and incorporate it. I happened to have the advantage that I started very early. So I got my ChatGPT account as one of the first million users in November of 2022.
John Capobianco • 01:42
That’ll be four years at the end of this year. So some of you have weeks, days, a few months. The other thing I want to mention is that the train is still relatively close to the station. I think it’s started to leave the station and it’s starting to gather some momentum, but I think you as humans can still catch up to that train and get a spot and a seat on the train. If we go in order, you know, ChatGPT is less than four years old. Retrieval augmented generation, I’ll talk about that, a technique that can augment the generation of your outcome and reduce hallucinations, is less than three years old.
John Capobianco • 02:20
Model context protocol is about 18 months old. A new technique I’m going to talk about today, spec-driven development, is only six or seven months old. So there’s probably less material about all of this than about a protocol like BGP or OSPF. And I hate to crash this amazing networking party with my AI slop, but here I am. So let’s have a little bit of fun with this. Maybe it feels like too many tools, too many commands, too much context to hold all at once before this workflow clicks. I’ve seen it, I’ve experienced it with people that have gone from maybe being a skeptic of AI or not believing in it at all.
John Capobianco • 02:58
Some people have an outright negative outlook on AI, and then suddenly the bit flips in their head, right? And they do something useful with it. Or they see a video and it finally resonates with them. Or they apply it a little bit more than just why-is-the-sky-blue type questions in a GUI with ChatGPT or something, right? Once you get below that surface level, sometimes the bit flips and people become enthusiastic about it. They want to learn more. They want to build digital coworkers.
John Capobianco • 03:34
I recommend we all start at the CLI. Most of us in this room, I assume, are network-related since it’s a networking event, and we’ve all had time on a command line interface. I would start with the CLI and build on your existing muscle memory. Start with familiar CLI workflows to minimize the friction and be productive. Integrate your AI incrementally. Add the tools step by step to empower your developers without the steep learning curve. Now, when I say a CLI: anyone using Claude Code?
John Capobianco • 04:06
Okay, so about 30% of the room. What about Gemini CLI? A few other people? Anyone using Codex? So it looks like everyone in the room is using one of those three, right? I would start there. The other thing, and this is meant to represent local: I would start local first, with a private offline framework.
John Capobianco • 04:34
So who here has access to an approved sanctioned AI through their employer? How many of you are using it daily? Wonderful. Good news. That’s awesome. Is anyone in an industry or working with companies that just cannot use the cloud or it has to be air-gapped and that’s what’s holding them back? Some of us?
John Capobianco • 05:00
So, same at home. There’s cost, there’s privacy, there’s how do we get started. I’m going to recommend three different frameworks. And this is privacy by default. I know a lot of us are network engineers, and we just saw an incredible presentation about security. The other thing is, a lot of us like to be builders and prototype things. And it’s tough to spend money on something that’s still in development, right?
John Capobianco • 05:27
Well, there’s an opportunity to use offline, free, local models that are about a seven or eight out of ten on quality. And then when you’re ready to do your alpha build, move to the paid model. And we’ll talk about paid models in a second. There’s more than just offline models, there’s offline vector stores. So ChromaDB, if you want to do RAG, and we’ll talk about RAG in a bit, is offline and private. So the idea is, by the way, if you have a laptop and you haven’t used any of these tools: before I’m done speaking today, you could have installed Ollama, pulled a model down locally, and started asking it questions in the next 20 minutes. It’s that fast, it’s that easy, it’s that frictionless.
John Capobianco • 06:13
You don’t need a big GPU. If you’ve got a Mac, you can do it. Another alternative to Ollama is LM Studio, which is probably a little less friction if you’re not a CLI junkie. If you want a GUI, if you want a point-and-click experience, you can use LM Studio. Microsoft also has Foundry Local. So if you’re looking to move towards this idea, or if you’re in one of those industries that has to use offline models, maybe bring this forward to your company. A distributed framework, so you don’t need centralized GPUs in some data center.
John Capobianco • 06:48
And you enable your developers to actually augment themselves with AI, but it’s private and it’s offline and it’s local. I believe, and I’m not up to date on my Microsoft GPOs and such, that this will likely integrate with Active Directory and things of that nature, so you can control and deploy this out through your group policies. Now, these three all come with a REST API. So yes, there’s a CLI and you can ask, you know, help me understand the OSI model or whatever, but they all have a REST API, meaning you can start writing Python code or other code and point it at your REST API. Now, that’s what onboarded me and made me such an enthusiast about AI. To the point where I made a conscious decision, actually sitting down with my family to discuss it, that I’d like to pivot in my career. All right, so in December I got an API key for ChatGPT and I wired it up with Cisco pyATS.
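To make that REST API point concrete, here is a minimal sketch in Python against Ollama’s local endpoint, assuming Ollama is running on its default port and a model has already been pulled (the model name below is just a placeholder):

```python
import requests

# Minimal sketch: query a local Ollama model over its REST API.
# Assumes Ollama is serving on its default port and a model has been
# pulled beforehand, e.g. `ollama pull llama3` (name is a placeholder).
payload = {
    "model": "llama3",
    "prompt": "Help me understand the OSI model.",
    "stream": False,  # return one JSON object instead of a token stream
}

resp = requests.post("http://localhost:11434/api/generate", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["response"])  # the completion, generated fully locally
```

LM Studio and Foundry Local expose similar local endpoints, so the same pattern applies with a different base URL.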
John Capobianco • 07:48
Does anyone, is everyone familiar with Cisco pyATS? Not everyone? Okay. I’m so let down, because it’s such an amazing network automation framework, and Cisco themselves do a poor job of advertising it. The good news is pyATS has made its way onto some of the new CCNP and CCIE automation exams. So, if you’re looking at your automation exams, you should start learning pyATS. But it’s a network automation framework that is Python.
John Capobianco • 08:17
And it’s not quite open source; 99% of it’s open source, and Cisco keeps a little bit of it proprietary, a little bit of the secret sauce. But it’s a network automation framework that lets you run show commands and get parsed structured data back. I would suggest that’s the key to pyATS. Show ip interface brief, and instead of the tabular, semi-structured, I don’t know what you would call the CLI output, standard output, right? It’s JSON, with keys and values, key-value pairs that you can test. I’m the co-author of the pyATS book. If anyone needs to know anything more about pyATS, just reach out to me offline.
John Capobianco • 09:00
But let’s say show interfaces. Does anyone in the room know if every interface on your network is healthy? Is anyone confident to say that they know whether every interface is healthy or not? Well, I see a couple people pretty confident in the back, the Arista folks, not surprisingly. With pyATS, you can do that in a job. And it’s about 19 to 22 tests depending on the platform. Input drops, output drops, CRC errors, the management description on the interface, whatever.
John Capobianco • 09:29
It’s about 19 tests. But now think of how many different interface types there are: virtual interfaces, physical interfaces, serial interfaces, point-to-point interfaces, Arista interfaces, Nokia interfaces, Juniper interfaces. This list of tests becomes bespoke and brittle and custom. And the first thing I did with AI that was meaningful to me was to take a step back and say, what if I just went interface by interface in a loop and asked the AI, in my prompt, is this interface healthy? And then attached the blob of JSON from the show interfaces output. And because it’s structured JSON, the AI doesn’t even care that it’s network data; it doesn’t have to.
John Capobianco • 10:13
It’s structured, it’s keys and values, right? And it was like, No, this interface is not healthy. You have a high number of output drops on this port. You should do the following things to investigate. And I felt like Gandalf when he touches the one ring in The Lord of the Rings. And then he’s sitting by the fire just contemplating everything, right?
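A minimal sketch of that loop, assuming `parsed` is the structured dictionary a pyATS parse of show interfaces returns, and reusing the local Ollama endpoint from earlier (the model name and prompt wording are illustrative):

```python
import json
import requests

# Sketch of the interface-by-interface health check described above.
# Assumes `parsed` maps interface names to their structured counters/state,
# e.g. parsed = device.parse("show interfaces") from a connected pyATS device.
def ask_model(prompt: str) -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

def check_interfaces(parsed: dict) -> None:
    for name, data in parsed.items():
        # Attach the JSON blob to the prompt; the model reads keys and values.
        prompt = (
            "Is this interface healthy? If not, explain why and suggest "
            "follow-up show commands.\n" + json.dumps({name: data}, indent=2)
        )
        print(f"--- {name} ---")
        print(ask_model(prompt))
```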
John Capobianco • 10:34
It had a profound impact on me because now I don’t have to maintain or write those 30 tests. I can run it universally against any platform. It was right. It was valid. And it wasn’t like a pass-fail test. It had insight. Here’s some reasons why this might be failing.
John Capobianco • 10:51
Here’s some show commands you could run to follow up. So then the code gets better. Okay, well, I’ll follow up: not only tell me it’s not healthy, but if it’s not healthy, run some suggested show commands that might help me understand why it’s not healthy. And the AI actually starts to become a digital colleague, a digital co-worker. It really is an abstraction layer, and I think it’s going to sit between us and our infrastructure pretty soon. I think that we are going to interface with agents that we build, and those agents that we build will be the interface into the technology.
John Capobianco • 11:26
Regardless of whether it’s network or infrastructure; it could be REST API, it could be MCP, it could be CLI. The AI doesn’t care. As long as we give it the credentials and the access to those management planes, we are going to be interfacing with these agents. Now, think about it: who here would love to hire five juniors to help them every day? Would that not be the dream? Right? Now, how do we do that in this labor shortage?
John Capobianco • 11:53
Get good staff, ramp them up quickly, know that they know their certs, right? Like, it’s hard to get good networking people. So, if we can build those agents using our domain-specific knowledge, now we have little agents that report to us. And you ask who’s responsible? We are responsible. Not the LLM provider, not the model itself, not the agent if things go wrong. The human has to be the shepherd of this digital employee.
John Capobianco • 12:24
This is an HR problem. It’s not a technology problem. The technology is solved. It’s more of an org chart problem. And how do I build these certain agents? Can I have a security agent, a compliance agent, config management agent? Can they all orchestrate and collaborate together?
John Capobianco • 12:40
Right now, I work for a company that has about 250 people. Now, that’s 2,500 employees if we all have 10 agents doing the work, doing hard things. So, in terms of public models, does anyone pay for a provider? Is anyone paying 20 bucks a month for one? Okay, okay. Fewer than I thought. So for $20 a month you get access to a digital expert in all fields, right, through either ChatGPT with the OpenAI models, Anthropic with the Claude models, Google with the Gemini models.
John Capobianco • 13:18
I’ll even mention X with the Grok models. $20 a month. I think I’ve spent more at Starbucks this afternoon on lattes, right? And I’m getting thousands, if not tens of thousands, of dollars of return on that $20 a month. It’s right here in my pocket. Anything I need to know, anytime, I just start chatting with AI, right? Now that’s going to extend directly into my network.
John Capobianco • 13:43
I’ll talk about NetClaw and the VibeOps forum in a little bit, but right in Slack, right through Telegram, right through WhatsApp, we can talk to OpenClaw. Does anyone have an OpenClaw? Does anyone have a Mac mini running OpenClaw? Cool. How fun is that, right? Now imagine extending that to your network. Right?
John Capobianco • 14:02
Hey, are all my routes good? How’s traffic between Atlanta and New York? On your phone, in Telegram? Deployments, visualizations, source of truth. It’s up to 150 skills with about 60 MCP servers. So that’s a small investment. And between how good the hyperscalers are and how good the open community is, there’s a really big gap there. And I would argue it’s worth the money.
John Capobianco • 14:30
But, you know, we’ve heard shadow AI come up a little bit. Your best position is to use your company-sponsored, approved AI. So there’s some of the clouds and the cost and stuff. Convenient access, scale on demand, faster iteration. They’re also multimodal. The multimodal capabilities of the cloud models are much further along, meaning video, voice, image, audio. I think the keyboard and mouse days are likely numbered.
John Capobianco • 15:03
Maybe I’m wrong about that. But the Star Trek idea of just pressing a button and talking to your computer and getting an answer in human language is not that far off. Google has, no, excuse me, NVIDIA has released an open-source model that can detect while you’re talking. So it’s no longer walkie-talkie, kind of push, talk, get an answer, wait, talk. You can actually both be talking in the same stream, and the model has gotten so good that it can interpret that you’re speaking while it’s answering. That’s cool, right? I hear the room go silent, like wow.
John Capobianco • 15:39
Some other things that have really excited me: autonomous browsing has come out. Has anyone plugged in computer use or something similar to any of their agents or their Claude Code? So it can actually use your computer. So I built, let me back up a step. I’m going to get to model context protocol in a second, but I built this cool 3D maps drone game because Google came out with the Google Maps MCP and I wanted to kick the tires. What I found that I could do was have Claude Code write browser-based code, and then it would launch the browser and test it itself and then fix itself, because it has access to F12, the console, in your browser. So you’re literally totally hands-off, and it’s self-healing, self-iterating.
John Capobianco • 16:33
Like one pass, the drone was jittering, and it noticed that the drone had jitter, so it adjusted some JavaScript, relaunched it, and now the drone is smooth. Really neat, really neat stuff. So beyond the CLI, is anyone using Copilot in VS Code? Not enough. Okay, anyone not using VS Code in their life at all? Okay, so if you’re using VS Code, it’s fun, because Copilot is a sidebar on the right that takes in natural language and can be set to a certain model. But now with the terminal, the CLI, I can have Claude Code in my terminal at the bottom of VS Code and GPT-5.2 in the sidebar and actually have two AIs helping me, two agents working on the same problem.
John Capobianco • 17:22
Different models. It’s like a mixture of experts, right? So let’s talk about some of the other tools. So retrieval augmented generation. To me, we don’t hear a lot about RAG anymore. And I don’t think it’s dead, though. And I have some evidence that it’s not dead. But when context windows were small, 4,096 tokens,
John Capobianco • 17:45
And when LLMs like ChatGPT 3.5 were hallucinating a lot, this idea of augmenting the generation of the output by retrieving data from an external source came in, known as RAG. So, RAG’s got this vector store, and inside the vector store you store embeddings. Now, embeddings are just (I say “just,” and then I’m going to lay out some heavy math) vectors of 16-bit floating-point numbers, and matching is sort of based on the distance between them in these dimensions. So imagine a globe where you could say, show me an icon and draw a line between everyone I went to high school with and me, in the MaRS Centre right now. And maybe there’s some people in Toronto and the line’s short, some people are in Berlin, some people are in Australia. Sort of that idea: the closest matching vectors.
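The “closest matching vectors” idea boils down to a distance calculation. A toy sketch with made-up three-dimensional vectors (real embeddings have hundreds or thousands of dimensions):

```python
import numpy as np

# Toy illustration of vector similarity: the query scores highest against
# the document whose embedding points in a similar direction. Values are made up.
def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query    = np.array([0.9, 0.1, 0.3])  # embedding of the incoming prompt
doc_near = np.array([0.8, 0.2, 0.4])  # semantically similar document
doc_far  = np.array([0.1, 0.9, 0.8])  # unrelated document

print(cosine_similarity(query, doc_near))  # high score: retrieved
print(cosine_similarity(query, doc_far))   # low score: skipped
```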
John Capobianco • 18:42
So when the prompt comes in, the AI looks up externally, retrieves (retrieval augmented) your source of truth, your PDFs, your Word documents, your JSON. Now, this works really well with something like pyATS and network automation, because JSON can be turned into embeddings and stored in the vector store. So I could run pyATS, populate the vector store, and then ask things like, what’s my default route? Without the vector store, “what’s my default route?” might get something like: here’s how to find a default route on a Cisco device, or some other hallucination. But if I give it access to the routing table data in the vector store, it can actually answer, deterministically, let’s say. Retrieve from real sources, add evidence to the prompt, produce grounded output.
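A minimal local RAG sketch with ChromaDB, assuming the routing-table JSON has already been gathered (for example by a pyATS parse job); the route entries below are placeholders:

```python
import chromadb

# Offline, in-memory vector store: documents are embedded automatically
# by ChromaDB's default embedding function on add().
client = chromadb.Client()
routes = client.create_collection(name="routing_table")

routes.add(
    ids=["rt-0", "rt-1"],
    documents=[
        '{"prefix": "0.0.0.0/0", "next_hop": "10.0.0.1", "protocol": "static"}',
        '{"prefix": "10.20.0.0/16", "next_hop": "10.0.0.2", "protocol": "ospf"}',
    ],
)

# Retrieve the closest-matching evidence, then hand it to the model
# alongside the question so the answer is grounded in real state.
hits = routes.query(query_texts=["What is my default route?"], n_results=1)
evidence = hits["documents"][0][0]
prompt = f"Using only this routing data, answer: what is my default route?\n{evidence}"
print(prompt)  # feed this grounded prompt to a local or cloud model
```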
John Capobianco • 19:34
Now, why I don’t think this is dead is because of some investments being made by Google in this space. Google has released something called File Search, which I would say is RAG as a service. You upload your PDF to the cloud, it gives you a URL, and you can start chatting with that PDF. You don’t have to worry about the embeddings or vectors or ChromaDB or any of that friction of doing it manually. The other thing that’s really exciting is that they’ve just introduced, call it 10 days ago, multimodal RAG, where I can use video as a source, like a YouTube video, and it can take the data from the video stream and put it into the vector store. So, this recording that we’re having, we could upload it with this new multimodal RAG and chat with the video and audio stream. Okay, model context protocol.
John Capobianco • 20:26
Who’s heard of it but sort of doesn’t know anything about it? Like, you’ve heard the term, but you’re kind of like, what? You’re not really there yet? Let’s try to clarify this. So, everyone in the room understands protocol, right? SMTP is for what? Email. HTTP, web traffic.
John Capobianco • 20:48
FTP, file transfer protocol. Like, they’re all very self-explanatory, right? You ever try to explain a protocol to a normie and you just don’t know why they don’t get it? It’s literally called file transfer protocol. All it does is transfer files. Now say model context protocol backwards: it’s a protocol that provides context to a model.
John Capobianco • 21:09
That is it. It’s JSON-RPC 2.0 under the hood. It’s a transport mechanism. It has certain built-in things like discovery, tools discovery. So it’s client-server, like most of our protocols are. Web browser, web server. Email client, email server, right?
John Capobianco • 21:27
And like these other protocols, do you have to use Hotmail to send email? Right. Like with these protocols, I don’t have to have a Cisco phone to use VoIP. That’s the benefit of the protocol. It’s for humanity. It’s socialized. So I can use MCPs in Claude Desktop, Antigravity, Cursor, Gemini CLI, my own bespoke agent that I write from scratch, LangChain, LangGraph, the Agent Development Kit.
John Capobianco • 21:59
It’s a protocol. It’s for humanity. And all it does is provide the context. Take my pyATS MCP. There’s a tool, and we decorate it as @mcp.tool in Python: pyATS run show command. And that tool returns the JSON from the parsed show command.
John Capobianco • 22:19
Now, when I connect my Claude Desktop to my pyATS MCP server, it says, here’s the 10 tools that I have. Run show commands, run show run, run show logging, do config. The server advertises these tools to the client, much like DHCP. It’s kind of similar to DHCP, where a client just gets an IP address, right? And you just have a pool of addresses. Well, instead, your pool is going to be a pool of tools. And your client’s going to say, oh, suddenly I can open up issues on GitHub through the GitHub issues MCP tool in the GitHub MCP server.
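A minimal sketch of a server like that, using the official MCP Python SDK; the pyATS call assumes a testbed.yaml describing your devices sits next to the script, and the tool shape is illustrative:

```python
from genie.testbed import load  # pyATS
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("pyats-demo")

@mcp.tool()
def run_show_command(device_name: str, command: str) -> dict:
    """Run a show command on a device and return parsed, structured JSON."""
    testbed = load("testbed.yaml")          # assumed device inventory file
    device = testbed.devices[device_name]
    device.connect(log_stdout=False)
    parsed = device.parse(command)          # keys and values, not raw text
    device.disconnect()
    return parsed

if __name__ == "__main__":
    # Clients such as Claude Desktop discover this tool automatically
    # once the server is registered in their MCP config.
    mcp.run()
```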
John Capobianco • 22:56
Now, they’re much like USB keys. You just snap them into your agent. And I’m saying agent broadly, meaning Claude Desktop, Gemini CLI, all these things we’ve talked about. I’ve got 10 minutes left. So on agents, I want to be very clear. Let me go a little bit further here. And if you have questions, make note of your questions.
John Capobianco • 23:17
I’d like questions at the end. This is very much what it’s starting to look like. Your artificial intelligence agent, either one you’ve written or a commercial one that you’re using, or Copilot or whatever, and MCPs that are either local or remote, and you just snap them in, and they’re composable. So say this is MCP for NetBox, pyATS, ServiceNow, Itential. Please go into Itential and find the workflow to provision a new switch. Create a new ServiceNow ticket to start this workflow. Go to NetBox and make sure you have an IP address from a pool that’s approved.
John Capobianco • 23:54
And use pyATS to push the config to the new device. Enter. That’s it. The LLM will take that context window, realize the tools that it has, and do everything that I just said in a couple of seconds, right? It might even say, do you want me to send you a Slack message? You didn’t mention that. Oh yeah, sure, add Slack to it, right?
John Capobianco • 24:16
Because I have a Slack MCP that I’ve snapped in. I just wrote a blog for Itential: there are no fewer than 56 infrastructure MCPs that I would consider valuable. 56. In terms of source of truth, let’s just pick source of truth alone: NetBox, Nautobot, InfraHub all have MCPs. There’s no reason not to have a source of truth in the modern world.
John Capobianco • 24:46
Because you can just say, there’s a CSV spreadsheet in the local folder. Please populate InfraHub with my sites. Enter. MCP takes care of the rest of it. Now let’s talk about the agents here. Has anyone heard of NetClaw? A couple of people, that is so wild.
John Capobianco • 25:09
That is really wild. So when OpenClaw came out, now the best description I’ve heard of OpenClaw is “Siri, if it worked.” But I’d been following this OpenClaw saga of this developer who releases something called Clawd, with a W, so Anthropic makes him take it down. Then he re-releases it as OpenClaw. It’s got more stars on GitHub than Linux. Jensen Huang from NVIDIA mentions that OpenClaw might be the most important thing since the operating system. So of course, Mr.
John Capobianco • 25:43
Hype, I gotta get on this train, I gotta try it out, right? Gotta be the first to do it. I built one called NetClaw. But again, like everything else, my premise was simple. Could I attach pyATS as skills? So we’re drifting away from MCP a little bit. OpenClaw supports MCPs, but it’s more about markdown skills.
John Capobianco • 26:06
Everyone read Markdown? Has everyone worked with a Markdown file? It’s like HTML, but easier. It’s human-readable: tables, asterisks, and keys. So I made a pyATS skill. I was able to talk through Slack or Discord or any of those channels that OpenClaw allows, talk to my agent that could talk to my network through my pyATS skill. It’s only grown over time.
John Capobianco • 26:31
It has almost 500 stars, over 100 forks. People are asking how to put it into production. I’m actually going to be adding some security enhancements to it. Please don’t use it in production. Please, please don’t. Please keep it in the lab for now. And why I say that is not because of my part of it.
John Capobianco • 26:48
I can’t speak for the OpenClaw part of it. I don’t know if there’s a problem with OpenClaw, or whether it’s secure enough. I know when you install it, the very first thing OpenClaw says is this is not for production, this is for personal use. You have to press yes and accept it. So until that changes, I have a similar stance on NetClaw: I think it’s good for personal use, learning, labs. But it’s probably got a ways to go before it reaches production. I think everyone in this room can do it.
John Capobianco • 27:19
Spanning tree is a lot harder to learn than an AI agent. Similar to network automation. If network automation is your goal and you haven’t automated your network yet, now is your time, because you can describe your intent in open human language and get the code back. Last brand new thing, since there’s always something new: has anyone heard of spec-driven development, SDD? Couple hands, sort of new on your radar. Has anyone ever heard of test-driven development, TDD? Okay, awesome.
John Capobianco • 27:51
So TDD, we take a failed test. We always start with a failed test. It’s interesting. The test fails on purpose. We write the most minimal amount of code possible to make that test pass. Or, for us in the network world, we make the most minimal change to the network state possible to make it pass. Right?
John Capobianco • 28:11
I’m testing for default routes. I don’t have a default route. The test should fail. I add the default route. Now the test passes, with the most minimal amount of code. But we iterate. Now with spec-driven development, one, it’s all markdown. Not all, the majority of it is Markdown files.
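Sketched as a pytest-style test, the default-route example looks like this; get_routing_table is a hypothetical stand-in for your own pyATS parse call:

```python
# TDD sketch: the test fails on purpose until the default route exists,
# you make the minimal change to the network, and re-run until it passes.
def get_routing_table() -> dict:
    # Hypothetical helper: wire this to something like
    # device.parse("show ip route") from a connected pyATS device.
    raise NotImplementedError("connect this to your network")

def test_default_route_exists():
    routes = get_routing_table()
    assert "0.0.0.0/0" in routes, "No default route: add it, then re-run"
```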
John Capobianco • 28:28
So GitHub has something called Spec Kit. And this will all be available after the session. You don’t have to memorize all this. But you install Spec Kit into Claude Code. Now, Claude Code has special slash Spec Kit commands, the first of which is going to be constitution. And you make a constitution for the project, which is much like the American Constitution. It’s guardrails, governance, real high-level plans.
John Capobianco • 28:56
Maybe this should be a JavaScript project and not a Python project. Maybe you’re opinionated about the database or back end. Real high-level stuff in the constitution. Then you actually build specs, specifications. Who here has ever done a requirement for the network? Requirements are all we do in our life, right?
John Capobianco • 29:16
Turning requirements into configs, right? Well, now we’re taking those requirements and turning them into specs. And it’s all Markdown. Anyway, a spec is the user stories. If anyone’s ever done any agile development: user stories, functional requirements, acceptance criteria, failure criteria. It’s all human-readable. You go through these steps.
John Capobianco • 29:38
Now, there’s a lot of foreplay here. It’s not as instantly gratifying as vibe coding, but it’s kind of vibe coding matured, right? At the end of it, you do a Spec Kit implement. That implement will generate your Ansible playbook, your pyATS job, your whatever. So if you’re interested in pyATS, if you’re interested in AI, right? Add Spec Kit to your Claude Code. This will be my final point, and then we’ll get into questions.
John Capobianco • 30:02
Add Spec Kit into your Claude Code and try spec-driven development on a pyATS project. Maybe something that gathers interface data and tests the interfaces like we talked about. That’ll be my challenge to you. I’m on LinkedIn. Ask me if you need any help. I think it’s all going to click. I think that will be the zero-to-one bit moment in your head if you’re still waiting to find the spark to get excited about AI.
John Capobianco • 30:26
And with that, I’ll leave a few minutes for questions. So thank you. Any questions at all? Come on, like, seriously, you got me here right now. Don’t be shy. And also, this is the VibeOps forum. Oh, the... okay.
John Capobianco • 30:46
Sorry, the barcode didn’t work.
Announcer • 30:48
Yeah, GitHub. Sorry about that.
John Capobianco • 30:49
We’ll get this out. Don’t worry about it. VibeOps Forum, Slack room: beginners, enthusiasts, anyone with AI work you want to share. It’s become a great source of my news, how I keep up on AI. There’s a few people in this room that are participants and contributors to the space. If you want to come out of your intellectual AI closet, so to speak, because you’ve been embarrassed or shy, or people have called your work slop, this truly is an open, inclusive space for us to figure it out together, right? I don’t claim to be an expert in this.
John Capobianco • 31:20
I really don’t. I’m enthusiastic. I’m excited about it. I’ve been doing it maybe a little longer than most, so I feel I have a responsibility to try to lead people towards this new solution. But I hope to see you in the room, and I’ll be around at the social event. So make sure you drop by and say hi. Thanks.