Twice in one day last week, I found myself asking a coworker the same question: “Can you give me your prompt?”
It stopped me in my tracks. Not because it was unusual – but because it felt so familiar. Years ago, early in my career, we had spreadsheets and repos. Everyone contributed. If someone automated a task, you’d ask them for their script. It was how we learned, how we got better, how we stayed efficient.
Now? I’m asking for prompts. Same energy, different tool.
The Stack Keeps Growing
If you’ve been in IT long enough, you’ve watched the essential skills list grow like a never-ending backlog. First it was Linux and command-line basics – the foundation everyone needed. Then came the scripting era: server admins learned Perl, Windows admins picked up PowerShell, and network engineers eventually made Python their thing.
Each generation had “the thing” you needed to learn to stay relevant. The pattern was always the same: initial resistance, then adoption, then it became table stakes. The new skills didn’t replace the old ones – they just added another layer to the stack. You still needed to know the fundamentals; you just had more tools in the toolbox.
I came from a software background – in college I had to do a fair bit of C++, Java, Matlab (yes, Matlab), and Python. When I started my networking career, the CLI dominated everything, and I honestly never thought I’d need to code again. Then reality set in. Creating reports, templatizing configs, building deliverable systems – it became clear pretty quickly that I was way too lazy to do all of that manually. Scripting wasn’t optional; it was survival.
Prompting: The New Universal Skill
AI is everywhere now. Not in the hyped-up, sci-fi sense – in the practical, everyday work sense. And if AI is the engine, prompting is the interface. It’s how you tell it what to do, how you get results, how you make it useful.
Here’s the thing: I was terrible at this when I started my AI journey. Like, embarrassingly bad. I’d get garbage results constantly, and I couldn’t figure out why. I didn’t really know what I was doing – I was just typing questions into one long, meandering session with no specific topic and hoping for the best.
That’s when it clicked: prompting isn’t just “asking questions nicely.” It’s a skill.
Just like writing a good Python script or building a solid Ansible playbook, there’s a right way and a wrong way to do it. The difference between a good prompt and a bad one can be the difference between useful output and complete nonsense.
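To make that difference concrete, here’s an illustrative comparison (my own example, not from any official prompting guide): the same ask written as a throwaway question versus a structured prompt that states a role, context, constraints, and an expected output format.

```python
# Illustrative only: the scenario and section labels are made up for this
# example, not a prescribed template.

bad_prompt = "why is my bgp broken"

good_prompt = """\
Role: You are a senior network engineer reviewing a BGP issue.
Context: eBGP peering between edge-rtr-01 (AS 65001) and the ISP (AS 174)
flaps every few minutes; logs show hold timer expirations.
Task: List the three most likely causes, most likely first.
Constraints: Cisco IOS-XE only; do not suggest a reboot.
Output: A numbered list, one sentence per cause.
"""

# The structured version gives the model the same things you'd give a
# colleague: who to be, what's happening, what you want, and in what shape.
for section in ("Role:", "Context:", "Task:", "Constraints:", "Output:"):
    assert section in good_prompt
```

The exact labels don’t matter; what matters is that the second prompt leaves far less for the model to guess.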
Context Engineering: The Deeper Layer
But prompting is really just the entry point. Once you get past the basics, you realize it’s not just about the prompt itself – it’s about context engineering. And context is made up of multiple components:
Your prompt is just one piece. It’s your specific instructions, the task at hand, and whatever context has already accumulated in the session.
The system prompt is the agent’s configuration – its personality, its guardrails, how it approaches problems.
Tools and integrations determine what the AI can actually do. Can it access your documentation? Run commands? Query APIs? The same prompt with different tools available produces completely different results.
RAG (Retrieval Augmented Generation) is about giving the AI access to your specific knowledge base – your internal docs, your runbooks, your historical data. Instead of relying solely on the AI’s training, you’re augmenting it with your organization’s specific context.
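One way to see how these pieces fit together is a minimal sketch of how they combine into a single model request. The function and field names below are illustrative, not any specific vendor’s API: the point is that the system prompt, retrieved documents, tool list, and your prompt are separate inputs shaping the same call.

```python
# Hypothetical sketch: assemble the full context an LLM actually sees.
# Names, roles, and the request shape are illustrative assumptions.

def build_context(system_prompt, retrieved_docs, user_prompt):
    """Combine system prompt, RAG results, and the user's ask into messages."""
    messages = [{"role": "system", "content": system_prompt}]
    if retrieved_docs:
        # RAG: inject organization-specific knowledge alongside the task
        reference = "\n\n".join(retrieved_docs)
        messages.append({
            "role": "user",
            "content": f"Reference material:\n{reference}\n\nTask: {user_prompt}",
        })
    else:
        messages.append({"role": "user", "content": user_prompt})
    return messages

request = {
    "model": "some-model",  # hypothetical model name
    "messages": build_context(
        "You are a cautious network operations assistant.",
        ["Runbook: check BGP neighbor state before any config change."],
        "Draft a change plan to update the edge router ACLs.",
    ),
    # Tools decide what the AI can *do*, independent of what it is told
    "tools": ["get_device_facts", "run_show_command"],
}
```

Swap out any one of those inputs – the system prompt, the retrieved docs, the tool list – and the same user prompt can produce a very different answer.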
Working on itential-mcp, we learned this lesson the hard way. One of the first things we discovered was that adding too many tools confuses LLMs. You’d think more capabilities would be better, but it’s not that simple. The AI can get overwhelmed, choosing the wrong tool or struggling to decide between similar options. We realized early on that we needed to design for persona-based deployment – using tags, custom tooling configurations, and careful curation of what tools were available in different contexts. It’s not just about what the AI can do; it’s about giving it the right subset of capabilities for the specific job.
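The tag-based curation idea can be sketched in a few lines. This is a simplified illustration of the concept, not the actual itential-mcp implementation: each tool carries tags, each persona defines the tags it’s allowed, and the AI only ever sees the tools whose tags fall entirely within that set.

```python
# Hypothetical sketch of persona-based tool curation. Tool names, tags,
# and personas are invented for illustration.

TOOLS = {
    "get_device_facts":    {"tags": {"read", "network"}},
    "run_compliance_scan": {"tags": {"read", "audit"}},
    "push_config_change":  {"tags": {"write", "network"}},
    "delete_device":       {"tags": {"write", "admin"}},
}

PERSONAS = {
    "auditor":  {"read", "audit"},            # read-only, audit-focused
    "operator": {"read", "network", "write"}, # can touch the network
}

def tools_for(persona):
    """Expose only tools whose every tag is permitted for this persona."""
    allowed = PERSONAS[persona]
    return sorted(
        name for name, meta in TOOLS.items()
        if meta["tags"] <= allowed  # subset check: all tags must be allowed
    )

print(tools_for("auditor"))   # -> ['run_compliance_scan']
print(tools_for("operator"))  # -> ['get_device_facts', 'push_config_change']
```

The auditor never even sees `push_config_change`, so the model can’t pick it by mistake – a smaller, curated tool list does the guardrail work before the prompt is ever sent.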
Key Insight
The same prompt with different context can produce wildly different results. Understanding context engineering is like understanding when to use a bash script versus Python versus an Itential Platform workflow. They’re all solving problems, but the right approach depends on what you’re trying to accomplish and what resources you have available.
Don’t Dismiss the New Stack
Look, I get it. Another skill to learn. Another thing to add to the pile. The stack keeps growing, and it can feel overwhelming.
But here’s the reality: this is the job. It’s always been the job. Skills are additive in IT. You don’t stop needing Linux knowledge just because you learned Python. You don’t abandon scripting because you’re using Ansible. Each layer builds on the last.
The difference is timing. Early adopters of Python in network engineering had a significant advantage. They were automating while others were still doing everything manually. The same is true now with AI and prompting. The people who develop these skills early – who understand not just how to write a prompt, but how to engineer context – are going to have an edge.
Don’t be dismissive of these new skill demands. I’ve seen it before – the “I don’t need to learn that, I’m not a programmer” attitude when Python started becoming essential for network engineers, or when infrastructure-as-code first emerged. Those skills eventually came back around. The people who waited had to play catch-up.
This time? Prompting and context engineering could be what makes the difference in your career.
The fundamentals haven’t changed: we’re still automating, still solving problems, still trying to work more efficiently. The interface has just evolved.
And apparently, instead of “give me your script,” we’re now saying “give me your prompt.”
Same energy. Different tool.
See This in Action
Watch me prompt my way from scripts to a usable, governed service in minutes here.