Rich Martin • 00:02
Hello, everyone. Welcome to another Itential webinar. And today’s topic is how to orchestrate and productize network automations as services. And that’s a mouthful. My name is Rich Martin, Director of Technical Marketing, and I am joined by my esteemed colleague, Dan Sullivan. Dan, tell us a little bit about yourself.
Dan Sullivan • 00:20
Thanks, Rich. I’m a solutions architect here at Itential. I’ve been here about five years working on pre-sales with a lot of our large enterprise customers.
Rich Martin • 00:32
Thank you, Dan. Now let’s get back to the topic at hand, because like I said, this is a mouthful, but we want to break it down into a consumable form for everyone. So really what we’re talking about here is maybe, for some folks, a little bit out in the future, but what we want to do is map a trajectory for everyone so that they can see that if they’re looking at delivering services as products, not only are there very specific stages that are important to get there, but also how those stages interact with one another, and of course, because this is an Itential webinar, how Itential can help you get there using the different parts of our platform. So in this particular demonstration, we want to achieve a product, but we first have to have this idea of a product mindset. Really, if you’ve tuned into Itential for any amount of time, you know everything starts with automation. Automation in today’s world is really about looking at tasks that most of the time practitioners like network engineers are trying to solve through some automation framework or automation tooling. It could be Ansible, it could be Terraform, OpenTofu, it could be Python, right? But they’re trying to build automations to accomplish tasks, to make their jobs more efficient, to remove backlog, those kinds of things. And then, on top of that, the next piece of that is orchestration.
Rich Martin • 02:03
And in this demonstration today, we’re going to start off with orchestration. In fact, we’ll talk a little bit about that because we want to show how orchestration leverages all of those automations together, which finally takes us to the ability to do productization. So in this case, we’re looking at all of these automations that can do particular tasks, building things, creating things, looking at things in different network infrastructure domains; orchestration, tying them together; and then working with somebody who is now more like a product manager, and that’ll be Dan here, so that we can work together closely. As the orchestration architect in this scenario, I can leverage those automations, and we’ll look at some of the challenges involved in each of these roles and how we can accomplish these things together. We’ll start off in my role on the orchestration side. In this case, we’ll have leveraged a bunch of automations that have been built by domain experts, automations that have been created using the tool sets that are important and relevant for folks like cloud engineering teams.
Rich Martin • 03:13
Maybe they’re using OpenTofu or Terraform. Data center teams may be leveraging Ansible playbooks or Python scripts, and even security teams are doing the same, leveraging those types of tooling in order to automate their respective domains. As an orchestration architect, I am now going to take a look at all of those things that are exposed to us. In our platform, obviously, building an automation and exposing it to an orchestration framework or orchestration tooling doesn’t come free. But we’re going to fast-forward past that in our particular environment, because we’ve done quite a few previous demonstrations of this, and you’re welcome to go back and take a look at how you can take an automation using the tools of your choice and expose it into an environment where we can put APIs on top of it and leverage it at an orchestration layer with our Itential automation services and automation gateway. But at this point, I want to insert myself as the orchestration engineer. I’ve got a toolbox of a lot of different automations, and I’m thinking in terms of how do I stitch all of these things together in order to create a service?
Rich Martin • 04:21
Sometimes it’s not just the automation and the infrastructure that’s top of mind for me. So in this role as the orchestration engineer, this is what I’m really looking at. We’ve laid this out in a more organized way, but these are the things that I have to look at. If I’m orchestrating a service, in this particular case, I could be leveraging resources that connect or work together from SD-WAN, firewall, and cloud. I might be leveraging, like I said, automations that have been exposed by those teams, or there may be controllers involved where I have direct API access to those particular systems. And it’s my responsibility to understand what’s available to me, both from exposed automations, the network, infrastructure, and cloud systems that I have access to and might be using, as well as all the other systems in our IT ecosystem that might be relevant in order to build an orchestrated process. Because now we’re going from tasks to processes.
Rich Martin • 05:26
And it’s not just stringing one automation to another automation. It’s really thinking about what other systems do I need to integrate with? Do I need to integrate with Git? Because that’s a tool that’s being used by our development teams, and it holds a lot of information that is relevant to creating a service. Or are there monitoring systems? Or are there ticketing systems and change management systems that I have to integrate with?
Rich Martin • 05:51
So in the role of this orchestration engineer or orchestration architect, I have to not only have access to this, but be aware of what it is and have a methodology to use it all together. From my perspective, going from automation to orchestration is that next hop. I’m going to work very closely with those folks who are building automations, but at the same time, my world’s a little bit bigger, a little bit broader, because ultimately, I’m going to have to work with somebody like Dan on the product side, who will leverage my work just the way I’m leveraging the automation team’s work, to help Dan out in his role as a product manager. Dan, as we go to this next stage, let me talk really quickly here, and then we’ll talk about how we interact with one another. In the Itential platform, what I really want to show you in my section of this demonstration is how I would go about quickly iterating and stitching these automations together; how I would normalize all of the things that I need to use as part of this process into an automated workflow that lets me use automations and integrations to those different systems; and how I would build a methodology where I can not only run one thing after another, but intelligently build some logic into it, extracting and manipulating the data between one API call to a system, or a call to an automation that’s been built, so that I can leverage that information in subsequent API calls or tasks on these workflows.
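The hand-off Rich describes, running one automation, extracting a field or two from its response, and feeding that into the next call with some logic in between, can be sketched in plain Python. The function names and payload fields below are hypothetical stand-ins, not real platform APIs:

```python
import json

# Hypothetical stand-ins for exposed automations. In the platform these
# would be HTTP calls to automation services; here they just return
# JSON-style result payloads.
def run_cloud_automation(params):
    # e.g. a cloud-team automation that creates a VPC
    return {"status": "success", "vpc_id": "vpc-123", "region": params["region"]}

def run_firewall_automation(params):
    # e.g. a security-team automation that adds a rule for that VPC
    return {"status": "success", "rule_id": "fw-42", "vpc_id": params["vpc_id"]}

# Orchestration layer: run one task, extract only the fields the next
# task needs, add workflow-level logic, then feed the data forward.
cloud_result = run_cloud_automation({"region": "us-east-1"})
if cloud_result["status"] == "success":
    fw_result = run_firewall_automation({"vpc_id": cloud_result["vpc_id"]})
    print(json.dumps(fw_result))
```

The point is the shape of the flow, not the calls themselves: each step’s output becomes curated input for the next, with branching decisions made at the orchestration layer rather than inside any single automation.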
Rich Martin • 07:20
And again, building the logic within this, because while there’s probably some logic in the automations I’m using at that layer, there needs to be an overall logic in the workflow as well. And then ultimately my job doesn’t stop there, because I have to be mindful that whatever I do from my perspective as the orchestration engineer, we’re trying to build this into something that’s a product that we can offer to our end users. In this case, that would be application developers. And I have to work with Dan because he can help bridge the gap between what I do and what really needs to be exposed to those teams. And so with that, I’m gonna switch it over to Dan and let you talk about what that means on the product side, how we productize things, and then what the role of the product manager here would be. Sure.
Dan Sullivan • 08:07
I mean, I guess first at a really high level, I appreciate that Rich is working hard on the orchestration side and he’s tying together all these different assets, but at the same time, I’m not sure how much I care about that. And I’m not sure that my users care all that much about it. So they have some different requirements, right? They want to actually be able to consume a product. In our case, and for a lot of the customers that we talk to in the enterprise, what they’re actually looking for is an API. They want a simple API: they provide some parameters and off it goes.
Dan Sullivan • 08:45
They’re not necessarily all that interested in how it’s done, whether it’s three Python scripts or Terraform, or however it’s done, but we want to offer them a way to actually define the service that they want and consume it in a way that makes sense for their environment. And in addition to that, orchestration kind of gives you some of that, but one of the other things that we’re going to talk a little bit more about is the fact that all of these orchestrations are largely stateless. A lot of our customers have requirements around these products that they want to offer. They need to track them, maybe for audit purposes. They need to understand, hey, I have a specific number of these. They want to know how much of their data is accessible and how many applications are configured to use it.
Dan Sullivan • 09:47
They need to know all of those things. And so that’s kind of what productization will do: tie all of that together and offer it in a way that’s consumable to our customers. So again, we have application developers trying to order a product, and again, they want an API. They don’t want to call 15 workflows. They want a single API, and over time, we’re going to need to track those, maybe for audit purposes again. And they’re going to require some CRUD operations. For a particular service, those can be different.
Dan Sullivan • 10:29
It’s not going to be the same for every one, and we need a system that’s flexible enough to meet those requirements. But as a product manager, that’s kind of my job: to figure that out, get the appropriate orchestrations, combine those, and offer that product to the application developers. Now, over here, on the very right, you’ll see that box that looks a lot like the work that my colleague Rich was doing. So that’s sort of the orchestration piece of this. In our case, we’re going to create a product, and our product will actually be a secure cross-connect. For today, that’s what we’ll be talking about. But what we’re going to do is define the product that we want to offer in JSON schema,
Dan Sullivan • 11:17
and we’re going to use workflows and data transformations to implement the CRUD operations. We’re actually going to publish these as APIs so that we can create and manage the product. Then later on, for auditing purposes, we’ll have a history; we’ll be able to check who did what and when, how many instances are active, what the current state is, all of those sorts of things. We can possibly troubleshoot. But now we actually have a real product, and the result is something that’s auditable and something we can report on. That’s a lot more useful than some fire-and-forget set of workflows or orchestrations, or even, going back two levels, those automations that we’ve created. They’re useful, but in and of themselves, they don’t define a product, and they don’t provide a lot of the services that we’re trying to add here.
Rich Martin • 12:18
Yeah, great point, Dan, I appreciate that. So you can see how this builds one layer upon another. We actually all need each other in order to accomplish the big goal here, which, in a very unique way, is giving the power of creating the infrastructure that the end user needs, in this case an application developer, in terms that they’re familiar with and want to use. In this case, it’s delivering all of that by API, with us tracking the state of it within Itential, but allowing them to manipulate those resources as they see fit, even to the point where they’re done with it and can just destroy or delete those resources themselves using APIs.
Dan Sullivan • 12:58
Yeah, Rich, I think it’s kind of interesting as well, because they have a little bit of an abstraction here. They don’t necessarily have anything to do with Itential, they’re just hitting an endpoint to make this all happen as well.
Rich Martin • 13:10
Yeah, and this is exactly what we’re looking at in the demo architecture for today. At the very top, we’re going to create something called the secure access cross-connect product. So think in terms of a very simple scenario. You have an application developer who has just spun up a new web server in a DMZ, and they don’t have direct access to a database server in their secure zone. Normally, what they would have to do is open up a ticket, maybe in ServiceNow. That ticket generates requests to three or four different teams, maybe cloud, security, and firewall teams, in order to do their pieces of it. They’ll get around to it when they get around to it, and eventually, when everybody gets their piece done, whether that’s two hours, two days, or two weeks later, the application developer gets their infrastructure. Hopefully it was all built correctly, and they can connect one thing to another thing.
Rich Martin • 14:00
It’s as simple as that. To your point, Dan, from their perspective: I don’t know how many teams it takes. I don’t know what infrastructure, what resources. I just wanna request the thing and get it. In this case, wouldn’t it be great if I could just use APIs, because that’s what I’m using already, and request the thing, just like I can spin up a server using APIs with the programs or platforms or tools we’re using, and spin up the infrastructure too? And that’s what we can do here. What we’ll show you today will give you that overview of how it works.
Rich Martin • 14:32
On the bottom row here is kind of the network automation piece of it. That’s where those domain specialists who are building automations for the different network domains, firewall, security, network, data center, cloud, use the tools of their choice to build automations down at that task level, exposing them to our platform through the automation gateway. The automation gateway helps them manage all of that: the execution environments, the RBAC, all of those things. So there’s a lot of value there. But at the same time, as we go up the stack here, those get exposed to the orchestration role, the orchestration architect, so that I can leverage them along with all of the other integrations to those systems that have been done in the Itential platform, and build the workflows that then help Dan accomplish his job, which is what I’m all about: helping Dan accomplish his job.
Dan Sullivan • 15:28
And don’t think I don’t appreciate it.
Rich Martin • 15:31
But this is how we build upon that stack to ultimately provide that service, that product, as a service through an API that the app developers can leverage when they want it, how they want it, and consume it the way that they want to consume it.
Dan Sullivan • 15:49
But it’s kind of unique too, because we’re actually consuming those automations without necessarily having to make the automation developer all that aware of how we’re productizing this and what we’re actually doing with it. Once those automations perform those functions, the rest of it we can take care of.
Rich Martin • 16:06
That’s right. Yeah, they’re absolutely… Like you said earlier, they don’t care what’s under the hood, how many scripts you run, what it takes. If I call that API, how long does it take to get the thing I asked for? That’s what they care about, because they need to bring this new application or service or something up, and that’s what’s critical to them.
Dan Sullivan • 16:24
A hundred percent.
Rich Martin • 16:26
So, like I said, we’ll show you how this can be done in this example. We want to be able to create a product, a service that can be tracked over time and offered via API. The key to this is the end user, the customer here, the application developer. Like anytime we are the customer, we’re blind to what’s actually going on, because it’s not our problem. We’re ordering the thing. We want the thing we want. However it needs to get done, really the critical piece is how quickly can it get done? And in this case, the secure access product is composed of both
Rich Martin • 17:05
physical and virtual infrastructure: cloud infrastructure, as well as physical infrastructure on physical routers, as well as firewall services. This is a very typical way of connecting something up. You don’t just build a connection, you build a secure connection. When we connect a web server to a database server between zones, this is a requirement for all of that. From the end user’s perspective, I just need the thing to work, and you guys need to make sure it’s up; it’s not just working so that I can ping and get traffic between my web server and my database server, it’s also their responsibility to ensure it’s secure over time as well. By leveraging the automations and the orchestrations, productizing it, and tracking it, we want to show the ability to expose all of these assets to the end user in a way that’s very transparent to them. But really, all of that hard work is being done in the platform using all of these assets that all of these teams are leveraging together.
Rich Martin • 18:08
With that, let’s go to the demo. Let’s see here. Okay, so I’ll start us off. So remember, my role here is the orchestration architect. Remember my slide: I’m sitting in the middle of a lot of stuff. Some of the stuff I have is automations that have been exposed through our automation gateway and our automation services, which I should have access to and need to leverage and run. But I also have integrations to different systems, because I’m thinking in terms of business processes that include building out the infrastructure for a particular service.
Rich Martin • 18:51
So in this case, I would have had a discussion with Dan, and he would say, hey, this is the service that the app developers are requesting, and I need to figure out the pieces, the processes, and the systems that need to be tied together in order to do that. So that’s where I start my role. Now I’m in the workflow canvas, which is part of our studio in the Itential platform, and this is where I start to build all that out. One of the cool things here is I’m gonna show you the stages that I would typically go through. We don’t have time, and we really wanna showcase the productization and the tracking piece of this, but I wanna give you a glimpse and a taste of what this would look like day to day as an orchestration architect in our platform.
Rich Martin • 19:32
So in this case, my question is, where’s all that stuff hiding? Here we have a task palette that gives me all that stuff. What is that stuff? It’s all the automation services that have been exposed. I can leverage what we call tasks, or you can think of them as workflow steps, along with integrations to different systems like GitLab here. If you look at these integrations, and if you’re familiar with the APIs, when we integrate with systems that have APIs, we leverage them at the API method level. Each one of these is an API call into a functioning GitLab system that we’re using. I can drag and drop these as necessary and start to leverage the API calls directly, just like I can drag and drop automations and other types of logic tasks.
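“Method level” here means each palette task corresponds to one REST call. As a rough illustration, here is what one such GitLab call (a project search via GitLab’s REST API) looks like assembled by hand in Python; the base URL and token are placeholders, and the request is constructed but not actually sent:

```python
import urllib.request

# Hypothetical GitLab instance and token, placeholders only.
GITLAB_URL = "https://gitlab.example.com/api/v4"
TOKEN = "glpat-example"

# One palette task maps to one API method; GET /projects is a single
# method from GitLab's REST API.
req = urllib.request.Request(
    f"{GITLAB_URL}/projects?search=network-configs",
    headers={"PRIVATE-TOKEN": TOKEN},
)
# urllib.request.urlopen(req) would execute the call; here we only show
# how a drag-and-drop task corresponds to one fully-formed request.
print(req.get_method(), req.full_url)
```

The platform hides this boilerplate behind the palette, but the granularity is the same: one task, one method, one endpoint.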
Rich Martin • 20:19
This is how I start to build my workflow out. But really and truly, I have a lot of things here, and that’s the nice part. We’ve normalized access to all of those tools, all of those integrations that I as the orchestration architect would need, into one spot, one place, and they’re all drag-and-drop elements, regardless of where they came from or how they integrated into the system. That’s really nice from my perspective as a quality-of-life thing; I can just drag and drop things. But where I really start, because this is more a question of what do I need to do first and what do I need to do second, is I like to start with just stubbing it out, almost whiteboarding it, if you will. I can do a search and get the stub task here. This is a task that’s built into the system, one of the tools, and it’s literally just a placeholder.
Rich Martin • 21:07
I know for this particular service, I need to run three automations: I need to run something from the security team to update a firewall rule, I need to run an automation from the cloud team for AWS, and I need to run an automation that’s been exposed by the data center team in order to build a new route for this service. So I can start to stub this out, right? I can just say, run AWS service here. Oops, if I can spell. I give it a name, and then I’m given a series of variables to fill out. In this case, it’s a stub task, so there’s really not a whole lot here. But in the case of an AWS call to do something with a VPC, you would see the variables matching the API call requirements that we get from an OpenAPI spec that we’ve imported into the system.
Rich Martin • 22:03
That’s how you integrate things into our system very easily, and that’s how we’re aware of what those calls are. But in this case, I can just stub something out, and then I can start to connect the dots, so to speak. We start with a start and an end, and my first step could be something as simple as, okay, I need to run an AWS service, right? And then I would start to troubleshoot and test these things as the orchestration architect, as I build this out. So I know, okay, I’m gonna do something in AWS, I’m gonna run an automation that’s been exposed as a service in our platform, and this is kind of my placeholder. I can start to iterate over this, add more stub tasks, and then eventually replace them with real calls into those systems, and start testing this over time. Now, that takes us to something like step two,
Rich Martin • 22:49
where I’ve now taken that one task that I stubbed out and replaced it with something called a child job. A child job runs another actual workflow. You’re looking at a workflow now; we can build workflows in a modular way. As an orchestration architect, this is really critical for me to understand. I should build workflows in ways that I can reuse them and not have to rebuild them again. That’s what a child job is great for: running other workflows.
Rich Martin • 23:18
Those workflows in general are modular workflows that I’ve built before. In this case, it’s running that AWS service. Now, I’ve bookended and surrounded it with two other tasks here. This is fairly normal when you start to build a workflow in our environment, because this is a call to an automation that was written by the cloud team and exposed to us through our automation gateway. Honestly, it really doesn’t matter, even to me at this level, what they used. It could have been Python, it could have been OpenTofu, it could have been Ansible.
Rich Martin • 23:53
I don’t know, because if I double-click it, I get a series of data, and I’ll show you how we run that in a minute. But if I double-click it, I see variables that have been exposed, and that’s what I really focus on: what are the variables that need to be filled out in order for this automation or this API call to run successfully? That being said, when you make an API call, not only do you need to feed data to it, you’re going to get data back, and that’s why I bookended it with this. Again, you’ll see this fairly typically, especially as we progress this workflow: we can use something called a transformation. This is a task that’s built into our platform that helps the orchestration architect extract data from different data sources, maybe JSON payloads that have come from different sources, previous API calls, so you can extract the data you need in order to pass it to a subsequent task, right?
Rich Martin • 24:46
Because the data that comes back from an API call in our platform is going to be formatted in JSON, but you may only need one or two elements from an entire JSON object, right? And so transformations give us a visual way of identifying, modeling, and extracting what we need. And these themselves are also modular in nature, so I can reuse them. I build them once and reuse them. And feel free to look back.
Rich Martin • 25:12
We’ve done some webinars just on transformations, because they’re very powerful. So I would use a transformation to extract data from something so I could give it to the child job that runs this AWS automation, and I would feed it the data it needs in the format it needs. Now, whenever some API call or some task is run, there’s going to be some sort of data output, and that’s where I’m bookending here with a query. A query allows us to take the data from a previous task and extract the pieces that we need so that we can take a look at that.
Rich Martin • 25:47
Maybe later down the road, we’ll do an evaluation on that data, because we’re looking at something like, did something successfully complete or not? And so this is where you start to build that logic into these workflows. So while the automation that I might be running has logic built into it at that task layer by the automation developer, now I get to build more logic that’s relevant to the process that I’m building at the workflow, or orchestration, layer. So I’ll show you really quickly what it looks like to execute a workflow that runs one of those automation services. Again, remember I said that we can build workflows as modular workflows? Well, we have a modular workflow built here to run a generic service. We’ve put it under a folder so that it’s easier to identify and run as a child job.
Rich Martin • 26:38
And I can do some testing from here. I can pass a payload into this workflow and let those variables be extracted and utilized by those tasks so that we can see everything running successfully, or in some cases unsuccessfully. And this is what I would iterate on as an orchestration architect. So when I clicked run, I gave it a payload. Now it’s taking me to where the job is actually executing. It’s creating an instance of the workflow and executing it with the payload that I gave it. And now on these tasks, if I double-click one, it gives me the live data that I passed to it.
Rich Martin • 27:12
So this is what I passed to it, and then the data that it gave me back as a response. The green checkmark means that the call to run that automation was successful, and here is the data that was returned. Now, remember the transformations? This is where you would use transformations to extract the data that’s relevant for your particular process. So as an orchestration engineer, these are the tasks that I’m doing, the things that I’m looking at day to day as I’m building these out.
Rich Martin • 27:42
Now, I’m going to go back to our workflow steps here. We left off at step two, which was running an automation, just one automation. Remember, in this case, it’s going to require three automations. Step three is where I’ve basically copied and pasted that same logic for one automation and updated it for the three automations I’m going to run, because all of them have the same scenario going on here: I need to feed some information to the automation before I run it, and when the automation runs, I need to query some information out of it for an evaluation later down the road.
Rich Martin • 28:19
So while this looks complicated, it’s really not. It’s exactly what I showed you in step two with the single automation execution, but now I’m running an automation from the data center team and from the firewall team as well, stringing them together, bookending them with the things that are required to pass data and reformat data successfully so each one of those automations can run. And then eventually we get to an evaluation task, and then we set up for, and this is where I would work with Dan, what is the information Dan needs for his piece of this on the product side? So this is what this starts to look like. And then the final step, step four, is where I would build some additional logic in, especially around error catching: if something fails in one of these three automations, what should we do at that point? Should we continue?
Rich Martin • 29:08
Should we alert somebody? This is where you capture the business logic of what could happen and what needs to be done at each one of these stages if something doesn’t go right, if something doesn’t provision correctly. This is where you start to layer on the logic that’s important for this process but not relevant to the automation tasks that we’re leveraging. So this is that broader view, and I need tools to continue to build those out and make those things work. At this point, having built and tested this, I can say, okay, now I’m leveraging all three of those automations, and I can create this new service. This is where I would start to have a conversation with Dan around the creation of the service, the first stage of things. But really, we need more workflows to orchestrate more than just creation of the service, because we’re putting the keys of managing this through APIs in the hands of the developer, the app developer.
Rich Martin • 30:09
What other API calls do we want to provide for them? I’ve spoken to Dan, and Dan says, you know what, we need a create, so that they can ask for the service, and we need a delete. But they’ve also requested a disable and an enable. This is where I would start working with Dan to build the additional workflows. You can see these as CRUD operations.
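The four operations Rich lists each map to a backing workflow. A minimal sketch of that routing, with illustrative workflow names rather than anything taken from a real system, might look like this:

```python
# Hypothetical mapping of product-level actions to the workflows that
# implement them. The workflow names are illustrative placeholders.
ACTION_WORKFLOWS = {
    "create":  "secure-cross-connect-create",
    "delete":  "secure-cross-connect-delete",
    "enable":  "secure-cross-connect-enable",
    "disable": "secure-cross-connect-disable",
}

def dispatch(action, params):
    """Resolve a product-level action to its backing workflow."""
    workflow = ACTION_WORKFLOWS.get(action)
    if workflow is None:
        raise ValueError(f"unsupported action: {action}")
    # In the platform this would start a job; here we just report the routing.
    return {"workflow": workflow, "params": params}

job = dispatch("disable", {"service_name": "web-to-db-01"})
print(job["workflow"])
```

The app developer only ever sees the action names; which workflow runs underneath, and how, stays an implementation detail behind the API.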
Dan Sullivan • 30:29
It looks like you got those all set for me, Rich. That looks great.
Rich Martin • 30:33
Yeah. Well, you ask and I provide. That’s the way this works around here, Dan. Excellent. With that, I’m going to let Dan share his screen and show you the next stage of this, which is working in our platform to build the stateful connection tracking, leveraging all of these workflows to actually present a product through APIs to those end users.
Dan Sullivan • 31:02
Rich has been in our project here, running through some of the workflows and the orchestrations that have been created. These are the actions that we’re going to have. This looks pretty good; it’s just what I need. But the problem is, these workflows will run, but they don’t really preserve any state, and I’m really going to need to figure out how to preserve some of this. We’re going to start there. The way that we’re going to do that is using what we call Itential Lifecycle Manager.
Dan Sullivan • 31:41
I’m just going to click into Lifecycle Manager here at first. And Lifecycle Manager is an application essentially built on top of workflows. So what we’ll do is we will walk through defining a service, and then we will map it to actually use those workflows that Rich has worked so hard on. Over here on the left, you can see that you can define a number of services. We’ve only got the one defined in our demo system here, which is the secure cross-connect service. Now, interestingly enough, if I click on this, you’ll see the model tab is highlighted. Essentially, what I’ve done here is I’ve modeled the service that I want to expose in JSON schema here.
Dan Sullivan • 32:29
I’ve got a few parameters here: the service type, the name of the service, an email contact, and even the duration of the particular service, and maybe some variables around whether I might want to renew it. So I’m going to give it a duration, but I might even allow a renewal process later on. I’ve also got some state and some status configured, so I’ll be able to know at any given time the state of a particular instance of my cross-connect service. And so I’m going to model this in JSON schema, and then I’ll be able to create individual instances of this service. Now, of course, the next thing to look at, in addition to the model, is the actions we’re going to specify.
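A service model along the lines Dan describes might look something like the following sketch. The field names (`service_type`, `duration_days`, and so on) are illustrative guesses at what’s shown on screen, not the actual schema from the demo:

```python
import json

# A sketch of a Lifecycle Manager-style service model in JSON schema.
# Field names are assumptions for illustration, not the demo's schema.
cross_connect_model = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "title": "secure-cross-connect",
    "type": "object",
    "properties": {
        "service_type": {"type": "string"},
        "name": {"type": "string"},
        "contact_email": {"type": "string", "format": "email"},
        "duration_days": {"type": "integer", "minimum": 1},
        "renew": {"type": "boolean", "default": False},
        # State and status let the platform track each instance over time.
        "state": {"type": "string", "enum": ["enabled", "disabled", "deleted"]},
        "status": {"type": "string", "enum": ["up", "down"]},
    },
    "required": ["service_type", "name", "contact_email", "duration_days"],
}

# An example instance that conforms to the model.
instance = {
    "service_type": "secure-db-access",
    "name": "app-team-1",
    "contact_email": "dan@example.com",
    "duration_days": 90,
    "renew": False,
}

# Minimal required-field check; a real system would run a full JSON
# schema validator (e.g. the jsonschema library) against the model.
missing = [f for f in cross_connect_model["required"] if f not in instance]
print(json.dumps({"valid": not missing, "missing": missing}))
```

The point of the model is that every instance you create is validated against the same contract, which is what makes the service consumable as a product.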
Dan Sullivan • 33:24
And so we’ve got the create action, delete, enable, and disable. And these map directly to those workflows that Rich was showing us. So every time you want to add a specific action, you’ll in turn create some low-level workflow that actually carries the work out. But this is kind of where you can do that. We’ve got just the four for our service, but that should be sufficient for what we’re trying to do with our cross-connect service. So now we’ve seen the model, we’ve got our actions defined, and the next thing to click on is the instances. In this particular screen, whenever instances are created, they’ll be shown here.
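The mapping Dan describes, each action name resolving to a lower-level workflow, can be sketched like this. The workflow names are made up for illustration, not the names from the demo project:

```python
# Sketch of the idea behind LCM actions: each action name maps to a
# lower-level workflow that carries out the work. Workflow names here
# are hypothetical placeholders, not the demo project's actual names.
ACTION_TO_WORKFLOW = {
    "create": "LCM - Secure DB Access Create",
    "delete": "LCM - Secure DB Access Delete",
    "enable": "LCM - Secure DB Access Enable",
    "disable": "LCM - Secure DB Access Disable",
}

def run_action(action: str, instance_name: str) -> str:
    """Resolve an action to its workflow and 'run' it (stubbed)."""
    workflow = ACTION_TO_WORKFLOW.get(action)
    if workflow is None:
        raise ValueError(f"unknown action: {action}")
    # In the real platform this would start the workflow via the API;
    # here we just report what would run.
    return f"running '{workflow}' for instance '{instance_name}'"

print(run_action("create", "app-team-1"))
```

Keeping the mapping declarative is what lets you add a new action later (say, a troubleshooting check) by writing one new workflow and registering it, without touching the existing ones.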
Dan Sullivan • 34:06
We don’t have any configured actively, so you can’t really see anything. Now, over here, what I can do is click on Show Deleted. In this case, it shows that I’ve created and deleted one of these demo instances quite a few times. But interestingly enough, what I can do is actually view it. I can look at the data that was associated with it that we are maintaining, and I also have the history for it. Even when you delete some of these cross-connect instances, Itential actually retains some historical data regarding them. That’s going to be pretty useful later on if I have to go back and do some audits around a particular service instance.
Dan Sullivan • 34:50
Now I have all of this saved here, and that’s pretty good. But again, I’m not really interested in just looking at the instances that have been deleted. Now what I’m going to do is go back to the dashboard here. We’ve seen our Lifecycle Manager, we’ve looked at the model, and now I’ll jump back into that project. What you’ll notice right at the top is a folder called Lifecycle Manager. These are the workflows that will be called directly by those actions that we’ve defined in LCM. So as soon as we do a create, it’s going to call this LCM create workflow and run that particular create action. So if I click on the create action, you’ll see that eventually it just calls into the secure dbaccess create.
Dan Sullivan • 35:43
So it’s going to call these secure dbaccess create, delete, disable, and enable child jobs. Rich has actually completed these and made them available. And for the purposes of this demo, these are all in the same project, but they could easily be referenced externally as well, so you can keep that stuff modular if you wanted to reuse it. But for today, we’ve got it all in one. And we’ll do that create access, we’ll validate whether we were successful or not, and we’ll even send a notification out. So we’ll send an email notification out at the end once we provision the service.
Dan Sullivan • 36:25
So if I go down and look quickly at the secure dbaccess create, well, it’s doing exactly what Rich was talking about before: creating the connection, adding some routes, and then finally a security rule. Then we’re going to evaluate the results and make sure everything worked as we thought it would. So now we’ve got our workflows set up. We have our product defined. What we actually want to do is expose this as an API, so that someone else can consume all of this via an API call. For today, we’re going to do that via Postman.
Dan Sullivan • 37:11
So we have some Postman calls set up to create and do some of the other actions that we’ve specified. We also have our web app set up over here. I can actually validate whether things are going well or not by just trying to log in. Trying to log in, in this case, it didn’t like that too much, so we’ll try it again. And you’ll see that it’s basically going to tell us that the database is unavailable. In other words, it’s not going to let us log in if it can’t get to that database; it needs that secure database access. Maybe there’s customer data in that database or something else that the application needs.
Dan Sullivan • 37:56
And right now that service is unprovisioned, so we can’t actually access it. So what we can do, just from Postman alone, I’ve got some variables defined so that I don’t have to type this all in, but I’m specifying my service type, which is secure DB access. Now, I actually have some of the details of the secure DB access service, what it takes to actually access my secure DB server, defined in a file in Git. So I’m actually going to go to Git and get that data. I could have forced the user to type it in, but the reality is they just need to know the service type. I put myself as the contact with my email. Other than that, I’ve got some variable names predefined.
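The Postman request Dan sends amounts to an authenticated POST with a small JSON body. A rough sketch of that call is below; the URL path and payload keys are placeholders for illustration, not the platform’s actual endpoint, so the request is built but deliberately not sent:

```python
import json
import urllib.request

# Sketch of the kind of API call the Postman request makes. The URL
# path and payload keys are assumptions; consult the platform's API
# documentation for the real endpoint, body shape, and authentication.
payload = {
    "serviceType": "secure-db-access",
    "contactEmail": "dan@example.com",
    "durationDays": 90,
    "renew": False,
}

req = urllib.request.Request(
    "https://iap.example.com/lifecycle-manager/instances",  # hypothetical
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# We only build the request here, since the endpoint is a placeholder.
print(req.method, req.full_url)
```

The key point is that the app developer never touches the workflows themselves; they only know the service type and a few variables, and the platform does the rest.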
Dan Sullivan • 38:41
So I’ll just send this, and we should see something happen within the IAP in just a second. Let’s see. It says it ran the create action. Now I can go back to Lifecycle Manager and see what happened with the cross-connect service. I’m going to take a look at the instances, and you’ll see that the last time we looked at this, we didn’t have anything provisioned. If we view this, you’ll see that it’s displaying the actions we can run. We could actually run them from here if we wanted.
Dan Sullivan • 39:21
History, it says it finished that create operation. If I look at the properties, it’s saving some connection information from the Cloud. It’s basically saying that the service is up and it’s enabled. We provisioned it saying it’s going to last for 90 days, and we didn’t set the renew flag here. Now we’ve got our cross-connect service configured. If we go back to our portal, hopefully we get a slightly better result here. Let’s see. It looks like we were able to log in.
Dan Sullivan • 39:59
Our application is somewhat disappointing. We put a lot of effort into the login page, but not the portal itself. Anyways, I’ll log out again. Now, what I could do is decide, hey, we think something’s going on, maybe there was a breach to that application, and we want to disable that instance, so I can just hit that here. You’ll notice that all of these API calls are acting on a single instance at a time. Later on in our next release, you’ll see that we’ll actually have bulk actions, so you could invoke the same action on many instances at the same time if you needed to.
Dan Sullivan • 40:39
You’ll also notice, I didn’t really click on this before, that every time our cross-connect service runs, it sends us some emails. Here’s the first one. Here’s the second one saying it just disabled it. This was the original one that told us, hey, it successfully provisioned it. And now if I go back to the portal, it shouldn’t let me log in anymore because the database is unavailable. So let’s see what happens here. Yep, so it says again, hey, the database is unavailable, you can’t log in.
Dan Sullivan • 41:16
And if I go and look at the lifecycle here, you’ll see again that the status is down and the state is disabled. So this data is persistent. We’re running those workflows, but they feed data back into LCM, and we save the instance properties and make them persistent. So the model and the variables that we define will all be persistent, queryable, and accessible via an API call as well. So now we’re able to see that we’ve created this product, we’re able to access it via API, and so far so good. Of course, I’ll need to enable that again. So here’s the enable.
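The state-and-status bookkeeping Dan describes, where each action’s workflow feeds its result back into a persistent, auditable instance record, can be sketched as a tiny state machine. This is an illustration of the concept, not Itential’s implementation:

```python
from dataclasses import dataclass, field

# Conceptual sketch: each LCM action updates the instance's state and
# status and appends to its history, which is what makes the record
# queryable and auditable later. Not the platform's actual data model.
@dataclass
class ServiceInstance:
    name: str
    state: str = "enabled"
    status: str = "up"
    history: list = field(default_factory=list)

    def _apply(self, action: str, state: str, status: str) -> None:
        self.state, self.status = state, status
        self.history.append(action)  # keep an auditable trail

    def disable(self) -> None:
        self._apply("disable", "disabled", "down")

    def enable(self) -> None:
        self._apply("enable", "enabled", "up")

    def delete(self) -> None:
        self._apply("delete", "deleted", "down")

inst = ServiceInstance("app-team-1")
inst.disable()
inst.enable()
print(inst.state, inst.status, inst.history)
```

Because the history survives each transition, you can later answer audit questions like "who disabled this and when" without re-running anything.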
Dan Sullivan • 42:01
So we can send that one along, and hopefully good things happen and it comes back. If I go back to Lifecycle Manager, you’ll see that enable was the last action. And if you look at the history, you’ll see the create, disable, and enable operations. So we’re actually saving those. If you just heard the ding again, we got another email saying, hey, we just enabled it. So it sends an email every time it touches one of these, to whoever requested the service. That’s something we just added in for the purposes of the demo.
Dan Sullivan • 42:35
So now we have a pretty good historical record of what’s happening. I’ve got a reasonable product here. And as I offer this product to multiple application groups, I’ll have an instance for each one of those, and I’ll be able to track them. If I need to take one down, I can. If I need to enable or disable it, that’s also a possibility. And there may be other actions I choose to add in the future. Maybe I’ll add a troubleshooting action to go off and validate that the service is still configured the way that I want it.
Dan Sullivan • 43:07
So there’s a bunch of different actions I might add, and I can expose all of those via an API as well. So at this last point here, I’ve enabled it, and I should be able to log in. Did that enable work correctly? Oh, it didn’t like that, so let me log in again. That’s great. So that worked out.
Dan Sullivan • 43:35
Then lastly, of course, I can delete the instance here if I want. So if I just send a delete, I should see in a second that it updates and tells me it’s deleting it. That’s the last operation. If I look at the properties, right now it says it’s up and enabled. We should see that change shortly once it gets deleted. You’ll see that it’s down and deleted, and then the instance is gone, because we’re not showing deleted instances. That’s the productization piece in a nutshell.
Dan Sullivan • 44:06
So we’ve effectively taken automations that were created by our automation team. Rich has been able to orchestrate them, and now we’re just leveraging those orchestrations that he’s created, and we’ve actually defined a product on top of that. That was up to us as to how we defined our product. We have a lot of flexibility there, and operators and users of Lifecycle Manager can sort of define the products and the way that they want to interact with them just using our JSON schema.
Rich Martin • 44:35
Yeah, that’s pretty powerful stuff, Dan. Thanks for running us through that demo. Thinking of myself as the orchestration architect here, you mentioned something about adding new stuff. That sounds like more work for me. However, you can see it’s the same process as we iterate on this. In fact, we can start to reuse a lot of the same workflows we’ve already built in order to add new things, new modifications to those services that you and the end user have decided they need. Hey, can you do something to upgrade one of those services, to modify a service in some way?
Rich Martin • 45:14
We would continue to work and iterate on that. I think that’s really cool because it really puts everything in the hands of the developers through APIs. That’s really what you were showing when you were running the Postman calls back into this: the app developers are now requesting and managing all of this stuff themselves. So this frees up a tremendous amount of time for operations and engineering teams that would normally have to be called in to do some of this themselves. Even if they’re automating some of these tasks, they would still have to be called in to run them in response to a request from an app developer. And really this is like, no, you guys are in charge of this.
Rich Martin • 45:54
This is your infrastructure. We’re just tracking it. We can troubleshoot it, but you manage it yourself.
Dan Sullivan • 46:00
Really, at the end of the day, I think what we’re trying to do is keep pace with the application teams, right? Yeah. They’re able to turn things around and deploy new features and new applications, and we really have to make sure the network is keeping pace. Absolutely. A lot of the customers we talk to say that’s problematic for them right now, that they’re not able to keep up. They have very bespoke automations, and they’re trying to do a ticket-based solution with people, with manual tasks interspersed in the middle of it.
Dan Sullivan • 46:31
It just takes too much time. We’re just not keeping up. So this is a way to do that and accelerate that process.
Rich Martin • 46:37
Well, awesome. Well, I think with that, we’ll wrap it up. I wanna say thank you so much, Dan, for walking us through this. I mean, you did the lion’s share of that. And honestly, this is the most exciting piece. I kind of did an overview of the orchestration. But if you’re interested in that, because that’s a lot of our bread and butter, is helping to orchestrate all of the systems, all of the automations that you have in your environment, feel free to reach out to us or check out another video where we really deep dive into that.
Rich Martin • 47:06
And I’m sure Dan and I have done that together as well. So if you wanna search for Dan and I, you can find us doing orchestration deep dives. Absolutely. But thanks again, Dan. We wanna thank everybody for joining us and feel free to reach out to us if you have any questions or if you’d like more information about Itential. All right. Thanks, Rich.
Rich Martin • 47:26
Bye-bye.