Rich Martin • 00:03
Hello everyone, welcome to another Itential webinar. Today, we will be talking about stateful orchestration with the Itential Automation Platform. As always, my name is Rich Martin, Director of Technical Marketing, and I'm no longer a solo talking head today. I'm actually joined by somebody I'm excited to co-present with, Peter Sprygada. Peter, give us a little something about yourself.
Peter Sprygada • 00:30
Thanks, Rich. Hi, folks. Peter Sprygada. I run all of product management here at Itential, and I'm looking forward to taking you through our first foray into stateful orchestration with the Itential Automation Platform.
Rich Martin • 00:40
Awesome. For everybody, thanks again for joining us. As an overview, this is what we want to cover today. Let’s take a look and have a discussion around what actually is Stateful Orchestration. We’re going to talk about Lifecycle Manager, which is our application that enables Stateful Orchestration and your ability to track the state of different services inside of the platform and inside of your organization. The components of that, including working with resource models and actions, talking about real-world challenges that you can use Stateful Orchestration to solve for, and then eventually talking about managing service state over time, which is the third dimension or maybe even fourth dimension to all of this. With that, before we get too far into it, let’s start and set some context. Peter, what is stateful orchestration and what’s the difference between that and stateless orchestration?
Peter Sprygada • 01:27
Yeah, I think this is really important, because the reality is that the Itential Automation Platform has always been, at its very core, a stateless orchestration platform, and that is not changing in any way, shape, or form, even though we've brought some stateful capabilities with the introduction of Lifecycle Manager, which we'll get into in the next slide. But it really is a scenario where customers have very specific needs that tend to map very well to a stateful approach to how they want to orchestrate their network services. When we step back, though, and look at the differences between them, the way you approach orchestration in a stateful way is quite a bit different from taking the stateless approach. Typically, when you go down the path of stateful orchestration, you're going to spend a lot of time upfront really thinking through what is the data that I ultimately need to coalesce and create to realize my service on the infrastructure. What we find is that in a stateful orchestration approach, the data model becomes very tightly coupled to the service, or the instances of services, running in the infrastructure.
Peter Sprygada • 02:55
Of course, that is contrasted to in the more stateless approach where we don’t spend a lot of time upfront defining data models. What we do is we actually just start orchestrating and we figure out the rest of it on the fly as we build up the services. One of the other really interesting points about stateful orchestration and love it or hate it, and there are plenty of folks in both of those camps, is that stateful orchestration tends to follow a pattern of transitioning from one state to the next with a well-defined set of rules around what has to happen in order to transition from one state to the next. Again, contrasting that to the stateless approach where really services can transition at any time to really any state. A couple of other just kind of key points to point out around the difference between the two is the fact that when looking at a stateful approach, one of the big driving benefits is that over time, we don’t have to make sure that all of our back end systems are in sync with what may be deployed on the infrastructure itself. So we hold all of that information really within the stateful service data model. And therefore, we can come back a day later, a week later, a month later, a year later, 10 years later, whatever the case is, and we can continue to operate with that service without having to concern ourselves with collecting all of the data from back office systems and hoping that what they’re telling us is actually what’s been deployed on the infrastructure.
Peter Sprygada • 04:35
While there's a level of complexity that comes with stateful orchestration systems, you get a lot of benefits from that upfront planning and from really defining your data model and implementing it. However, it does come with the challenge that at times it can be difficult and slow to adapt to changes in business requirements, versus the more stateless approach, which is able to quickly adapt to changing business requirements. I think to summarize all of this, though, it really comes down to the fact that one is not inherently better than the other. It really just comes down to, like it does so many times in IT, picking the right tool for the right job and ultimately being able to deliver on creating services in your infrastructure. So, we actually introduced this in our 2023.1 release of IAP, although at that point it was technically considered a preview technology; with the forthcoming release of 2023.2, we now consider that Lifecycle Manager has reached a point where it's ready for carrying production workloads and becoming generally available for customers to implement. Lifecycle Manager is the name of the application that delivers the stateful orchestration capabilities in IAP. And specifically, we built Lifecycle Manager to run on top of our workflow capabilities.
Peter Sprygada • 06:11
By attaching stateful capabilities on top of our workflows, we now have the ability to leverage all of the work that has gone into building workflows for providing a variety of activities on the infrastructure, for creating, reading, updating, deleting services, and more. We’re going to talk about that here in just another slide. But it’s Lifecycle Manager that really brings all of this capability. In fact, it is so flexible that we actually have the ability to not only build and model the technical requirements of a given service, but we also have the ability to collect in our data model things like ticket IDs or help desk information or troubleshooting components. So we truly have a full representation of what a service looks like that’s all built into the data model for delivering on that service. We really do that by utilizing two key elements within Lifecycle Manager, and that is around the idea of models and actions. Models, simply put, are how we define what a service ultimately looks like with all of its key value properties. We define this in Lifecycle Manager in JSON Schema, and it is really the representation of a service that ultimately a user could go then deploy across their infrastructure, could deploy multiple times, multiple instances.
Peter Sprygada • 07:44
It defines all of those key properties that we want to track, whether it's things that are provided upfront by way of configuration, metadata attached to the service, things like ticket IDs, as I mentioned, or things that we want to learn dynamically as that service is being created. Maybe it's things like IP addresses or VLAN IDs or VRF names, or whatever the case may be. All of that is defined in the model, and then that model is completed as we go through the process of deploying instances of that service. It is key to note here that within that data model, we have the ability to not confine ourselves simply to the technical aspects of realizing the service; we can also add properties that allow us to troubleshoot, debug, or provide enhanced levels of systems integration. We'll talk about this in a moment as we move on to actions. But the really key takeaway is that we can define a service the way we want to define it. We're not locked into someone else's interpretation of what that service may be.
Peter Sprygada • 08:55
We're able to define that service in any way, shape, or form, and make it as broad or as deep as we need to ultimately satisfy our use cases. Now, if models define our service, actions are exactly what they sound like. They allow us to perform some activity against or using that model. The obvious ones are the general CRUD activities you would see in any system: the ability to create one or more instances of the service, to update a service, to read a service, to delete a service. Certainly, those are where just about everyone starts when building stateful orchestration capabilities. But in the way that we've defined Lifecycle Manager, really taking advantage of what IAP provides from a workflow capability, and the fact that we map actions to workflows within IAP, we have the ability to go beyond just simple CRUD activities.
Peter Sprygada • 09:54
We can actually define different states and actions that allow us to perform more operations on the infrastructure. For instance, say I have deployed a set of BGP-speaking routers on my infrastructure, and I defined them all up front using a model and put them into service. If I come along a few weeks or a few months later and have to do maintenance on my infrastructure, I could have an action that takes a BGP-speaking router and puts it into some kind of maintenance mode, where I can drain traffic in a logical way off that router before I start that maintenance activity. Now I'm free to perform the work that I need to do, and then, when I'm done, I can have another action that returns it to service. These are all examples of things that I'm actually able to do with Lifecycle Manager by mapping actions to workflows that interface with those services. Really, what we're trying to do is provide capabilities within IAP to help customers solve real-world problems. As I said, there's no one right way or wrong way to do these things, and it really does become environment dependent.
Peter Sprygada • 11:19
It becomes dependent on just how different customers want to go about the process of managing their infrastructure. That's really the key of what we're delivering with our stateful orchestration services and leveraging Lifecycle Manager. Rich, take us through some of your thoughts on the real-world examples and challenges that we're solving with Lifecycle Manager.

Rich Martin • 11:19
Yeah. The flexibility is enough that it can really range from something very specific, like your example of a cluster of BGP routers in the datacenter, or, in the example demonstration we'll get to in a moment, hybrid cloud connectivity so that two application servers can talk to one another, and it can be a lot of things in between. We can think in terms of categories a lot of the time. Being able to track network services that may span different network domains and have a unified place to track them, that's the service instance. You mentioned maintenance on BGP routers, but there could also be a use case around the ability to tag a lot of just normal routers and switches in your network environment to understand where they're at with their current software and firmware load, and to determine, once you've understood their current firmware, whether or not they're up to date.
Rich Martin • 12:46
And then the state could be needs to be updated or critical update needed, things of that nature, as well as any kind of complex network service management that might span cloud, data center, and SD-WAN, trying to track all of the resources across all of that. We also have customers who have come to us looking at things like, how do we more effectively manage a service where a step in that process, or a stage in that lifecycle, may take a long, long time, like ordering last-mile connectivity through a different service provider, right? That could take weeks or months in some cases. A lot of times things kind of fall off the table, but being able to model that into this application and then keep track of it over time means you know immediately where all of these services are, which stage they're at, and the most up-to-date data, without having to swivel-chair around five or six different sources of information. I mean, that's what all this is about. And then there's an interesting use case around being able to enact policy over, say, some security services.
Rich Martin • 13:56
So we've had customers that are trying to allow more self-service for things like firewall filtering and firewall blocking. The upside to offering self-service to all of your internal customers is that they can quickly and immediately start to generate the services that they need, for instance, firewall blocking services for different applications and ports. But the downside is that they can very quickly lose track of that, right? They're going to use more and more and more of it. So one of our customers has said, well, we're going to open this up, but there's a policy behind it: every 30 days, it gets removed or it has to be refreshed.
Rich Martin • 14:37
And so Lifecycle Manager can be used not only to track that, but also to remove or deactivate these things from service, or even decommission them and return all of those assets back into the pool, whether that's removing firewall rules or returning IP addresses back to IPAMs, things like that. You're returning resources so that they can be better used, or used again later. Let's take a look at what a demonstration of this looks like. Of course, it is a demonstration. In this case, we'd normally start with an idea of the architecture and what we're really looking at. Like Peter said, it's an interesting concept when you go from stateless orchestration, where we start building things to make changes pretty quickly.
Rich Martin • 15:27
We go to stateful orchestration, where we maybe have to sit down and think about more than just how do I automate box X with these configurations. In this particular scenario, we'll think about hybrid cloud connectivity. We want to connect a server in a data center and a server in the AWS cloud. There are two aspects of the infrastructure there. Then somewhere in the middle, could be virtual, could be physical, there are firewalls managed through Panorama. If we're network engineers used to dealing with architecture and infrastructure all the time, we can immediately think of all the things we need from the infrastructure side. I'm going to need some port or virtual interface on the data center side that connects to the server or the virtual server.
Rich Martin • 16:19
On the AWS side, I'm going to need a VPC, a set of routes, and a lot of other things that attach to VPCs, network ACLs, security groups, so that it can connect to the EC2 instance there. Then on the firewall side, at the very minimum, we need a rule that allows access into that device from the outside. Those things come to mind very, very quickly. Especially in terms of defining a resource model, the infrastructure pieces come fast. But what about all the other things that we may not think about immediately? If I'm going to assign an IP address to this particular server, where does that IP address come from? In this case, it might be coming from Infoblox, or we might have an inventory system like NetBox or something like that.
Rich Martin • 17:10
What else would be useful for us to associate with a service instance, along with the infrastructure pieces? What about something from ServiceNow? A change request ticket that we open, for example; maybe we should keep track of that change request ticket. Or maybe ServiceNow also has a customer ID or a service ID that's associated with this. So all of those things can now be defined and created as part of the resource model for a service instance.
Rich Martin • 17:40
Now when we look at everything, not only can the Itential Automation Platform automate all of the steps on the infrastructure side, and automate all the tasks of gathering data, creating tickets, and updating tickets on the ServiceNow side and the Infoblox and NetBox side of things, but with Lifecycle Manager, we're actually adding in this next piece of it, which is thinking about things over time. This is where we have our service states. Again, we need to sit down and think not only about all of the information from the infrastructure and data-gathering pieces that would be useful, but also about what our service states are. In this particular example, let's just assume the very initial service state is provisioned, and then we can go from a service state of provisioned to tested, and from tested to active. I'll explain this as we go through all the different pieces and parts of this within Lifecycle Manager. But then we can also go from active maybe back to tested, or from active to deactivated.
Rich Martin • 18:39
What does that mean to deactivate something? Eventually go to decommissioned, which is a return of all the resources, decommissioning all the infrastructure, returning the resources, and ultimately in Lifecycle Manager, removing that service instance completely. With that as the context, we’ll start to build this inside Lifecycle Manager. Well, it’s actually built, but I’ll walk you through everything, and we’ll be able to take a look at this in greater detail and see how all this fits together, how it works, and how you can use Lifecycle Manager and Stateful orchestration inside your own live environment. We’re starting in Lifecycle Manager. This is an application inside the Itential Automation Platform. Like Peter said, the resource model that you define, it’s the foundation of everything we’re about to do.
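The provisioned-to-decommissioned lifecycle described above can be sketched as a small transition table. The state names follow the demo, but the rule set itself is an illustrative assumption drawn from the narration, not the platform's implementation:

```python
# Sketch of the demo's service states; transition rules are assumptions
# from the narration (active can return to tested or be deactivated,
# deactivated can be reactivated or decommissioned).
ALLOWED_TRANSITIONS = {
    "provisioned": {"tested"},
    "tested": {"active"},
    "active": {"tested", "deactivated"},
    "deactivated": {"active", "decommissioned"},
    "decommissioned": set(),  # terminal: the instance is removed
}

def can_transition(current: str, target: str) -> bool:
    """Return True if the lifecycle rules allow moving current -> target."""
    return target in ALLOWED_TRANSITIONS.get(current, set())
```

A stateful engine would consult a table like this before running the workflow bound to an action, rejecting out-of-order moves such as jumping from provisioned straight to active.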
Rich Martin • 19:29
It’s useful to spend some time, not just a little time, but probably a lot of time, thinking through all of the details we want to capture from both the infrastructure, the data, from the different systems, and the different system states. We define this as a JSON schema. When it’s instantiated, it becomes a JSON object that identifies and defines that particular service. It all starts from here. I know for some, JSON can be a little intimidating. There are lots of tools, even tools within our own platform, that will help you craft a JSON schema, even from a JSON object. I’ve collapsed a lot of this just so we can simplify exactly what we’re doing here.
Rich Martin • 20:09
From our example, if you recall back in the slides, one of the very first things, and probably the most important thing that I want to highlight here in Lifecycle Manager, is the fourth dimension of time, and this is the service state. In the resource model, I'm just defining, under properties, all of the things I want to track. One of the things to point out here is that you don't want to track everything. We're going to take a look at this as we run a workflow to see how much information is truly available. It does take some thought: what are the most important or most useful things to track without going overboard? Because remember, with the Itential Platform, you can think of these as pointers or references to be able to get more information if necessary.
Rich Martin • 20:53
But if this is information that is useful to have within the instance information itself, so you don't have to do a lookup through a workflow, that could save you a lot of time. One of the unique pieces of this is the service state. I've created a service state. It is simply a string, and it's just going to hold the current state of the service as we use different actions to create the service, provision it, and move it along its lifecycle until finally it's decommissioned. You can think of a day 0, day 1 to day n in that regard. Along with service state, maybe we want to have something on the ServiceNow side.
Rich Martin • 21:30
As a lot of you know, we can do a lot more than just network automation; we can do real end-to-end process orchestration in our workflows. A lot of times, one of those steps is opening up ServiceNow tickets, updating ServiceNow tickets, all of the fun stuff that you would rather not do. You can automate those tasks here. So why not, at the very minimum, keep track of something in the ServiceNow platform? Maybe that's the original change request that was created in order to create this server, so we'll automate the ability to track that. Along with some of the infrastructure, the datacenter router name, that's important for understanding which datacenter router this is. This is just a string holding that, so we'll pull that information from a workflow, as well as the datacenter interface, which will come from one of our data-gathering steps or tasks to request that from a NetBox server. That's also just a string.
Rich Martin • 22:24
This is hopefully a very non-threatening thing. The datacenter IP address that we're going to track is also defined as type string, more specifically with an IPv4 format, so it has to be formatted as an IPv4 address, which makes sense because this is an IP address. What I'm really doing is just defining all of the bits of data that I want to track for this particular service. Those bits of data come from the service state, which is something that we've defined as part of our process; from ServiceNow; from data that's gathered from sources of truth, inventory systems, and IPAMs; from actual information from the infrastructure itself, like the datacenter infrastructure; and from the AWS infrastructure. Here we have the VPC ID and security group ID, something that could be useful to track, especially for troubleshooting when traffic can't get back and forth. But we also have,
Rich Martin • 23:18
traditional firewall infrastructure here too. So maybe we’re also storing the current filter, the input filter rule for this to ensure that not only is it set right at the point of provisioning, but that we always know what it’s expressed as inside the service state so we can quickly look at it. So really the idea here is sit down, understand the infrastructure, understand the data and understand the information you wanna store and then just build the JSON schema to hold all of that. And because this is a JSON schema, you can organize it in different objects. So you can have objects for your data center resources. You can have an object that holds your security and AWS. You can build it however you wanna build it.
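Putting the pieces just described together, a resource model along these lines might look like the following JSON Schema, expressed here as a Python dict. The property names and grouping are illustrative assumptions, not the exact schema from the demo:

```python
# Hypothetical resource model: serviceState plus datacenter, AWS, and
# firewall properties grouped into their own objects.
resource_model = {
    "type": "object",
    "properties": {
        "serviceState": {"type": "string"},
        "changeRequestId": {"type": "string"},  # ServiceNow change request
        "datacenter": {
            "type": "object",
            "properties": {
                "routerName": {"type": "string"},
                "interface": {"type": "string"},  # learned from NetBox
                "ipAddress": {"type": "string", "format": "ipv4"},  # from Infoblox
            },
        },
        "aws": {
            "type": "object",
            "properties": {
                "vpcId": {"type": "string"},
                "securityGroupId": {"type": "string"},
            },
        },
        "firewall": {
            "type": "object",
            "properties": {
                "inputFilterRule": {"type": "string"},
            },
        },
    },
    "required": ["serviceState"],
}
```

Each instantiated service then becomes a JSON object validated against this schema, with the fields filled in as the provisioning workflow learns them.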
Rich Martin • 24:02
In this case, I just tried to simplify it and make it as simple to understand as possible. Once you’ve created your data model as the JSON schema, and you’ve saved it, the next step, if you remember back from Peter’s slide, is to use a series of actions in order to create an instance and to modify an instance over time. One of the things about Lifecycle Manager is, we want to be able to reuse workflows. If you’re familiar with our platform, a workflow can be, and we’ll take a look at one in a moment, but a workflow is how you automate and orchestrate on our platform. So a workflow can not only integrate with your network, traditional network, more modern network, through controllers, through APIs, things like that, but also all of your systems, your ServiceNow, NetBox, Infoblox, Teams, Slack, whatever it may be. So you really have the ability to not only define the resource information that’s useful to you from all of these different sources, and make it truly something useful for your team as you’re tracking these services over time. Now, when these workflows are created, we want them to be reused.
Rich Martin • 25:13
And you'll notice here, we have three kind of unique buckets, if you will, for different actions: a create, an update, and a delete, but you can have multiple actions within these buckets. So for instance, I have a single create. I've called this service provisioned, and you'll see why in a moment. And it's tied one-to-one to a workflow that has been created in the platform. This workflow's name starts with LCMW, which stands for Lifecycle Manager Webinar, as an identifier here: provision service new. So we're going to provision a new service here. Under update, you're also going to be able to provide actions, and as you can start to see, these are your stages, if you will.
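Conceptually, these action buckets boil down to a mapping from action names to workflow names, something like the sketch below. The provision workflow name follows the demo's LCMW naming; the others are assumed names for illustration only:

```python
# Hypothetical action-to-workflow mapping mirroring the demo's buckets.
# Only "LCMW Provision Service New" follows the demo's naming; the rest
# are invented for illustration.
ACTIONS = {
    "create": {"provisioned": "LCMW Provision Service New"},
    "update": {
        "tested": "LCMW Test Service",
        "activated": "LCMW Activate Service",
        "deactivate": "LCMW Deactivate Service",
    },
    "delete": {"decommissioned": "LCMW Decommission Service"},
}

def workflow_for(bucket: str, action: str) -> str:
    """Look up the workflow bound to an action within a CRUD bucket."""
    try:
        return ACTIONS[bucket][action]
    except KeyError:
        raise ValueError("no workflow bound to " + bucket + "/" + action)
```

Running an action then means resolving it to its workflow and executing that workflow against the service instance.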
Rich Martin • 25:54
Under update, we're assuming that an instance has already been created under the create action, so we would use a create action to create an initial instance, and then we can operate on that instance using workflows that are attached to actions under update. Here's one, and it's a little out of order since these sort alphabetically, but you can think of create as stage 1. Then we want to go from created, or provisioned, to tested. Provision here would be: let's grab all of the resources we need, go ahead and do a baseline configuration, and get everything set up. We're going to grab an IP address and the next interface and associate them with a datacenter router for this service; we're going to create a VPC in AWS through a workflow and grab that information; and we're going to set up some information for the firewall through Palo Alto and get all of that configured.
Rich Martin • 26:57
It is provisioned, but it actually hasn't been tested end-to-end. The first step here in this action would be to provision; the next step after that would typically be to test it. This would be a separate workflow that you could build to align with your testing procedure for that particular stage. The first stage would be provisioned; the second stage, after it's successful, will be tested. This could be something like spinning up an EC2 instance and a virtual machine on both ends and testing connectivity between them; it could be something as simple as pings. However that process needs to look, you would build it into a workflow, associate that workflow with an update action, and call it whatever you want.
Rich Martin • 27:37
Something useful, like tested. After something is tested, now it's ready to be activated and actually put into production. Now, this could mean a lot of things. It could mean that at the end of tested, maybe we deactivate certain parts of the infrastructure so it's not passing traffic, even though we have tested it. And then when we get to the activated stage, maybe that removes all the disables and allows the infrastructure to once again operate end-to-end, pushing packets successfully. But it could do more than just that. Your workflow, again, this is an orchestrated workflow that we're attaching to this action called activated.
Rich Martin • 28:15
That workflow could be built to also go into your billing system and indicate that this customer is now activated and their service has been tested. It's activated and we can start billing. It could go into an e-mail system and generate an e-mail to send to the customer saying, hey, by the way, your system is up and running. It's been tested, it's active, and it's currently operating right now. Please go ahead and start using it. Or, if this is an internal customer, it could send a message via Slack, right? We have the opportunity to orchestrate all of that inside of a workflow that gets attached to this action.
Rich Martin • 28:53
The next stage we've defined here is attached to a workflow called deactivate, and this is exactly what it sounds like. We've got an active service, but the next stage in its lifecycle, and this may or may not happen, might be to deactivate it. This could happen for a number of reasons, and so you would define how to deactivate the service, and maybe under certain circumstances you deactivate it in certain ways. The idea here is that you don't remove any of the infrastructure configurations or return any of the resources. You're just temporarily pausing the service for some reason. Then finally, the last action that we have defined here is under the delete actions, and this is decommissioned. This is where you actually put an end to the service.
Rich Martin • 29:40
Unlike deactivation, which is a pause that maybe disables the service in some meaningful but not permanent way, in the case of decommissioned, you're assigning a workflow that does the job of unwinding all of the infrastructure. Remember, there's an instance that has already been defined. When we decommission it, the data from the instance that has been saved, as defined by our resource model, can now be used within this workflow to quickly and efficiently not just disable, but deactivate and decommission all of those resources. It can remove information from an interface. It can go into AWS and delete the VPC and all the associated resources around the VPC. It can go back to the Infoblox IPAM and say, "Hey, we are now giving back this IP address, or perhaps the subnet that has been associated with this." All of that information can also be updated in a ServiceNow ticket or some documentation or record for that particular service. This is how you would create your workflows and associate them.
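To make the unwinding idea concrete, here is a small sketch of how a decommission workflow might walk the saved instance data to decide its teardown steps. The field names and ordering are assumptions for illustration, not the demo's actual workflow:

```python
# Hypothetical teardown driven by saved instance data: undo the firewall
# rule first, then the AWS resources, then release the IP back to the IPAM.
def decommission_steps(instance: dict) -> list:
    """Return the ordered teardown actions implied by the instance data."""
    steps = []
    if instance.get("firewallRule"):
        steps.append("remove firewall rule " + instance["firewallRule"])
    if instance.get("vpcId"):
        steps.append("delete VPC " + instance["vpcId"] + " and attached resources")
    if instance.get("ipAddress"):
        steps.append("return " + instance["ipAddress"] + " to the Infoblox IPAM")
    return steps
```

Because the instance already holds every identifier the service consumed, the teardown never has to re-query back-office systems to discover what to remove.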
Rich Martin • 30:53
We'll take a little closer look. If I click on the create here, you'll notice that we've got a start and an end, and there are a couple of components to this. Remember, I talked about the workflow. The workflow is the heart and soul of all this. The workflow in this case is going to commission a new service. It is going to be responsible for running the automations to commission a new service across all of these different network domains, as well as interfacing with all the different systems that we need.
Rich Martin • 31:25
But remember, the natural state of a workflow is stateless. In order to make this stateful, the workflow now has to take all of this information from the provisioning process, save it, and make it available to Lifecycle Manager so that it can save the state. That's what these transformations here are for. The output of this particular workflow is going to present its data to this transformation, and we'll take a look at both the workflow and a transformation in a moment. The transformation presents that state so that it can be saved as a service instance. With that, let's take a quick look at what a workflow actually looks like. If you remember the create workflow that I just referenced, this is the commission the new service.
Rich Martin • 32:13
This is what that workflow looks like. What we’re generally looking at here is a project that allows me to organize and highlight very quickly all of the assets for this particular service that I’m going to be using for workflows and transformation. On the left-hand side, you’ll see I have a folder. Each one of these folders is for each one of these stages. If I open this up, these are all the assets that I can use now that are part of the create provisioning workflow. This is the workflow itself. Notice that the first step of this workflow is a transformation, and it’s also being used to gather some data that we need for the input from a user.
Rich Martin • 32:50
When we instantiate a service instance, we're going to ask the user for some information. Ideally, we don't ask them for a lot of information; we ask them for just enough that we can derive all the other information we need from it, because that's really a better self-service model, especially if you're looking at opening this up for self-service to end users who aren't as technical and probably don't have information readily available on VPCs and datacenter ports and things like that. This transformation is going to ask for a very small set of information from the user when they create the new service. This next step here is a child job that is simply going to create a ServiceNow ticket. We've done this in steps so it's easy to understand.
Rich Martin • 33:41
It's really a best practice because it allows you to build modularity into your workflows. You'll notice this is called a child job. All of these steps in this workflow are available as tasks that you can get from this palette on the left-hand side. For instance, if I'm doing anything with ServiceNow, I have access to all of the ServiceNow API methods as drag-and-drop tasks, so I can put them on the workflow and then transition from one step to another. In this case, I've created a child job, which is in and of itself another workflow that gets run from here. That workflow is called CreateNewServiceNowTicket, and it's available inside of here under this miscellaneous folder as CreateNewServiceNowTicket. I'm calling that workflow from here.
Rich Martin • 34:28
We'll see what this workflow looks like when it's done, but the idea here is modularity. Now, from the output of this particular task, we're going to get some information that we need to store in the instance data. Keep that in mind. Each one of these child job tasks is a modular workflow, and its output is going to be one or more pieces of information that we need to preserve once this workflow is done doing its job. We need to preserve it and pass it to Lifecycle Manager, because that's the data that we're populating into that resource model for this instance. So we go from creating the ServiceNow ticket to this child job, which runs a workflow that reserves the next available IP address and interface from two different sources of truth, Infoblox and NetBox. This next workflow is going to do all the AWS provisioning, and this one is going to do the datacenter provisioning using the information it has acquired from previous steps.
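To make the child-job pattern concrete, here is a rough conceptual sketch in Python of what the parent workflow is doing: running each modular child job in order and preserving its output for later. The function names and return values are hypothetical stand-ins for the workflows in the demo, not the platform's actual API.

```python
# Hypothetical sketch of the parent "provision service" workflow.
# Each child job is a modular workflow whose output we must preserve
# so it can be handed to Lifecycle Manager at the end.

def create_ticket(form):
    """Stand-in for the CreateNewServiceNowTicket child job."""
    return {"number": "CHG0031234"}  # illustrative ServiceNow result

def reserve_ip_and_interface(form):
    """Stand-in for the Infoblox/NetBox reservation child job."""
    return {"ip": "10.0.1.25", "interface": "loopback101"}

def provision_service(form):
    """Run each child job in order, collecting the outputs we need."""
    outputs = {}
    outputs["ticket"] = create_ticket(form)
    outputs["ipam"] = reserve_ip_and_interface(form)
    # ...the AWS, datacenter, and firewall child jobs would follow the
    # same pattern, each returning data to preserve in the instance...
    return outputs

result = provision_service({"awsRegion": "us-east-1"})
print(result["ticket"]["number"])
```

The point is the shape, not the specifics: each child job does one thing and hands back only the data the parent needs to keep.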
Rich Martin • 35:28
This will do the firewall provisioning. Then finally, this transformation is going to allow us to leverage all of the information that's been returned from each one of these steps that's necessary to populate the instance data that we were taking a look at. With that, let's take a look at what a transformation looks like, so you get a visual idea of how we map this data around. Transformations are critical not only to our workflows, but also to how we manipulate data inside of Lifecycle Manager itself. If I click on this, what you're going to see is a way to map data between these tasks. Recall, I had multiple modular tasks that were each doing something very specific: creating a ServiceNow ticket, generating the information we needed to provision the datacenter side, the IP address and a new interface.
Rich Martin • 36:27
Pulling those IP details, creating Palo Alto rules. If you think in terms of each one of those modular steps, I mentioned that they're going to output some data that we need to collect and organize together. The last step of this workflow for provisioning a new service is doing just that. I'm gathering the details, and not all the details, mind you, but the details we've defined in the resource model from each one of those steps, and I'm mapping them into another JSON object that holds the information that's now going to be passed to Lifecycle Manager as the instance information for this new instance. Now, all of this is coming in from the different workflow tasks on the left-hand side. This is incoming data. But notice the one piece of information that I'm statically creating new here is the service state.
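Conceptually, the final transformation is doing something like the following sketch: mapping selected fields from each child job's output into a single JSON object that becomes the instance data, plus one statically set value. Every field and variable name here is hypothetical, not the actual resource model schema.

```python
# Illustrative sketch of the final transformation: coalesce only the
# fields the resource model needs from each child job's output.

def build_instance_data(snow_out, ipam_out, aws_out, dc_out, fw_out):
    """Map child-job outputs into the instance document for Lifecycle Manager."""
    return {
        "ticketNumber": snow_out["number"],   # from the ServiceNow child job
        "ipAddress": ipam_out["ip"],          # from Infoblox
        "interface": ipam_out["interface"],   # from NetBox
        "vpcId": aws_out["vpcId"],            # from AWS provisioning
        "dcRouterPort": dc_out["port"],       # from datacenter provisioning
        "firewallRule": fw_out["ruleName"],   # from firewall provisioning
        # The one value created statically in this transformation:
        "serviceState": "provisioned",
    }

instance = build_instance_data(
    {"number": "CHG0031234"},
    {"ip": "10.0.1.25", "interface": "loopback101"},
    {"vpcId": "vpc-0abc123"},
    {"port": "GigabitEthernet0/0/1"},
    {"ruleName": "app-g-rule-in"},
)
print(instance["serviceState"])  # provisioned
```

Note that everything except `serviceState` is incoming data; the state value is the only thing the transformation creates from scratch, exactly as described above.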
Rich Martin • 37:20
Because this is the create, or provision service, step, there is no previous existence of this instance. When we complete this, we want to set this value to provisioned. That is our first step in the service lifecycle: all of the infrastructure has been properly provisioned. With that, let's go back to Lifecycle Manager. We've stepped through all of these different stages and steps, and just like I showed you, we're tying a workflow to a very particular action, so the create action is the one we looked at. Each one of these has its own unique workflow, and in some cases, transformations to ensure that the data output from the workflow gets saved back into the instance state. Some of these will update the instance data and some of them won't, but at a minimum, each one is going to update the service state itself, even if it doesn't change the rest of the data within the instance.
Rich Martin • 38:20
So now let's take a look at our next tab. We talked about models and actions; now let's go to instances. You'll see here that from this general view, when we click on instances for this particular resource service, we already have several demonstration applications created. You can very quickly see that they're all in the service activated state because that was the last action. So in order to start using these actions, it's probably easiest just to step you through how you would use them. In this case, we create a new instance.
Rich Martin • 38:53
So I'm going to give it the name application G, since that's the next one. Description is optional. Since we're creating something new, this select action is only going to give me the ability to select an action under the create category. In this case, we only have one, called service provision, so it's selected by default. Notice here that this is what I was talking about.
Rich Martin • 39:18
The very first step in this workflow is a form that allows us to create dropdown boxes. So I can make it super simple when somebody is using Lifecycle Manager and creating a new service instance: instead of free-form text fields, I can have things like dropdowns here. I can select the AWS region that I want the AWS infrastructure in, the datacenter region, and the firewall rule name, which I'm going to call app G rule in. And then this is a dropdown too: the service port we want for the firewall. I'm going to use 443.
Rich Martin • 39:54
With that, I can click Save, and you'll notice that application G is now here, and the state is service provisioned, so it's going through that process. If I click here, I'll see more details on this particular application. If I go to history, you can see that when I clicked Submit, it ran the workflow associated with the action, used the input data that I provided, and now the state of that workflow is complete. What's really cool is that if I click on Complete, I can take a look at what was run and go straight to the job that was just executed and completed. You'll see that this is the same workflow I showed you, but a version of it that has now run. I showed you the workflow in the editor; this is one that's actually run, and you can see with the check marks here each step that ran successfully.
Rich Martin • 40:48
It's run through the entire workflow piece by piece. Let's take just a quick look at some of these. If I click on this first step, remember the input data that we sent from Lifecycle Manager when it gave me the form with the drop-downs. That is captured and sent to the workflow. As soon as I click Submit, this is the form data that gets fed into the workflow. This is important to understand: as instance data is saved in Lifecycle Manager and you operate on that instance with those actions (remember, an action is tied to a workflow), that instance data is presented to the workflow as input data that it can operate on.
Rich Martin • 41:32
This becomes very useful because you immediately know things like the ServiceNow ticket or the interface that's being used on a particular router for this service. All of that data is available to you as inputs when you start to operate using these other actions. The first step here was to grab the data from the form that we presented. Then you'll notice the second step is the child job that creates the ServiceNow ticket. Since this is another workflow that's been run, I can open it and see the actual workflow that was executed. In this case, it's really simple. It's a single step that talks to our ServiceNow instance and creates a ServiceNow ticket.
Rich Martin • 42:17
Now, remember what I said about gathering information: understanding when you've gathered enough or too much is probably going to be one of those exercises you have to walk through as you're building these service models. A single create new change request call in ServiceNow is going to generate this kind of information. Quite honestly, you wouldn't want to store all of that, because most of it is probably not useful for you on a day-to-day basis. What are we really interested in? Really, just the ticket number that gets generated. That's what happens here. We're going to query the specific ticket number out of all of this information and then present it to the rest of the workflow so that it can be saved in the service instance.
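As a sketch of that query step: a ServiceNow change-request creation returns a large record, and we keep only the ticket number for the instance data. The response shape below is simplified and illustrative, not the exact ServiceNow payload.

```python
# Illustrative ServiceNow change-request response (heavily abbreviated).
# The real response contains dozens of fields we don't need to persist.
snow_response = {
    "result": {
        "number": "CHG0031234",
        "sys_id": "a1b2c3d4",
        "short_description": "Provision application G",
        "state": "-5",
    }
}

# The query step extracts just the ticket number for the service instance.
ticket_number = snow_response["result"]["number"]
print(ticket_number)  # CHG0031234
```

Everything else in the response is discarded, which keeps the instance data limited to what the resource model actually defines.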
Rich Martin • 43:04
So that was that step. Similarly, this step is again another workflow that gets run. If I double-click here, this is going to get the next available IP address, in this case from Infoblox. If I look at what's returned from Infoblox, I get slightly less information, but really the only thing I'm looking for here is this IP address. We want to extract it with this query step, which is what we do. Then this is going to get us the next available interface from NetBox. This is doing a call into NetBox, because we're using that as our inventory system for our datacenter routers, and it returns quite a bit of information.
Rich Martin • 43:45
In fact, it will return all of the available interfaces unless you give it some stipulation to reduce the list. In this case, it's given us all of them, and really all we want is the very first one. In our test environment, loopback 101 is the next available interface, so we're going to query that out here. And at the end of this workflow, remember, this is a child job that's part of a larger workflow. We're presenting that information back out to the parent workflow, because these are the details we're going to need in order to save that state. This is the detail that defines the state of this particular service.
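The interface query works the same way as the ticket-number extraction: NetBox returns a list of candidates and the child job keeps only the first. The response structure below is a simplified illustration, not the exact NetBox API shape.

```python
# Illustrative NetBox response: a list of available interfaces on the
# datacenter router (simplified from the real API payload).
netbox_response = {
    "results": [
        {"name": "loopback101", "enabled": False},
        {"name": "loopback102", "enabled": False},
        {"name": "loopback103", "enabled": False},
    ]
}

# The query step takes just the first available interface.
next_interface = netbox_response["results"][0]["name"]
print(next_interface)  # loopback101
```

That single value, along with the Infoblox IP address, is what the child job hands back to the parent workflow to be saved as part of the service state.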
Rich Martin • 44:20
So just like those two steps, each one of these steps is going to run, do some automation for that particular stage, in this case, AWS infrastructure, and return some information back so that it can be saved as part of the stateful orchestration that we're doing in Lifecycle Manager. Let's go back to Lifecycle Manager now that we've seen behind the scenes. Let's go back to our instance, application G, and click on it again. We'll notice that the workflow ran the automations to not only build the infrastructure and gather the data, but also output all of this information. Here is the state of this particular service. We've filled out all of the information here from the workflow, saved it, captured it, and stored it in Lifecycle Manager as part of the service.
Rich Martin • 45:16
It might be easier to see it in JSON view. One of the biggest things here is the service state. Recall that in the first step, when we create something, we set the service state to provisioned. Now, over the lifecycle of this particular service, things are going to change. In fact, it's not even active right now. Recall in our example that the next step is to test the environment, which is a different workflow that gets run and updates the service state. If I click Run from the Actions tab, it's going to allow me to progress the state of the application from provisioned, where it's really not working yet, to tested.
Rich Martin • 45:52
When that's done, if I click over to properties, you'll see that the service state has moved on to tested. The assumption here is that we've done some end-to-end testing and we feel confident this is absolutely going to work, but it's not been released to the customer yet. In order to release it to the customer, we need to move the state again, over to activated. By clicking this, we're going to run another workflow. This action runs a workflow that's going to do the back-end process to activate it. Remember, I said this could be enabling an interface: maybe the customer-facing interface was left disabled at the end of the tested stage, and now we're enabling it. But we can also do other things, like update the billing system and send messages and e-mails to the customer to let them know everything is active and ready to go. And of course, finally, we set the state to active.
Rich Martin • 46:44
Notice that every time we update this, you see the service state move to the last action run. We always have a view of all of the different application services that we've created and the service state they're in. If this was something like an end customer with Internet access, maybe they didn't pay their bill and we can deactivate it. Or in the case of an application, maybe there's some maintenance and we need to deactivate it for a moment so the application, or at least that particular server, is not available. This could be removing it from a load-balancing pool or something like that as part of the deactivation workflow. In this case, we're moving it from activated to deactivated, and then once things are done, we can change the state back to activated again, and everything is good to go once more. Then finally, the last stage of everything is to decommission it.
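The lifecycle described above can be thought of as a small state machine: each action runs a workflow, and the resulting state is recorded on the instance. This is a rough conceptual sketch; the states and transitions mirror the demo, but the code itself is hypothetical and not part of the platform.

```python
# Sketch of the service lifecycle as a state machine. Each action runs a
# workflow, then records the resulting state and appends to the history.

ALLOWED = {
    "provisioned": {"tested"},
    "tested": {"activated"},
    "activated": {"deactivated", "decommissioned"},
    "deactivated": {"activated", "decommissioned"},
}

def run_action(instance, new_state):
    """Validate the transition, update the state, and record history."""
    current = instance["serviceState"]
    if new_state not in ALLOWED.get(current, set()):
        raise ValueError(f"cannot move from {current} to {new_state}")
    instance["serviceState"] = new_state
    instance.setdefault("history", []).append(new_state)
    return instance

svc = {"serviceState": "provisioned", "history": ["provisioned"]}
for step in ("tested", "activated", "deactivated", "activated", "decommissioned"):
    run_action(svc, step)
print(svc["history"])
```

The history list here plays the same role as the service history view in Lifecycle Manager: every state change survives, even after the instance is decommissioned.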
Rich Martin • 47:43
Everything we've done up to this point, especially with activating and deactivating, has not removed anything from the network, any part of the infrastructure, or any of the IPAM or inventory systems. We're just pausing it in some way so that the actual service doesn't function. But when we go to decommission and click Run here, this workflow will be responsible for removing everything from the infrastructure, as well as from the different systems that might be holding the records for IP addresses and things like that. Then eventually LCM, because this is part of a delete action, will remove it from its own internal database of state. You see it's actually disappeared from the current list because it's no longer active in the system. However, we still have a full service history, and this is something really awesome that's available. You can see everything from the time it was initially provisioned all the way through every state change that occurred, including activated, deactivated, reactivated, and eventually decommissioned.
Rich Martin • 48:47
So you have all of that available to you historically; you can look at it through here if you show the deleted instances. Not only that, even from the history you can go down to the job level and take a look at all of the step-by-step events inside a workflow to see exactly how everything transpired. You can see the input and the output. Everything I showed you when we looked at one of these workflows is available historically as well. It gives you full access, even when something is completely decommissioned, to understand its lifecycle at every step and every detail, down to the task level itself and its inputs and outputs. That brings us to the end of our demo. Hopefully, that helps bring together the context of what stateful and stateless orchestration are, why both are useful and needed, and how within our platform you have the flexibility to leverage all of our integrations and adapters to build an orchestration workflow that can initially be leveraged statelessly, but then turned into something stateful for those circumstances where you want to track state over time.
Rich Martin • 50:12
I think with that, hopefully, we've shown you how you can progress very quickly from your simplest to your more complex use cases and leverage this powerful application in our platform for your business needs. With that, I want to say thank you, Peter. Once again, thank you very much for joining me and keeping me company throughout this webinar. And I want to say thank you to all of you for joining.