Getting started with automating network infrastructure requires a logical, step-by-step approach. You should start simple with a relevant use case and translate the process into a series of logical tasks. Then, you can build out integrations and surrounding processes to ensure it works with your infrastructure and meets your standards. Finally, you implement your tasks until the workflow is complete.
In the final installment of the Building Network Automation Workflows with Itential demo series, we conclude by showing you how to publish a workflow in Operations Manager to extend the use of automation across your organization both for Itential users and the networking team’s end users. We explore the four different trigger methods that initiate these shared workflows and how each can work in your environment.
In this demo, Rich Martin, Director of Technical Marketing at Itential, shows you step-by-step how to:
- Publish a workflow for self-service by another Itential user within the platform.
- Publish a workflow with an API endpoint so end users outside the Itential platform, such as those in ServiceNow or GitLab, can run it.
- Publish a workflow on a schedule for recurring tasks.
- Publish a workflow to respond to an event for immediate execution and response.
Demo Notes
(So you can skip ahead, if you want.)
00:00 Introduction & Demo Overview
02:21 Workflow Review & Changes to Operationalize for Self-Service
07:36 Operations Manager Overview
09:20 Create an Automation Entry in Operations Manager with Access Controls
11:50 Overview of Event-Driven Automation
14:16 Overview of Scheduled Automation
17:42 Overview of Manual Automation for Self-Service
19:47 Overview of API-Driven Automation for Self-Service
30:30 Demo of ServiceNow App for Self-Service
Rich Martin • 00:00
Hello, everyone. Welcome again. My name is Rich Martin, Director of Technical Marketing at Itential. Thank you for joining us for part 5, the final part of our webinar series on building network automations, where we discuss how and why you would want to publish and share your automations. It's always a good idea to look back to see where we started and how much ground we've covered on this journey. We're in part five, and we have covered a lot, so thank you for joining us through all of it: everything from the introduction to the canvas, to mapping out your use case and its logical steps, to actually building, testing, evaluating, and adding more to your workflow.
Rich Martin • 00:55
And now we're getting to the point where we take our workflow and publish and share it. On the right-hand side, this looks very familiar: it's the workflow we've been working on. It has integrations to ServiceNow and NetBox, and it works with a Cisco CLI device to make a change. But in this case, we're not going to spend a lot of time where we were previously in Automation Studio, building and testing. We're going to spend more time today in another part of the platform called Operations Manager, where we take the workflow, turn it into an automation we can publish, and then define the methods we want to use to run that workflow.
Rich Martin • 01:41
I'll be showing you the four different methods, or triggers, that can be used to run a defined automation, and then we'll talk about how this makes the automation more valuable as you open up self-service for network teams and for teams outside of networking, through APIs and through something special we have called ecosystem applications. So with that, let me share my screen and we'll get into the platform. Okay, I know this might be a little hard to see, and we'll zoom in in a minute, but we're in Automation Studio. We won't be here long, but we need to take the workflow we created in our last session, which we modularized, and now operationalize it. Think about it this way: Automation Studio is all about building and testing an automation, and we've been doing this solo.
Rich Martin • 02:49
But now we need to operationalize this workflow so that we go from a single person and a single execution of this automation to multi-user execution. This increases value: if there's value in this automation when I run it alone, imagine the value we can reap when more than one person uses it. Maybe that's your team of network engineers, or you're opening it up to your network operations team so they can get better at addressing and provisioning the requests that come in on a day-to-day basis, reducing your backlog and increasing the value to the end user, whether that's internal or external to your company. That's what we mean by operationalizing it. Let's start tactical. I'm going to flip between my first two tabs here, because in the first tab I've taken the workflow we built and operationalized it so it can be published and shared through Operations Manager.
Rich Martin • 04:02
Now in the second tab, what's different here? You'll see that the first task has been removed. Let's start there and talk about it for a second. What was that first task? Let's zoom in a bit. The first task in our last workflow was the show JSON form task.
Rich Martin • 04:24
Remember, this was a task that essentially stops the workflow when we execute it and brings up a form for the end user to fill out. When we were testing and running this, that was certainly useful. But when we want to operationalize it, we have to think a little differently. Do we want the very first step in the workflow to always be showing a form and interrupting the workflow? It's no longer a zero-touch workflow; it requires manual intervention. In some cases, yes, but in other cases, no.
Rich Martin • 04:52
And we'll talk about what those cases are when we publish this automation. So to operationalize this, we remove the task so that the first step is no longer to show a form. But you might ask: then where does the form data come from? Well, it can still come from a form, but that form isn't being run in this automation. By removing this first task, so it no longer stops the automation to wait for manual intervention from a human, we've operationalized it so that data can be passed into this workflow from another source. That source could be a form. That source could be an API call into the system.
Rich Martin • 05:30
And now we're operationalizing it. If we remove this task, that data has to come from somewhere, which we'll address when we get to Operations Manager. The second thing is, we work on that data, and we usually reference it in a transformation, which is right here. So the two things I've done ahead of time are to remove that front task, deleting it from the workflow like we've done several times, and to update this transformation task.
Rich Martin • 05:59
This is what it looks like now. If I double-click it, you can see the variables. I'll show the original in a moment: these used to come from the task we just removed; we were referencing that task's variables. I'm still referencing form data, but instead of coming from the deleted task's variables, it now comes from the job. So what does that mean?
Rich Martin • 06:21
It means that when this is executed as a job in Operations Manager, it's going to look for these variables to be passed in externally. And this is really how we operationalize it. Now, could we have done this from the very beginning? Could I have always run this from Operations Manager manually? Absolutely. But I wanted to show you how all of this works, including the operationalizing. And as you build automations in Automation Studio, you may still choose to do it this way, because removing a task and updating the source of a variable is very quick and easy.
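To make the idea concrete, here is a minimal sketch of what "reading variables from the job" looks like conceptually. The payload shape and the dotted-reference helper are illustrative only, not Itential's implementation; the field names (formData, networkDevice, interfaceDescription) follow the form used throughout this demo.

```python
import json

# Hypothetical sketch: when the workflow runs as a job in Operations
# Manager, the form data arrives as job-level input instead of coming
# from a (now-removed) show JSON form task. The transformation then
# reads its variables from the job payload.
job_input = json.loads("""
{
  "formData": {
    "networkDevice": "cisco-router-01",
    "interfaceDescription": "uplink to core switch"
  }
}
""")

def resolve_reference(job_payload: dict, path: str):
    """Walk a dotted reference like 'formData.networkDevice'
    against the job's incoming variables."""
    value = job_payload
    for key in path.split("."):
        value = value[key]
    return value

print(resolve_reference(job_input, "formData.networkDevice"))
# -> cisco-router-01
```

Whether the data originates from a form, a schedule, or an API call, the workflow only ever sees the job payload, which is exactly the decoupling being described here.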
Rich Martin • 06:56
As you've seen, we've done this over and over in our iterations of this workflow. So here is what it originally looked like. This was a transformation task. It still had a formData object that it requested data from, but that data came from a previous task and not from the job. That's what we changed: delete the task, and update the transformation so it references the data from outside. Once the workflow is built and ready to go as a final operational workflow, we move into another area of our platform called Operations Manager.
Rich Martin • 07:33
Quickly: Operations Manager is where we publish automations. We've created a workflow, and now we want to publish it as an automation. You've actually already seen the other side of Operations Manager, which is managing jobs. A job is an automation that's been run, so you can run jobs and then manage those jobs. I've shown you that.
Rich Martin • 07:54
Whenever we tested our workflows from Automation Studio, they ran as jobs. And when we managed them, we double-clicked into the tasks that were running, saw the variables, and copied and pasted them. We used that as part of the build process when creating transformations. We did all of that through the jobs section of Operations Manager. But now we're going to focus on the automations side of Operations Manager. If you look at the list on the left here, you'll see a lot of automations that have already been published.
Rich Martin • 08:28
Think of this as self-service for your team, or any group operating within the Itential platform. If my user account has access, then I can either view or run these published automations. Just like we created a workflow, we're going to create and publish an automation that will show up in this list. These entries already exist: you can see there's a great catalog of different network and infrastructure automations that I have access to and can use on a day-to-day basis. If my team has access to this, I can essentially create a catalog for them to run automations and do their work day to day.
Rich Martin • 09:16
So let's create a new entry. Very similar to other areas, we hit the create button here, and it asks us for a name. What do we call this? Webinar Build a Workflow, okay? We could put a description here. And we click Create. Now we've successfully created an entry.
Rich Martin • 09:48
You'll see at the bottom that our entry, Webinar Build a Workflow, now exists. But of course, it's not really doing anything yet. The first thing required of us is to select the workflow we want to reference, and of course, this is going to be the one we've been working on. I've called it Build a Workflow Final; it's the operationalized one I just showed you. So now we've done that.
Rich Martin • 10:14
And you'll notice here's the description we added, but we also have the ability to create some authentication and access control around this. This is critical, because as you start to publish workflows, you may want to give certain groups access to some workflows and not others. My account here is admin, and I can tie this entry back to the admin group that's defined in the system. As part of managing the Itential platform, you can define groups locally or integrate with your LDAP system; the platform will then learn those groups and they'll be accessible here. So you can create groups, identify users with those groups, and then control which automations each group can access.
Rich Martin • 11:08
If I wanted to give the operators, the network operations team, access to this, I certainly could. They could view it or actually run it. I can save my changes from there, and that gives me the ability to do access control and authorization on this particular entry. That's one piece you get, and really want to leverage, in Operations Manager. But how do we actually run the workflow? We've referenced the workflow and created rules around who can use it. Running is done through creating a trigger.
Rich Martin • 11:44
Think of a trigger as the method by which we want to run this automation. There are several different methods, and I'll show you them quickly here. If I click the plus button, it opens up this dialog on the right. Of course, we need a name. The first one I'll show you is important but a bit more advanced. I'm going to call this Event Test.
Rich Martin • 12:11
And this is going to be an event-type trigger. I'm going to fill this field out to avoid a warning pop-up; we'll talk about JST in a moment. Okay, so this is an event trigger, and it asks us for the event name. Think of an event trigger as how we want to respond to an event.
Rich Martin • 12:35
So what is an event? An event is either an internal or an external event that is available within the Itential platform. You'll see there are a lot of internal events that already exist as part of the platform, and then there are external events through integrations: for example, some email adapters, or events through NSO. As you build more integrations with your systems, you can integrate with more event systems, and those events will appear here so you can choose them. So the first thing to do is choose the type of event.
Rich Martin • 13:10
For example, maybe a failed trigger in Operations Manager. The next thing you want to do is define the payload. When we're connected to an event, we receive a payload, and we can create a schema, a JSON schema, to define the payload that must match the event in order for us to respond to it. This is how we do filtering: if you've worked with any kind of event notification system, you know there are all kinds of messages, and you want to be very specific about which events you respond to. So this is where you create that payload filter schema, in JSON.
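To illustrate the filtering idea, here is a hand-rolled matcher for a tiny subset of JSON Schema. The event fields and filter are invented for the example, and the real Itential payload-filter dialect may differ; this just shows how a schema can act as a gate on incoming events.

```python
# Illustrative only: match an event payload against a minimal
# JSON-Schema-style filter supporting 'required', 'const', and 'type'.
def matches_filter(event: dict, schema: dict) -> bool:
    """Return True only if the event satisfies the filter schema."""
    for key in schema.get("required", []):
        if key not in event:
            return False
    for key, rule in schema.get("properties", {}).items():
        if key not in event:
            continue
        if "const" in rule and event[key] != rule["const"]:
            return False
        if rule.get("type") == "string" and not isinstance(event[key], str):
            return False
    return True

# Hypothetical filter: only respond to "failed" events that name a source.
payload_filter = {
    "required": ["status", "source"],
    "properties": {
        "status": {"const": "failed"},
        "source": {"type": "string"},
    },
}

print(matches_filter({"status": "failed", "source": "ops-manager"}, payload_filter))
print(matches_filter({"status": "success", "source": "ops-manager"}, payload_filter))
```

Anything that fails the filter is simply ignored, which is what keeps a noisy event stream from triggering the automation on every message.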
Rich Martin • 13:53
Again, we're using JSON all over the platform, and you can define it here. This is probably one of the more advanced methods, but certainly an important one, because being able to respond quickly to an event, especially something security-related, is incredibly important. That's why we give you this method. So that's the first one. The second one I'll show you is a scheduled trigger, which is probably more relatable and easier to understand for a lot of folks, especially if you come from a network engineering background. I'll call this one Schedule Test.
Rich Martin • 14:29
In this case, schedule is the trigger type, the method here. This just runs an automation at a particular point in the future, optionally repeating. So we could select tomorrow at a certain time, and repeat at an interval of days, seconds, months, weeks, however you want to do it. There's also a setting for missed runs: if we miss a run for some reason, we can still process it.
Rich Martin • 15:05
And then we can optionally attach a form to it. You'll notice it lists all the forms, including the one we've been using, because form data has to come from somewhere. If I select that form, you'll see it pulls up the form we created for our example workflow. Ideally, though, you don't use a form for this. So where would you use a scheduled trigger? Think in terms of a compliance check or an audit that you want to run.
Rich Martin • 15:38
Maybe you're even doing automated remediation, letting the automation remediate things. But even if you don't, a schedule is useful for generating reports. For instance, perhaps you need to pull data from a lot of different places on the network, routers, switches, things of that nature, and bring it together, maybe to generate a report. You could do that, or you could pull data on a weekly, daily, even hourly basis for some critical thing and send it off. Remember, because of our ability to integrate with all kinds of systems in your IT environment, think about pulling data on a schedule from your network and then sending it out to email, or to a chat system like Slack or Teams, or opening a ticket in ServiceNow or Jira, updating an inventory system like NetBox or a general database you use to store data, or sending it to a telemetry system.
Rich Martin • 16:39
All of these things are possible in our platform, in a workflow, as we've seen through our integrations, and you can send the results to any or all of these different outputs if you want. That's a great way of using the schedule method: running things on a repeating basis, or in the future, in an automated way. I can save that, and you'll notice every time I save, it creates an entry. The third trigger type we'll talk about is manual. What is manual? It's exactly what it sounds like. Remember, when we first started a few minutes ago, we had to operationalize our workflow.
Rich Martin • 17:24
To do that, we removed the JSON form task as the first step and updated our transformation to reference data from outside the workflow. That data is still the same form data we need to capture, because it's used in that transformation to generate the set of CLI commands to push to a Cisco device. Manual does the same thing; this is why we operationalized it. I'm going to create a new trigger here called Manual Test. When I select manual, you'll notice it asks me for one field: which form do we want to use?
Rich Martin • 18:03
If I pull in this Build Workflow form, it should look very familiar. When the workflow had that form task as its first step, running it meant going into the job manager, working that particular task, and it popped up this same form. Now I've associated that form with this workflow as part of the manual method of running it, and because I've done that, this becomes the operational way to start running the workflow. Again, Automation Studio, where we've been spending the majority of our time, is for building and testing workflows; but when we're ready to publish, ideally a whole other group or team of people, not just us, can have access to run this. That's why we operationalize it.
Rich Martin • 19:02
That's what this does. Now that we've assigned that form, you'll notice it's exactly the one we've been testing with. I can select it, and it asks us for a description. If I click Run Manually, it runs just like we've been running it. Now, though, we have access control around who can run it once we publish it, and we still get all the auditing, logging, variables, and the entire task-by-task execution of that workflow in Job Manager, and it tells us who ran it. You get all of that when we publish and create this more self-service way of operationalizing our workflows.
Rich Martin • 19:45
That's manual. You'll use it a lot, and in fact, based on who can view and run these entries and published automations, you'll start building catalogs for different teams, different groups, or even different individuals. So we've done manual. Think of manual as self-service for your team, internal to the platform: they have access to the platform, they log into the Itential platform, and we publish these workflows so they can use them. The last one I'll show you, we'll call API Test, and similar to last time, I'm going to go ahead and fill this out and then describe it. We'll spend a little time here on the API method of running a workflow.
Rich Martin • 20:37
When I select API, think in terms of: how can we give access, in a very secure and controlled way, to teams, applications, or even other platforms outside of the Itential platform? You have teams using the platform internally; manual is a great way to publish workflows for them. But what about teams outside the platform? There's a logical progression here in self-service.
Rich Martin • 21:10
First, it's just individual use of a workflow or automation. Then we're self-serving to colleagues on our team, and then maybe to other network teams like the operations team. All of those folks are using our platform. But what about teams outside of our platform that still need network infrastructure? They want to be able to order it, and we should be able to deliver it in an automated way, with a cloud-like experience. API is one of the two ways we can expose our automations, securely, to external systems. What this really means is that just as we integrate with other systems through APIs, Itential has APIs into itself, so this allows you to create an API endpoint for this particular automation. Then, given the right credentials, an external system, platform, user, or application can call back in and run the automation as if they were running it inside the platform.
Rich Martin • 22:14
This opens up self-service beyond the Itential platform, so we can really start to talk application-to-platform, or even platform-to-platform, which we'll look at in just a minute. In this case, we've created this with the name API Test, type API, and action POST, because this is a RESTful API and we want to accept data passed in through the REST API call. This is critical, because securing this is about having the right credentials and authorization, which we provide in the system. To make this call, you have to have the right credentials and you have to know the endpoint, which is definable here. We can call it webinar build, for instance. So now this becomes the endpoint, and I can copy it here so we can test access to it. But you still need authorization credentials, and your credentials have to have access to this workflow, as we defined a moment ago when we added the operations team. So there are multiple levels to this. And just like running a workflow manually, or any other published way, you still have full access to auditing and logging and all the data you need to ensure the right people are running the right automations and to see the exact results. Of course, that's also available under jobs in Operations Manager; we've seen a lot of that while building, and that's where everything comes together.
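A call into a published API trigger might be built roughly like this. The endpoint URL and the payload are hypothetical (the real URL is copied from Operations Manager, and authentication follows your platform's configuration); the sketch only constructs the POST request and never sends it.

```python
import json
import urllib.request

# Hypothetical endpoint: in practice you copy the real URL from the
# trigger in Operations Manager, and you also supply credentials per
# your platform's auth configuration (omitted here).
endpoint = "https://itential.example.com/triggers/endpoint/webinar-build"

# The body carries the same formData the form would have collected.
payload = {
    "formData": {
        "networkDevice": "cisco-router-01",
        "interfaceDescription": "uplink to core switch",
    }
}

# Build (but do not send) the POST request; POST is used so data can
# be passed in with the call.
req = urllib.request.Request(
    endpoint,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

print(req.get_method())    # the trigger accepts POST
print(req.get_full_url())
```

An external system such as a CI pipeline or ticketing platform would send exactly this kind of request, and the run then shows up under jobs in Operations Manager with full auditing.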
Rich Martin • 23:52
So you get all of this when you publish in our platform; these are all things that are absolutely necessary. Okay, there's another thing we want to do here as well. If you look at this POST body schema, it allows us to put very specific guardrails on the data that gets passed in from the API call. If an API call comes in to run this automation with the right access credentials and the right authorization to this automation entry, then it's going to pass data to us. And once it does, we have the opportunity to define yet another JSON schema, because we use JSON schema all over the place, and this one defines what the incoming data needs to look like. If the data doesn't match this schema, guess what? We won't run the automation.
Rich Martin • 24:46
So this is a form of input validation, and it's a very quick way to do it. Remember when we started building this automation, we created a form with our form builder and generated the underlying JSON schema for it. We built the form visually, generated the JSON schema, and leveraged that to build a transformation. All of those tools can be used here as well: we know what the form looks like, we know what the fields are, we know exactly what to expect, so let's strictly define what the input looks like. That way we reduce errors and mistakes, and it gives you another tool to ensure the incoming data is as accurate as possible, so we can have a high degree of confidence that our automations will run successfully. By default, you get a very generic schema here that basically accepts anything, without a tightly defined object. That's fine for testing, but when it comes down to it, you really want to tightly define
Rich Martin • 25:58
your input schema so that it defines the data that needs to come in. In this case, I've updated it based on the form we created. All it says is: for an API call to this particular automation, we expect an object called formData, which is what we've been using. Remember, when we operationalized the automation, we said there should be an object called formData, and that object should have two properties: a network device of type string, and an interface description of type string.
Rich Martin • 26:38
Both are required, and there can be no other properties attached to the JSON data payload that comes in. This tightly defines it. We could tighten it even further by specifying which network devices are applicable, the entire list like we had before, or by attaching a regular expression to the interface description that also has to pass. This gives you input validation, like I said, on the data coming in: not only are callers authorized through credentials with access to this particular automation entry, but the data itself has to pass this filter and meet all the requirements, just like a form would. We're strictly defining the data, just as we did for the form, and you can do that from here as well.
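The schema described here can be sketched as follows. The structure (a required formData object with two required string properties and no extras) restates what's in the demo; the regex on the interface description is an invented example of the optional tightening mentioned, and the hand-rolled checker stands in for a real JSON Schema validator.

```python
import re

# Restates the demo's POST body schema: a formData object with exactly
# two required string properties and additionalProperties: false.
# The 'pattern' value is a hypothetical example of further tightening.
body_schema = {
    "type": "object",
    "required": ["formData"],
    "properties": {
        "formData": {
            "type": "object",
            "required": ["networkDevice", "interfaceDescription"],
            "additionalProperties": False,
            "properties": {
                "networkDevice": {"type": "string"},
                "interfaceDescription": {
                    "type": "string",
                    "pattern": r"^[\w ,.-]{1,64}$",
                },
            },
        }
    },
}

def validate_form_data(body: dict) -> bool:
    """Check a request body against the formData part of the schema."""
    spec = body_schema["properties"]["formData"]
    form = body.get("formData")
    if not isinstance(form, dict):
        return False
    if set(form) - set(spec["properties"]):
        return False                      # additionalProperties: false
    for key in spec["required"]:
        rule = spec["properties"][key]
        value = form.get(key)
        if not isinstance(value, str):
            return False                  # missing or wrong type
        if "pattern" in rule and not re.fullmatch(rule["pattern"], value):
            return False
    return True

print(validate_form_data({"formData": {"networkDevice": "rtr1",
                                       "interfaceDescription": "uplink"}}))
```

A payload with a missing field or an unexpected extra property is rejected before the automation runs, which is exactly the guardrail the POST body schema provides.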
Rich Martin • 27:34
Then finally, JST. I mentioned this in the event trigger, and it's used the same way here. When you see JST, that's a transformation; we've spent a lot of time building transformations. JST stands for JSON Schema Transformation: it takes an incoming schema and data payload and maps, and even manipulates, that data into data going out. You can leverage a JST here as well by identifying it in this field, and it can then manipulate the incoming data and map it to values used further along in the automation workflow. Again, this is a way of decoupling and operationalizing things so you can provide the data a workflow needs in as flexible a way as possible. You could leverage a transformation here to do what we did in that transformation step, if you wanted to.
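For a sense of what such a transformation does, here is an illustrative mapping function. This is not the Itential JST format itself, just the idea: reshape an external caller's payload (with invented field names) into the formData shape the workflow expects.

```python
# Illustration of what a trigger-attached transformation accomplishes:
# map an external caller's field names onto the workflow's formData
# shape, normalizing values along the way. Field names on the input
# side ('device', 'description') are hypothetical.
def transform_incoming(payload: dict) -> dict:
    return {
        "formData": {
            "networkDevice": payload["device"].lower(),
            "interfaceDescription": payload["description"].strip(),
        }
    }

incoming = {"device": "CISCO-ROUTER-01", "description": "  uplink to core  "}
print(transform_incoming(incoming))
```

Because the mapping lives on the trigger rather than inside the workflow, different callers can send differently shaped data and the workflow itself stays unchanged, which is the decoupling being described.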
Rich Martin • 28:38
So if we save the changes here, that's pretty much it. We've defined the automation, identified the workflow we built as our final workflow, and created different triggers, or methods of running it, based on events, a schedule, manual execution, or an API. This opens up a world of self-service inside and outside the platform. We have the ability to disable everything at once, or individually if we need to. It's super flexible and gives us a lot of power and control over the automations we've spent so much time building, testing, and iterating on. And this is that final step of self-serving it out.
Rich Martin • 29:29
One last thing I want to show you. We talked about the API trigger being the gateway to integrating with other applications and tools: your DevOps teams can now access infrastructure creation through an API that you publish, so they can spin up the network infrastructure you've built the workflow for. But there's also another way to integrate platform to platform. We call these ecosystem applications, and they're a little different. We'll talk about ServiceNow, because that's our first one. When we talk about APIs, we're really talking about external self-service.
Rich Martin • 30:14
How can we allow other systems outside of Itential to access our automations and run them in a safe and secure way? Ecosystem applications give you another, more streamlined method of integrating with a specific platform you may be using in your environment, and ServiceNow is clearly one of those. Let me log back in here. An ecosystem application is something you install on the other platform to give you access into the Itential platform to run automations. In a way, it's similar to APIs, but more streamlined. It's something Itential writes and publishes in that platform's app storefront. If you're familiar with ServiceNow, they have an app store.
Rich Martin • 30:59
If you go to the app store and search for Itential, you'll find our application. It has been validated and verified by ServiceNow just to be in the ServiceNow store, so it's credentialed by ServiceNow and installed in their platform. With a little bit of configuration, it exposes the automations we've published in Operations Manager inside the ServiceNow platform, making self-service in another platform simple. I'll show you. This is our ServiceNow instance; let me show you from the main menu, the main dropdown here.
Rich Martin • 31:36
Once the application is installed, I have access as admin to several new Itential items here, but the one we really want to look at is Itential Automation Services. This is the result of installing the Itential app in ServiceNow, which connects back to Itential, and then configuring the credentials for ServiceNow to access the services in our platform. Once that's done, we can connect to an instance, and the instance we're using for this demo is this one. It does a live API call into our system, and the workflow we published as an automation should appear here. Remember, it was called Webinar Build a Workflow, but it's not showing up.
Rich Martin • 32:24
The reason is that we haven't given it authorization. While we have access to talk to the system, the account I'm using in ServiceNow doesn't have authorization to this automation. We can fix that here by updating the entry: the account we're using in ServiceNow belongs to the group EAS, "demo as a service" as we call it. I save this, reload, and reselect our on-prem lab instance. Now, when I select this automation, our Webinar Build a Workflow automation exists as a service, and it pulls up the form.
Rich Martin • 33:04
It's technically using the manual trigger we published, because that's what associates a form with it. What makes this application so streamlined is that our app not only retrieves the list of published automations we have access to, it also pulls the forms associated with each manual trigger and rebuilds those forms in the ServiceNow platform so they can be filled out and run manually. I won't run it here; we've run this a hundred times in our series. But you can see how this opens up self-service inside of ServiceNow, given the proper credentials and authorization defined in Operations Manager, and a portion, of course, defined in ServiceNow as well. With that, I just want to say thank you so much. I know we've covered a ton of territory, but hopefully this has been valuable to you.
Rich Martin • 34:03
Hopefully, this is something you can reference in the future if you want to go back and look. Of course, reach out to us if you have any questions. You can contact us with the information here. We look forward to future webinars and helping you guys out in your automation journey. Thank you very much.