Transcript: Managing Adaptive QoS Dynamically with Ansible

February 19, 2020

AQoS Automation Using Ansible

Faisal Salam: Hi everyone. My name is Faisal Salam, and I’m a Senior Storage Engineer with NetApp IT. I’m responsible for the design and deployment of new NetApp systems in IT, and I also lead the storage security program for Customer-1.

Victor Ifediora: And I’m Victor Ifediora, also a Senior Storage Engineer with NetApp. I’m responsible for automation, so we create a lot of the automation that we use to manage our storage. I’m also responsible for all the backups that we do at NetApp. We are part of Customer-1, so for every new product that NetApp produces, we are the first people to use it, install it, and put it in production before other customers get their hands on it.

Agenda

Faisal: Today we’ll be talking about IO Density, what it means and why it’s important. We’ll also discuss the Service Catalog, what the various service tiers are and how they’re defined in our environment. Then we’ll take a look at Adaptive QoS and how we implemented the Service Catalog using Adaptive QoS. We’ll also look at the details of our initial Adaptive QoS deployment and the challenges we had with that first deployment, and then we’ll take a look at our current implementation using Ansible and how we leverage the ONTAP modules for Ansible to deploy AQoS.

At that point I’ll turn it over to Victor, who’s going to do a demo of what the Ansible playbooks look like and how they run. And as we go through all this content, I strongly encourage folks to ask questions; feel free to stop us with any comments you have.

Are you familiar with QoS in ONTAP?

Faisal: Alright, before I proceed I have a question: how many people on the call are already familiar with QoS specific to ONTAP? Meaning, do you have any customers that are already leveraging it in any shape or form?

CHAT: Not me.

Faisal: Let me look at the chat. No, I’m getting noes. All right. So Matt is using Standard QoS rather than AQoS. Okay. You will find this presentation highly informative.

Challenges with deployment strategy

Faisal: Okay, so before we leveraged QoS, our primary focus when we were deploying filers was capacity. If somebody wanted, say, 10 terabytes of storage, they’d just tell us, “Hey, deploy this much storage,” and we’d go and carve out a 10 terabyte volume in whichever cluster had space available. There wasn’t much thought put into the performance aspect. As a result, we ended up with different application volumes sharing the same storage, and this resulted in inconsistent performance and issues that were reported to us by the application and database teams almost every week. I’m not sure if any of your customers or partners have reported similar issues with inconsistent performance.

Eduardo Rivera: Sorry to interrupt, Faisal. It seems that not a lot of people really use QoS, and what you’re talking about here is how people are focusing on, or maybe how we have focused on, capacity in the past without the performance control. So my question is, does anybody have anything to add to this? Do your customers think about performance? Do they care about how, perhaps in a large environment, one workload may trump others and take all the resources? Because I think it goes to the core of the question that you just asked: are you using QoS, and are we thinking about this in the context of our deployment strategy? I guess that’s a question for the group: do your customers ever express these needs or concerns? I guess not. All right, we’ll go with our story.

Elias Biescas: We must be very good then.

Faisal: Right. Yeah. So what we realized at that point was that we did not have any real way to guarantee performance. Add to that, application volumes with low IOPS requirements were being provisioned on expensive storage, wasting capacity. So we needed to figure out a way to do two things: one, guarantee a minimum performance to the application workloads; and two, limit workloads to a certain number of IOPS based on a service tier.

Need for change

Faisal: The challenges we just spoke about drove the need to define a storage-as-a-service platform where we would move away from building customized arrays. And that meant we had to configure our storage to offer varying levels of performance and capacity. We consulted with enterprise architects and went through a service design workshop, and we developed a Service Catalog leveraging two main items: IO Density and Quality of Service.

Let’s look at IO Density. IO Density is a measurement of IO generated over a given amount of storage capacity, expressed as IOPS per terabyte. In other words, it measures how much performance can be delivered by a given amount of storage capacity. To gather, analyze, and visualize the IO Density of all applications in our environment, we used OCI, OnCommand Insight, and generated IO Density reports, and we had some very good findings. Some applications that we thought were high-performance were demonstrating very low IO Density rates, and those were essentially wasting high-performance storage. We also saw the reverse: applications were using low-performance arrays when their actual IO requirements were much higher. So based on the IO Density reports, we defined service tiers using IO Density as the defining factor. We’ll look at this on an upcoming slide.
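As a quick worked example (the numbers here are ours, for illustration only):

$$\text{IO Density} = \frac{\text{IOPS}}{\text{capacity (TB)}}, \qquad \text{e.g.}\ \frac{5120\ \text{IOPS}}{10\ \text{TB}} = 512\ \text{IOPS/TB}$$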

Now, what we did next was implement the different service levels by leveraging the QoS available in ONTAP. QoS made it possible for us to control throughput based on either IOPS or megabytes per second. It works at the SVM, volume, file, or LUN level, and the throughput thresholds themselves are defined within policy groups. A separate policy group had to be assigned per volume, and the procedure was really straightforward when we first went about doing it.
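Jumping ahead slightly to the tooling we’ll demo later, here’s a rough sketch of those two steps with the netapp.ontap Ansible collection. All names here are invented, and the ONTAP modules talk to the cluster over its management API, hence the local connection:

```yaml
# Sketch: create a fixed QoS policy group capped at 512 IOPS
# and pin it to an existing volume.
- hosts: ontap_clusters
  connection: local
  gather_facts: false
  tasks:
    - name: Create a fixed QoS policy group
      netapp.ontap.na_ontap_qos_policy_group:
        state: present
        name: pg_value
        vserver: svm1
        max_throughput: 512iops
        hostname: "{{ inventory_hostname }}"
        username: "{{ ontap_user }}"
        password: "{{ ontap_pass }}"

    - name: Assign the policy group to a volume
      netapp.ontap.na_ontap_volume:
        state: present
        name: vol_app01
        vserver: svm1
        qos_policy_group: pg_value
        hostname: "{{ inventory_hostname }}"
        username: "{{ ontap_user }}"
        password: "{{ ontap_pass }}"
```

Any questions regarding IO Density or QoS in general?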

Elias: You mentioned that to measure the IO Density you use OCI. Cloud Insights, which is also free if you use it on NetApp solutions, could that be used the same way?

Faisal: Cloud Insights cannot report the same way as the traditional OCI at this point, but I’m sure that feature will be coming in the future. Victor, did you have any comments on that? Does Cloud Insights have that reporting ability now?

Eduardo: Yeah, I’ll comment on that. We went with OCI because OCI is the tool that we were using at the time, and we still use it. OCI has a component that essentially allows you to do batch reporting: it lets you use the data warehouse to build reports. One of those reports is the IO Density report, which takes the performance information that OCI collects, along with the capacity information, gives you a per-volume view of the IO Density, and then formats the data and makes it look pretty, so you have something to look at. Cloud Insights has the capability to do this same type of calculation through what it implements as dashboards today, sort of a more on-demand view.

There is also, I believe, an IO Density metric embedded in the out-of-the-box graphs now. But when it comes down to the data warehouse and the batch reporting that we’re using with OCI, there’s a feature that I believe is about to be released within Cloud Insights. It’s not fully available today, but it’ll be available momentarily, so we should be able to do the same type of OCI IO Density report using Cloud Insights. And yes, my understanding is that, as long as Cloud Insights is used for managing NetApp devices only, it’s free of charge to use.

Elias: Thanks a lot Eduardo. 

Storage Service Levels

Faisal: As I just stated, we used the IO Density reports to identify the range of service levels that applications in our environment actually needed. Using this information, we created a Service Catalog based on four standard service levels. There’s one for value, which covers workloads up to 512 IOPS per terabyte; there’s performance; there’s high write; and there’s extreme, which goes up to 8192 IOPS per terabyte. The way we implemented it in our environment is by using the aggregates: the aggregate names, as you see in the examples, represent the defined service tiers. Any volumes that are created in, or moved into, the aggregates of a specific tier are automatically assigned to the appropriate QoS policy groups. Based on our understanding of the application requirements in our environment, these tiers would address almost 99% of our install base. There were exceptions, though, and those were treated on a case-by-case basis; we created other policy groups with different thresholds for those one-off applications.
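Expressed as a vars file of the kind you could feed the playbooks shown later, the catalog might look like this. Only the value (512) and extreme (8192) numbers come from the talk; the performance and high-write thresholds below are placeholders:

```yaml
# Service tiers keyed by the suffix used in aggregate names (sketch).
service_tiers:
  value:
    peak_iops_per_tb: 512
  performance:
    peak_iops_per_tb: 2048   # placeholder, not from the talk
  highwrite:
    peak_iops_per_tb: 4096   # placeholder, not from the talk
  extreme:
    peak_iops_per_tb: 8192
```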

Adaptive QoS (AQoS)

Faisal: Ordinarily, the values of the policy groups we just spoke about are fixed: a policy group is assigned to a volume, and one would have to change the values manually as the size of the volume changes. That matters, because it’s how you maintain the IOPS per terabyte. Adaptive QoS automatically scales the policy group values according to the volume size, thereby maintaining the ratio of IOPS to terabytes as the volume size changes.
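For example (illustrative numbers), with an adaptive policy set at 1024 IOPS/TB, the ceiling tracks the volume as it grows:

$$1024\ \tfrac{\text{IOPS}}{\text{TB}} \times 2\ \text{TB} = 2048\ \text{IOPS} \quad\longrightarrow\quad 1024\ \tfrac{\text{IOPS}}{\text{TB}} \times 4\ \text{TB} = 4096\ \text{IOPS}$$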

First AQoS Deployment

Faisal: Here’s a look at our first Adaptive QoS deployment. Like I mentioned, the policy groups that were available before ONTAP 9.3 were static, so we had to use some methodology to constantly scan and adjust the thresholds. Post 9.3, as we’ll see in an upcoming slide, we have Adaptive QoS policies, which are dynamic, and ONTAP adjusts the thresholds automatically.

So, with our first AQoS deployment, we used an internally developed tool. I’m not sure if any of you have heard of it; it was called Kitchen Police, and it was only made available to a limited audience, including us, Customer-1. The tool was never officially supported. It ran as a daemon on a Linux host and connected to the ONTAP clusters using the ONTAP API. There was a config file where we had to define the various thresholds according to the service tiers that we saw earlier. Once it’s running, it scans all the clusters every hour, looks at all the volumes and their sizes, and adjusts the thresholds assigned to each volume. That’s how it would maintain the IOPS-per-terabyte rating. And hence that was our first Adaptive QoS deployment, even though we were using the static QoS policy groups.

Now, even though it worked great for the most part, and we used it for several years, it had its share of issues as well, especially as we upgraded ONTAP to the latest versions.

Challenges

Faisal: Here we see some of the challenges. As I mentioned, it was not officially supported, and the script is not being developed anymore. We found that with any new version of ONTAP it would break, almost every week, and we had the operations team complaining; we had to go in and put in exceptions excluding certain volumes. For example, it had issues with FlexGroups and would not run, and I had to exclude entire clusters from the Kitchen Police config. Any questions about the first deployment of Adaptive QoS?

Eduardo: Maybe this is a good time to talk about what changed. As we identified that we had an issue with managing performance, and managing capacity tied to performance, I know that at the time we did this you were on the operations team, and you were getting all the tickets and all the issues for everything that happened, including performance problems. So can you speak a little to the changes you saw operationally when this happened? Did you see any improvements? How did the behavior of systems, and our interaction with applications, change?

Faisal: Right. So initially, before we did any of the QoS deployments, we would have support tickets opened almost every week. We’d have application teams and database teams complaining, and we’d ultimately figure out that most of the time it was due to a bully workload, maybe a rogue script that some DBA kicked off. Most of our time would be spent identifying who kicked off that script, and until they stopped it, performance would not return to normal levels. But with QoS, that’s a thing of the past; operations doesn’t get performance issues anymore. It’s been a sea change on the operations side in the number of performance-related tickets, then versus now.

Eduardo: And just to clarify, because obviously I know the answer here, I want to talk about this because I think it’s really key to why we’re doing this. The difference is that in the past, when you had a runaway workload, that workload might take down the entire HA pair, or one side of the HA pair, depending on what it was doing. It could run away with all the resources because there was no limit; that’s the way it had always been. What has changed now is that when that happens, because it still can happen, the workload can’t really take away the resources. The only workload it hurts is itself, because if it goes beyond what we have assigned from a QoS perspective, the system just slows it down. And it slows it down by giving the workload higher latency. The person or the application may complain about that, but it’s a self-inflicted problem.

And it’s a much easier conversation to have. It’s not a ticket to operations where the world’s falling apart; it’s more like, “Hey, I noticed my application is running slow.” And then we talk about it: well, what did you change? I can see that all of a sudden you did much more work than you used to. In many cases it ends up being an unintended script, or something they ran without foreseeing the effect it would have. I think it has changed dramatically how much we really worry about performance. We still worry about it, but for different reasons; we can worry about the bigger picture these days. We worry more about performance as a whole, from the controller and cluster perspective, and less about the individual volume, where we already have an SLA that we’re trying to meet. Anyway, I just wanted to make that point as we talk about the implementation, or at least the first implementation.

Have you embraced automation?

Faisal: All right. Okay, so we’re going to talk about our new implementation using Ansible. Do any of you have experiences to share, or any customers using Ansible? Maybe not just for QoS; it could be for anything. Anything to share about Ansible or automation?

Elias: I’ve spoken to some partners and some customers that use Ansible currently, although it’s not something you find out straight away unless you investigate.

Faisal: Is that on the storage side, the modules for ONTAP, or is it for Unix and other stuff?

Elias: Yes it is the ONTAP modules. I’ve also spoken to some of the SEs in the UK, the NetApp SEs, and they have encountered customers using Ansible and some of them are not aware of our ONTAP modules. So it’s a good idea that you’re presenting this today.

Eduardo: Maybe you want to talk a little bit about what Ansible is, since it seems unknown to some. 

Why Ansible?

Faisal: Ansible is basically an automation tool, and compared with more complex approaches, like writing your own Python or Perl, it’s really easy to set up. That’s one of the reasons we’ve embraced it: on our team, maybe not everyone is a coding expert, but as soon as the ONTAP modules for Ansible were available, it was a game changer for us. All of us started writing small playbooks; it could be for creating volumes, changing some thresholds on an aggregate, setting up SnapMirror, simple things. And then we went on to write bigger playbooks. We did a webinar, I think sometime last year, on how we used it for our Day-0 and Day-1 deployments.

And now we’re doing QoS, and there are so many other efforts in the pipeline, simply because it’s very easy to get going with Ansible. At the same time, it’s very powerful and can handle some very complex workflows. In our Day-0 deployment, we saw that we can have the playbook connect to the network switches and complete the configuration on the cluster switches; then it can talk to the cluster and complete the Day-0 provisioning, and then the Day-1 provisioning as well. So it saves a ton of time for the operations and engineering teams, in our experience. Victor, did you want to add any points about Ansible?

Victor: The only thing I want to add is that most of the storage modules are already written, and they can help you do whatever automation you want to do. It’s very, very easy to use. And installing Ansible on your Linux machine takes only about five minutes.
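For reference, the install really is that small. A rough sketch, with package names as they exist at the time of writing:

```
pip install ansible netapp-lib                  # netapp-lib is needed by the ZAPI-based ONTAP modules
ansible-galaxy collection install netapp.ontap  # pulls in the na_ontap_* modules
```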

Also, NetApp has a good platform where, even if you’re a beginner, you can go and look at what a beginner needs to start using Ansible. And on that platform there are people available almost 24 hours a day to answer your questions. So it’s very, very easy for anyone to start using Ansible.

Eduardo: At its base, Ansible is essentially an automation engine that allows you to automate anything that can be automated, on the infrastructure side, the application side, et cetera. It is a Red Hat tool today, and it has been out for a while now. If you talk to any Unix or Linux administrator, I would imagine they’re using Ansible or something like it; other things like Ansible would be Puppet or Chef. Those, again, are configuration tools. At the end of the day, what you do with a tool like Ansible is create a very simple thing they call a playbook, which is mostly a glorified configuration file, so it’s very simple to code, too. And when you create this playbook, Ansible will translate your recipe, if you will.

In fact, I think Chef uses the term recipe. Ansible takes the recipe that you created in the playbook and makes it so on whatever system it targets. The magic happens when a company like NetApp creates what’s called an Ansible module. A module is a piece of software, usually developed in Python, that can interpret whatever you put in that configuration file, the playbook, and translate it into a series of commands on the system itself. The reason Ansible is so popular is that on the end user’s side, the creation of the playbook is very simple and very universal. We’re talking about Ansible automation in the context of ONTAP here, because that’s what we’re doing, but just as we do it for ONTAP, other teams around us are doing it for instantiating VMs, creating DNS records, creating network configurations, et cetera.
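To make the “glorified configuration file” idea concrete, here’s a toy playbook (all names invented) that declares that a 10 GB volume should exist; the na_ontap_volume module turns the declaration into the right ONTAP API calls:

```yaml
# Toy playbook: declare the desired state of a volume.
- hosts: ontap_clusters
  connection: local
  gather_facts: false
  tasks:
    - name: Ensure vol_demo exists
      netapp.ontap.na_ontap_volume:
        state: present
        name: vol_demo
        vserver: svm1
        aggregate_name: n01_aggr1_performance
        size: 10
        size_unit: gb
        hostname: "{{ inventory_hostname }}"
        username: "{{ ontap_user }}"
        password: "{{ ontap_pass }}"
```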

So it becomes a very powerful tool to automate the infrastructure in general. It becomes even more powerful when you combine it with things like Tower, which is a different topic, but Tower is essentially Red Hat’s paid Ansible management center, which allows you to schedule things, create dependencies, and so on. So it’s a complete management ecosystem. I see a question there about Ansible compared with 7-Mode; I don’t think there are any 7-Mode modules.

Victor: No, there’s no 7-Mode module. It has to be clustered ONTAP, yeah.

Eduardo: All right. But let me just say something about that. We, NetApp, the company, released ONTAP modules for Ansible that are specific to CDOT. Now, if you are a Python coder, you can create your own module for whatever you want; you could automate your cell phone with Ansible if you wanted to. So I don’t encourage spending any time on 7-Mode, but if you really needed 7-Mode automation for Ansible, arguably you could create a module that does that.

There’s a question asking whether Ansible will replace WFA in the near future. Yeah, I hear that a lot. I’ll tell you my view of the world. They’re different tools, but I think Ansible certainly replaces the functionality WFA provides today, to some degree. And the reason I say that is this: WFA does a lot, a lot more than Ansible does. WFA is an orchestrator that allows you to create workflows, integrate execution of code on different systems, organize them in a flow, and have decision points, all that kind of stuff. All that engine that WFA is, Ansible is not; Ansible is much simpler in nature. Ansible automates whatever you tell it to do.

Now, you can certainly create a series of playbooks in Ansible that are executed in order and accomplish a workflow. And if you include things like Tower, then Tower organizes that work for you. That combination of playbooks and organization will certainly be able to replace the functionality that something like WFA delivers. From our perspective, on our internal storage team, we actually started working with WFA some time ago. We didn’t really get that far, because WFA, at the end of the day, had a huge barrier to entry; there’s a big learning curve to WFA. We do use it on a different team, the tools team, and they are really more like programmers; for them it’s a lot easier to pick up something like WFA.

For us, who are not programmers, to be very transparent, we tried WFA and we just gave up; there was too much overhead. Then Ansible came into the picture, making more power available from the get-go, and that’s really our focus today when it comes down to automation. I’m not a product manager for WFA, but I can see how Ansible’s versatility, ease of use, and power really will overtake whatever you can do with WFA.

Victor: Also, Eduardo, there are some questions on the use cases of Ansible. Like Faisal said, we’re going to demonstrate one of the use cases, Adaptive QoS. We’re also using it to enforce our default snapshot policies, and to clean out all the stale SnapMirror relationships we have in our environment. We’re working on using it to configure storage efficiencies on our volumes. We’re also using it to deploy V Servers: go to the DNS, get an IP address, and use that IP address to configure the interfaces on the V Server. And we’re using it to enforce some volume configuration parameters. So there are many things we’re using it for right now.

Current deployment with Ansible

Faisal: Okay, thanks, Victor. All right. So until ONTAP 9.3, we only had the static QoS policy groups, but with 9.3, Adaptive QoS policy groups are also available. That means ONTAP can now automatically scale performance with storage: as the volume size grows, ONTAP automatically updates the throughput ceiling or floor. It does this by scanning the volumes every five minutes.

As this happens, it maintains the IO Density, the IOPS per terabyte. The advantage is that once you assign the Adaptive QoS policies to the volumes, ONTAP completely takes over: as a volume grows, it automatically adjusts the thresholds, and that’s a real advantage when you’re managing tens of thousands of volumes. In our case we also use Ansible, and in the next slide we’re going to see the workflow, from start to finish, of how we’re using Ansible to deploy AQoS.
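Creating one of these adaptive policies with the netapp.ontap collection looks roughly like this. The 8192 IOPS/TB peak is the extreme tier from our catalog, and the policy name follows the V-Server-plus-tier convention shown later in the demo; the expected and absolute-minimum values here are illustrative:

```yaml
# Sketch: create the extreme-tier adaptive QoS policy group on an SVM.
- name: Create adaptive QoS policy group for the extreme tier
  netapp.ontap.na_ontap_qos_adaptive_policy_group:
    state: present
    name: "{{ svm }}-EX-CR"
    vserver: "{{ svm }}"
    peak_iops: 8192IOPS/TB
    expected_iops: 4096IOPS/TB     # illustrative
    absolute_min_iops: 500IOPS     # illustrative
    hostname: "{{ inventory_hostname }}"
    username: "{{ ontap_user }}"
    password: "{{ ontap_pass }}"
```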

So, like Victor mentioned, all the Ansible playbooks are hosted on a VM today. And for reference, here’s the URL for all the Ansible modules for ONTAP; you also have them for E-Series. On the right side, as you can see, are the same service tiers we spoke about earlier; we’ve defined all four service tiers: extreme, high write, performance, and value.

AQoS Deployment Workflow

Faisal: Here’s the deployment workflow. Start is where we gather information, like what the throughput thresholds should be. Remember the IO Density reports we generated; those give us an understanding of what the different thresholds should be. That’s also the point at which we determine whether there are any volume types that should not be included in the scans, if there are any exceptions.

Our first playbook goes in and deletes any of the old QoS policies, because you can only have one, either the QoS or the AQoS policies; you cannot have both assigned to a volume. The second playbook creates the Adaptive QoS policy groups on all the SVMs. The third one is where we had to put in most of the logic, because it assigns the Adaptive QoS policy groups to the volumes, and it has to look at which SVM each volume lives in and what aggregate it’s hosted on; remember, we defined the service tiers per aggregate. With that, most of the work is done. However, it doesn’t cover new volumes that are created, or the situation where we move volumes between aggregates, so we had to write another playbook that looks for any new volumes or any volumes that were moved. Those playbooks run every hour, and we’ll see that in the demo.
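A compressed sketch of that third playbook’s logic, for one tier: gather each volume with its SVM and aggregate, then derive the policy name from the aggregate’s tier. The aggregate naming convention and the "{{ svm }}-EX-CR" policy format are assumptions based on the examples shown in the demo:

```yaml
# Sketch: assign the extreme AQoS policy to volumes on extreme aggregates.
- name: Gather volumes with their SVM and aggregate
  netapp.ontap.na_ontap_rest_info:
    gather_subset: ["storage/volumes"]
    parameters:
      fields: "svm.name,aggregates.name"
    hostname: "{{ inventory_hostname }}"
    username: "{{ ontap_user }}"
    password: "{{ ontap_pass }}"
  register: vol_info

- name: Assign the matching adaptive policy group
  netapp.ontap.na_ontap_volume:
    state: present
    name: "{{ item.name }}"
    vserver: "{{ item.svm.name }}"
    qos_adaptive_policy_group: "{{ item.svm.name }}-EX-CR"
    hostname: "{{ inventory_hostname }}"
    username: "{{ ontap_user }}"
    password: "{{ ontap_pass }}"
  loop: "{{ vol_info.ontap_info['storage/volumes'].records }}"
  # The tier is encoded in the aggregate name; a FlexGroup would need all
  # of item.aggregates checked, not just the first one.
  when: item.aggregates is defined and 'extreme' in item.aggregates[0].name
```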

Demo

Victor: So, like Faisal was saying, these are all the modules that NetApp has written. The ones that start with CDOT are the old ones; they’re deprecated now. The new ones start with NetApp ONTAP, and those are the ones you use for cluster mode. So those are all the modules NetApp has developed, and like I said, each of them, let me click on one, yeah, gives you all the parameters you need to be able to use that particular module, and it also shows you examples of what you can do. This one is used when you want to configure an NTP server for your filers, so you can use it to make sure that all the clusters in your environment point to the same NTP server. And if you want to remove the entry, all you need to do is change this parameter from present to absent, run it, and it will go and delete it. All the modules are very, very easy to use.
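The NTP task Victor clicks through looks roughly like this (the server name is invented):

```yaml
# Sketch: point every cluster in the play at the same NTP server.
- name: Ensure the cluster uses our NTP server
  netapp.ontap.na_ontap_ntp:
    state: present            # flip to absent to delete the entry
    server_name: ntp1.example.com
    version: auto
    hostname: "{{ inventory_hostname }}"
    username: "{{ ontap_user }}"
    password: "{{ ontap_pass }}"
```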

For the QoS, following the flow chart that Faisal showed, the first thing we do is go and delete all the old QoS policies that we have on the filer. This is the module we use to do that. All it does is get all the volumes into an array, and we loop through that array to remove the policies and set them to none. So this is the first playbook we run, to remove all the old QoS policies in our environment. After that, we run another one that creates all the QoS policies based on the SVMs we have in our environment. This is where we define all the service levels. What it does is go to the filer, get all the SVMs, then loop through the SVMs and create all the policies that are defined here.
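The removal step can be as small as one task; in ONTAP, assigning the policy group "none" detaches a volume from its fixed policy (volume list gathered as in the assignment sketch above; loop details are illustrative):

```yaml
# Sketch: strip legacy fixed QoS policies from every volume.
- name: Set each volume's QoS policy group to none
  netapp.ontap.na_ontap_volume:
    state: present
    name: "{{ item.name }}"
    vserver: "{{ item.svm.name }}"
    qos_policy_group: none
    hostname: "{{ inventory_hostname }}"
    username: "{{ ontap_user }}"
    password: "{{ ontap_pass }}"
  loop: "{{ vol_info.ontap_info['storage/volumes'].records }}"
```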

Now, the next one is the one that assigns the policies. And a good thing about Ansible is that you can encrypt your code, and you can encrypt your passwords. Like this one: I encrypted this playbook, so when I cat it, it’s not going to show anybody what I’ve written unless you know the password I used to encrypt it.

To encrypt it, you use what they call Ansible Vault. If I want to open it, I still have to use Ansible Vault: I say edit, and it prompts me for the password of my Vault. Once I put in the Vault password, it displays the contents of my playbook. Also, if you look here, the playbook needs to authenticate, so I store all my passwords in a file and encrypt that file as well, so that my password is not displayed in clear text. The password used to log into the filer is stored in this file, and this file is also encrypted.
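The corresponding commands, roughly (file names invented):

```
ansible-vault encrypt aqos_assign.yml    # encrypt a playbook in place
ansible-vault view aqos_assign.yml       # prompts for the vault password
ansible-vault edit passwords.yml         # edit an encrypted vars file
ansible-playbook -i inventory aqos_assign.yml --ask-vault-pass
```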

What this one does is go and get all the QoS policies that I have on each particular filer. Once it gets the QoS policies, it gets all the volumes that I have on that filer. Then I did what we call a nested loop: it goes through them and checks, “Okay, which V server does this volume belong to, and which aggregate is it on?” And it matches them with the right QoS policy they’re supposed to have. So it does virtually what this flow chart shows. We run this one the first time on the filer, and it assigns all the volumes the QoS policy they need. But we have some other ones that we run every hour; those are scheduled as a cron job. All those do is check for new volumes that were created, or whether a volume has been moved from one aggregate to another, and assign the right QoS policy to that particular volume based on the new aggregate it has been moved to.

These are the actual cron jobs that I use to run them. So I’m going to run them here. You can see all the different filers it’s going to go to, to check whether there are new volumes created on those …
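Hourly entries of this sort would look something like the following in a crontab (paths and file names invented):

```
0 * * * *  ansible-playbook -i /opt/ansible/inventory_prod /opt/ansible/aqos_new_volumes.yml --vault-password-file /opt/ansible/.vault_pass
30 * * * * ansible-playbook -i /opt/ansible/inventory_prod /opt/ansible/aqos_moved_volumes.yml --vault-password-file /opt/ansible/.vault_pass
```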

Okay, so it’s done. When we run it, yeah, these are the different filers. Here you can see all the new volumes that it discovered. For this one, there was no volume discovered; for these ones, no new volumes either. But if you look at this one, these are all the volumes it discovered, and these are the aggregates the volumes are on: this is the volume, and this is the aggregate those volumes are on. Then after that, this is where the changes begin. Anything that has yellow means that it changed, like this one, so you can see where there’s a change.

So this one, it detected the aggregate and assigned the right QoS policy. If you look at this volume, it’s on this aggregate, extreme; and if you look at the V server, this is the V server for the volume, and this is the actual volume. And here, this is the aggregate for the volume. Here it assigns the QoS for that particular volume, because our QoS policy is named after the V Server and the type of QoS policy, EX-CR, meaning extreme. And if you look at our aggregates, these are our aggregates; the name has extreme in it. So we assign this policy to this volume based on the name of the V server and the type of policy it needs to be assigned. Anything in yellow means that particular one was changed: it was a new volume that was created and didn’t have any policy, so it assigned the new policy to that particular volume.

We have another one that checks whether a volume has been moved. This one also goes through… oh sorry, I missed it. Yeah, so here, there’s no volume that was moved, so it did not find anything. Another thing about Ansible: it’s also about having what they call an inventory file. The inventory file is where you put all the devices that you want the playbook to run against. So this is my inventory file, and these are all our production clusters, most of our production clusters. Then we have another one here; yeah, these are all our subprod clusters. When you run the command, you have to reference the inventory it’s supposed to use; if you see here, I referenced the inventory, this is the one that has all our subprod clusters, and I also referenced the production one. So it will go read all the clusters, log into them, and perform what is in the playbook on those clusters.

If you’re using Ansible, the first thing you have to understand is how to create your inventory. In the inventory, you can group your filers, your storage, based on production and subprod; you can group them based on application; you can group them based on data. Maybe there’s a particular filer you use specifically for databases, so you can group them in different tiers. And when you write your playbook, you can specify which particular group you want the playbook to act on.
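A minimal inventory in Ansible’s INI format, grouped the way Victor describes (hostnames invented):

```
[prod]
cluster01.example.com
cluster02.example.com

[subprod]
cluster-dev01.example.com

[database]
cluster02.example.com
```

Running `ansible-playbook -i inventory_prod aqos_assign.yml` then limits the run to the clusters listed in that inventory file.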

Faisal: Each volume can only have one service level; it can either be value or extreme, for example. Say you’re an application owner and you’ve requested a volume with a max of 512 IOPS per terabyte, so I place you in the value tier. Tomorrow you have more application workloads that are going to consume more IOPS, so I could move you into the extreme tier. The way I’d do that is just a volume move into an extreme aggregate. My Ansible playbook that runs every hour is going to detect that, “Okay, this volume has moved from value to extreme,” and it’s going to change the AQoS policy assigned to the volume, and thereby you get more IOPS on the extreme tier.

Victor: You can also mask your username and password by using Ansible Vault. Like I showed in my playbook, let me open the playbook again. Look here: my password is in a file called the password … Yeah, let me try to open my … Yeah, see, it’s encrypted. Okay. So unless you know the password to decrypt this, you will not be able to see my password. My password file is encrypted, and in my playbook I reference that password file here. That’s how you can mask your password and username in an Ansible playbook, by using what they call Ansible Vault.

Thank you for reading. Click here for the recording of this webinar.