OpenStack has flexible partitioning capabilities that allow cloud administrators to subdivide their environments into logical groups of resources that share some commonality. For example, one group might contain compute nodes with a common hardware configuration, like fast disks. Another group might contain compute nodes that share a common power source and are therefore part of the same fault domain. The two partitioning capabilities I’ll be discussing are Host Aggregates and Availability Zones. Administrators can configure one or both; it’s purely up to them and what choices they want to offer their users. There’s often confusion about these two features, and in this post, I hope to provide clarity on when and how they can be used.
Both of these mechanisms are ultimately used to influence “where” (on which compute nodes) new instances (VMs) are launched. Cloud administrators define the actual Host Aggregate and Availability Zone “groupings” and then expose those groups to the cloud users. Cloud users can select the groups to use based on where they want their instances launched.
Some have said to me: “I thought the cloud was supposed to take care of the placement of all instances. Cloud was supposed to make it so the users don’t need to care about placement.”
It’s true to a certain degree, but users typically want this type of additional flexibility and choice because it helps them architect resiliency, efficiency, and performance into their applications. To do this effectively, they need some basic knowledge of the layout of the cloud. For example, they need to be able to select the appropriate bucket of compute nodes with the right characteristics to run their workload, without having to be concerned about selecting a specific compute node; that would be too granular and more difficult to manage. They need a framework that gives them the right amount of choice and control. Used correctly, Host Aggregates and Availability Zones can give users exactly the information they need and a means for coarse-grained placement.
Whenever possible, consider these features in the planning phase of your OpenStack deployment, but there’s no restriction on enabling them after the fact.
The first feature we’ll discuss is Host Aggregates. Typically, Host Aggregates allow a cloud administrator to define groups of compute nodes based on their hardware configuration. This could provide users the choice of launching instances on specific hardware, like security-optimized compute nodes, memory-optimized compute nodes, or storage-optimized compute nodes. (This is only one example. Keep in mind that OpenStack provides flexibility for cloud administrators to define groupings any way they like.)
Unlike Availability Zones (which we’ll talk about later), Host Aggregate groups are not directly selectable by cloud users; they aren’t directly visible in the dashboard or through the API. Host Aggregates stay hidden because they are associated with instance flavor definitions. Metadata is assigned to a flavor via its extra_specs, thus confining which group (Host Aggregate) of compute nodes the flavor (and subsequently, its instances) can run on. Recall that cloud users do not have the authority to create, modify, or delete flavors; flavor manipulation is reserved for administrators. Cloud users can, however, view the extra_specs associated with a flavor by using the CLI command “nova flavor-show <flavor>”.
Host Aggregates Example
Let’s say I have a single OpenStack cluster with one availability zone defined (one AZ because the compute nodes are all in a single rack). There are seven compute nodes in the cluster, but three different hardware classes. Two of the compute nodes are configured as security-optimized servers, two are memory-optimized servers, and the remaining three are storage-optimized servers (with fast SSDs).
I want to offer these seven compute nodes to my cloud users in a way that differentiates them based on compute node hardware class using Host Aggregates.
Here are the high level steps in creating Host Aggregates for your environment:
- Create the Host Aggregate name and details
- Do not specify an AZ (leave blank)
- Create and assign a key value pair to be used
- Associate the appropriate compute nodes to this Host Aggregate
- Create a new flavor definition that contains the appropriate key value pair in the extra specs
- Do this for each class of server you are creating
Here are seven compute nodes in a single Availability Zone:
mhv1 and mhv2 are the security-optimized compute nodes, mhv3 and mhv4 are the memory-optimized compute nodes, and finally mhv5, mhv6, and mhv7 are the storage-optimized compute nodes. We’ll create three Host Aggregates, one for each hardware class.
Get a listing of the compute node names, then create the host aggregates using the “nova aggregate-create,” “nova aggregate-set-metadata,” and “nova aggregate-add-host” commands shown below:
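Here is a sketch of those commands for the first aggregate, “AGG1” (the metadata key sec=true is illustrative; hostnames come from the example, and command output is omitted):

```shell
# List the compute nodes known to nova
nova host-list

# Create the aggregate; note that no availability zone is specified
nova aggregate-create AGG1

# Tag the aggregate with a key/value pair that a flavor can later match on
nova aggregate-set-metadata AGG1 sec=true

# Add the two security-optimized compute nodes
nova aggregate-add-host AGG1 mhv1
nova aggregate-add-host AGG1 mhv2
```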
Host Aggregate “AGG1” has been successfully created.
I will continue with the creation of two additional Host Aggregates, “AGG2” and “AGG3”:
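As a sketch, with mem=true and ssd=true as illustrative metadata keys:

```shell
# Memory-optimized aggregate
nova aggregate-create AGG2
nova aggregate-set-metadata AGG2 mem=true
nova aggregate-add-host AGG2 mhv3
nova aggregate-add-host AGG2 mhv4

# Storage-optimized aggregate
nova aggregate-create AGG3
nova aggregate-set-metadata AGG3 ssd=true
nova aggregate-add-host AGG3 mhv5
nova aggregate-add-host AGG3 mhv6
nova aggregate-add-host AGG3 mhv7
```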
On the Horizon dashboard, the same information appears under the Aggregate Name, Metadata, and Hosts columns.
The next action is to associate these three Host Aggregates with flavors. Here we create the three new flavors (m1.small.sec, m1.small.mem, m1.small.ssd), then associate the appropriate “extra_spec” key/value pair to each (of course OpenStack is flexible enough to support as many flavors as I can imagine):
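A sketch of the flavor creation (the RAM/disk/vCPU sizes are illustrative, and the metadata keys assume the aggregate tags used earlier):

```shell
# nova flavor-create <name> <id> <ram-MB> <disk-GB> <vcpus>
nova flavor-create m1.small.sec auto 2048 20 1
nova flavor-create m1.small.mem auto 2048 20 1
nova flavor-create m1.small.ssd auto 2048 20 1

# Pin each flavor to its aggregate by matching the aggregate's metadata
nova flavor-key m1.small.sec set sec=true
nova flavor-key m1.small.mem set mem=true
nova flavor-key m1.small.ssd set ssd=true
```

For this matching to take effect, the AggregateInstanceExtraSpecsFilter must be enabled in the scheduler filter list in nova.conf. Scoping the flavor keys (e.g. aggregate_instance_extra_specs:sec=true) avoids collisions with other filters that also read extra_specs.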
Now that I have the flavors defined, I can launch instances to validate the behavior and desired effect. Notice how the users don’t see the Host Aggregates per se; they only see the flavors. This is just the right amount of information that’s needed:
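From the user’s perspective, only the flavors and their extra_specs are visible:

```shell
nova flavor-list
nova flavor-show m1.small.sec   # extra_specs in the output reveals the key, e.g. sec=true
```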
I will launch 10 new instances using the “.sec” flavor. We can see that all of the instances have been constrained to launch only on mhv1 and mhv2, exactly as configured. No other compute node is running any of my “.sec” flavors, as confirmed by the output of the nova list command:
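A sketch of the launch-and-verify commands (the image and network placeholders are hypothetical, not from the original environment):

```shell
# Launch 10 instances with the security-optimized flavor
for i in $(seq 1 10); do
  nova boot --flavor m1.small.sec --image <image> --nic net-id=<net-id> sec-inst-$i
done

# As admin, show which compute node each instance landed on
nova list --fields name,OS-EXT-SRV-ATTR:host
```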
We can confirm the behavior of the other two flavor types. Again, I boot 10 new instances, this time with the “.mem” flavor type, and validate where they are launched:
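The same sketch, substituting the memory-optimized flavor (placeholders again hypothetical):

```shell
for i in $(seq 1 10); do
  nova boot --flavor m1.small.mem --image <image> --nic net-id=<net-id> mem-inst-$i
done
nova list --fields name,OS-EXT-SRV-ATTR:host
```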
The 10 “.mem” instances are constrained to launching on mhv3 and mhv4.
Availability Zone Example
In the first scenario, we only had one Availability Zone defined. In the next example, we’ll build upon the Host Aggregates configuration and add new Availability Zone definitions. As the previous example demonstrated, Host Aggregates gave users a choice of which type of compute node to deploy their instances to. In this scenario, we’ll give cloud users an additional level of control: the ability to select which Availability Zone to deploy to as well. We’ll create two new AZs, named ZoneA and ZoneB, to replace the single AZ from the first example. ZoneA and ZoneB define different fault domains, each with perhaps different power and network connectivity. This is an example of how you can make cloud users aware of these different domains and thus give them the ability to split their workload evenly across them for redundancy purposes.
Here are the high level steps in creating Availability Zones for your environment:
- Create the Host Aggregate name and details
- Specify an AZ (Use the Host Aggregate name as the Availability Zone name as well)
- Associate the appropriate compute nodes to this Availability Zone
- Do this for each Zone you are creating
We have now split the single AZ from the first example into two separate AZs.
We’ll create the two new Availability Zones using the “nova aggregate-create <name> <availability-zone>” command. The creation is the same process as creating a Host Aggregate; however, you specify the Aggregate name AND the new Availability Zone name as arguments. Here we will create an Availability Zone named “ZoneA.”
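As a sketch:

```shell
# The second argument creates an availability zone with the same name as the aggregate
nova aggregate-create ZoneA ZoneA
```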
Next we’ll associate compute nodes to ZoneA, again, just as we did for Host Aggregates:
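The original post doesn’t state the full host split; the results later in the post place mhv3 and mhv7 in ZoneA, so one split consistent with that (the assignment of the remaining nodes is my assumption) would be:

```shell
nova aggregate-add-host ZoneA mhv1
nova aggregate-add-host ZoneA mhv3
nova aggregate-add-host ZoneA mhv5
nova aggregate-add-host ZoneA mhv7
```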
Now we’ll create ZoneB:
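Again as a sketch, with the remaining hosts (this split is an assumption consistent with mhv4 landing in ZoneB later in the post):

```shell
nova aggregate-create ZoneB ZoneB
nova aggregate-add-host ZoneB mhv2
nova aggregate-add-host ZoneB mhv4
nova aggregate-add-host ZoneB mhv6
```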
Now we have our two AZs defined:
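Both zones can be listed from the CLI (as admin, the internal zone for controller services also appears):

```shell
nova availability-zone-list
```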
One thing you’ll notice that’s different about creating Availability Zones is that you cannot associate a compute node with more than one Availability Zone. Here I tried to add mhv7 to ZoneB, but mhv7 already belongs to ZoneA:
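A sketch of the failing command:

```shell
nova aggregate-add-host ZoneB mhv7
# Rejected: mhv7 is already in ZoneA, and a host cannot belong to two availability zones
```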
The Availability Zone definitions are also visible on the Horizon dashboard.
Next, we’ll launch .mem flavored instances using the --availability-zone argument in the nova boot command to boot the instances to “ZoneA”:
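A sketch of the boot command (image and network placeholders are hypothetical):

```shell
# --availability-zone pins the request to ZoneA; the .mem flavor's
# extra_specs still restrict it to memory-optimized hosts
nova boot --flavor m1.small.mem --availability-zone ZoneA \
  --image <image> --nic net-id=<net-id> zonea-mem-1
```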
We can see that the instances are constrained to the memory-optimized compute nodes just like our Host Aggregate example before, but in this case, they have also been further constrained to launch only on the memory-optimized node in ZoneA (mhv3).
We’ll again launch .mem flavored instances, but this time we’ll launch them on the memory-optimized compute nodes in “ZoneB”:
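The same sketch, targeting ZoneB (placeholders again hypothetical):

```shell
nova boot --flavor m1.small.mem --availability-zone ZoneB \
  --image <image> --nic net-id=<net-id> zoneb-mem-1
```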
We can see that the instances are still constrained to the memory-optimized compute nodes, but now launch only on the memory-optimized node in ZoneB (mhv4).
The last test is to omit the availability zone argument from the command, and we’ll see that the instances are load balanced on memory-optimized compute nodes across ZoneA AND ZoneB:
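A sketch of the unpinned boot (placeholders hypothetical; with the default scheduler weighing, instances tend to spread across the eligible hosts):

```shell
# No --availability-zone: the scheduler may choose any host that
# satisfies the flavor's extra_specs, in either ZoneA or ZoneB
nova boot --flavor m1.small.mem --image <image> --nic net-id=<net-id> mem-anyzone-1
```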
We see that the instances are balanced across mhv3 and mhv4.
You can see that I’ve used a combination of Host Aggregates (via flavor definitions) and Availability Zones to launch my instances exactly where, and on which class of hardware, I want them. We have just shown examples of how and when these features can be used to group resources to provide flexibility and choice for cloud users.