
Host Aggregates and Availability Zones in Metapod

Blog Post created by ivangon on Aug 26, 2015

OpenStack has flexible partitioning capabilities that allow cloud administrators to subdivide their environments into logical groups of resources that share some commonality. For example, one group can be for compute nodes that share a common configuration, like fast disks. Another group can be for compute nodes that share a common power source and are therefore part of the same fault domain. The two partitioning capabilities I’ll be discussing are Host Aggregates and Availability Zones. Administrators can configure one or both simultaneously; the choice depends purely on what options they want to offer their users. There’s often confusion about these two features, and in this post, I hope to provide clarity on when and how they can be used.

 

Both of these mechanisms are ultimately used to influence “where” (on which compute nodes) new instances (VMs) are launched. Cloud administrators define the actual Host Aggregate and Availability Zone groupings and then expose those groups to the cloud users. Cloud users can then select a group based on where they want their instances launched.

 

Some have said to me: “I thought the cloud was supposed to take care of the placement of all instances. Cloud was supposed to make it so the users don’t need to care about placement.”

 

That’s true to a certain degree, but users typically want this type of additional flexibility and choice because it helps them architect resiliency, efficiency, and performance into their applications. To use it effectively, they need some basic knowledge of the layout of the cloud. For example, they need to be able to select the appropriate bucket of compute nodes with the right characteristics to run their workload, without having to be concerned with selecting a specific compute node; that would be too granular and more difficult to manage. They need a framework that gives them the right amount of choice and control. Used correctly, Host Aggregates and Availability Zones give users exactly the information they need and a means for coarse-grained placement.

 

Whenever possible, consideration should be given to these features in the planning phase of your OpenStack deployment, but there’s no restriction on enabling them after the fact.

 

The first feature we’ll discuss is Host Aggregates. Typically, Host Aggregates allow a cloud administrator to define groups of compute nodes based on their hardware configuration. This could provide users the choice of launching instances on specific hardware, like security-optimized compute nodes, memory-optimized compute nodes, or storage-optimized compute nodes. (This is only one example. Keep in mind that OpenStack provides flexibility for cloud administrators to define groupings any way they like.)

 

Unlike Availability Zones (which we’ll talk about later), Host Aggregate groups are not directly selectable by cloud users; they are not directly visible on the dashboard or through the API. Host Aggregates stay behind the scenes because they are associated with instance flavor definitions. Metadata is assigned to a flavor via its extra_specs, thus confining the flavor (and subsequently, its instances) to a particular group (Host Aggregate) of compute nodes. Recall that cloud users do not have the authority to create, modify, or delete flavors; flavor manipulation is reserved for administrators. Cloud users can, however, view the extra_specs associated with a flavor by using the CLI command “nova flavor-show <flavor>”.
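
To make the mechanics concrete, here is a minimal sketch. The flavor name and the class=security key are illustrative assumptions, not values from this example; any key/value pair works as long as the flavor’s extra_specs match the aggregate’s metadata:

    # admin: tag a flavor with a key/value pair that matches a Host Aggregate's metadata
    nova flavor-key m1.small.sec set class=security

    # user: inspect the flavor, including its extra_specs
    nova flavor-show m1.small.sec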

 

Host Aggregates Example

Let’s say I have a single OpenStack cluster with one Availability Zone defined (one AZ because the compute nodes are all in a single rack). There are seven compute nodes in the cluster, spanning three different hardware classes. Two of the compute nodes are configured as security-optimized servers, two are memory-optimized servers, and the remaining three are storage-optimized servers (with fast SSDs).

 

I want to offer these seven compute nodes to my cloud users in a way that differentiates them by hardware class, using Host Aggregates.

 

Here are the high level steps in creating Host Aggregates for your environment:

 

  1. Create the Host Aggregate name and details
    1. Do not specify an AZ (leave it blank)
    2. Create and assign a key/value pair to be used
  2. Associate the appropriate compute nodes with this Host Aggregate
  3. Create a new flavor definition that contains the matching key/value pair in its extra_specs
  4. Repeat for each class of server you are creating

 

Here are seven compute nodes in a single Availability Zone:

 

[Figure: seven compute nodes, mhv1 through mhv7, in a single Availability Zone]

 

mhv1 and mhv2 are the security-optimized compute nodes, mhv3 and mhv4 are the memory-optimized compute nodes, and mhv5, mhv6, and mhv7 are the storage-optimized compute nodes. We’ll create three Host Aggregates. Here is the desired Host Aggregate layout:

 

[Figure: desired Host Aggregate layout, grouping the seven compute nodes into three aggregates by hardware class]

 

Get a listing of the compute node names, then create the Host Aggregates using the “nova aggregate-create”, “nova aggregate-set-metadata”, and “nova aggregate-add-host” commands shown below:

 

[Screenshot: creating Host Aggregate AGG1]
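
If the screenshot is hard to read, the sequence is roughly the following sketch. The class=security metadata key is an illustrative choice (the screenshots may use a different key), and the aggregate ID of 1 assumes that’s what “nova aggregate-create” returned:

    # list the compute nodes so we know the host names
    nova hypervisor-list

    # create the aggregate; no availability zone argument is given
    nova aggregate-create AGG1

    # attach the metadata key/value pair (1 is the aggregate ID from above)
    nova aggregate-set-metadata 1 class=security

    # add the two security-optimized compute nodes
    nova aggregate-add-host 1 mhv1
    nova aggregate-add-host 1 mhv2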

 

Host Aggregate “AGG1” has been successfully created.

 

I will continue with the creation of two additional Host Aggregates. First, “AGG2”:

 

[Screenshot: creating Host Aggregate AGG2]

 

Then “AGG3”:

 

[Screenshot: creating Host Aggregate AGG3]
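
The other two aggregates follow the same shape. Again, the metadata keys and the aggregate IDs 2 and 3 are assumptions for illustration:

    nova aggregate-create AGG2
    nova aggregate-set-metadata 2 class=memory
    nova aggregate-add-host 2 mhv3
    nova aggregate-add-host 2 mhv4

    nova aggregate-create AGG3
    nova aggregate-set-metadata 3 class=storage
    nova aggregate-add-host 3 mhv5
    nova aggregate-add-host 3 mhv6
    nova aggregate-add-host 3 mhv7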

 

Here is the view from the Horizon dashboard. You can see the Aggregate Name, Metadata, and Hosts:

 

[Screenshot: Horizon dashboard listing the three Host Aggregates with their Aggregate Name, Metadata, and Hosts]

 

The next action is to associate these three Host Aggregates with flavors. Here we create three new flavors (m1.small.sec, m1.small.mem, m1.small.ssd), then associate the appropriate extra_specs key/value pair with each (of course, OpenStack is flexible enough to support as many flavors as I can imagine):

 

[Screenshot: creating the three flavors and setting their extra_specs]
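
Here is a sketch of the flavor side. The RAM/disk/vCPU sizes are illustrative (only the flavor names come from this example), and depending on which scheduler filters are enabled, the keys may need to be scoped as aggregate_instance_extra_specs:class rather than the bare class shown here:

    # nova flavor-create <name> <id> <ram MB> <disk GB> <vcpus>
    nova flavor-create m1.small.sec auto 2048 20 1
    nova flavor-create m1.small.mem auto 2048 20 1
    nova flavor-create m1.small.ssd auto 2048 20 1

    # pin each flavor to its aggregate via a matching extra_specs pair
    nova flavor-key m1.small.sec set class=security
    nova flavor-key m1.small.mem set class=memory
    nova flavor-key m1.small.ssd set class=storage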

Now that I have the flavors defined, I can launch instances to validate the behavior and desired effect. Notice how the users don’t see the Host Aggregates per se; they only see the flavors. This is just the right amount of information:

[Screenshot: the flavor list as seen by a cloud user]

 

I will launch ten new instances using the “.sec” flavor. We can see that all of the instances have been constrained to launch only on mhv1 and mhv2, exactly as configured. No other compute node is running any of my “.sec” instances, as confirmed by the output of the “nova list” command:

 

[Screenshots: booting ten “.sec” instances and the resulting placement on mhv1 and mhv2]
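
A sketch of that test; the image and instance names are illustrative:

    # boot ten instances with the security-optimized flavor
    for i in $(seq 1 10); do
        nova boot --flavor m1.small.sec --image cirros sec-test-$i
    done

    # as an admin, confirm placement; only mhv1 and mhv2 should be hosting them
    nova hypervisor-servers mhv1
    nova hypervisor-servers mhv2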

 

We can confirm the behavior of the other flavor types as well. Again, I boot ten new instances, this time with the “.mem” flavor, and validate where they are launched:

 

[Screenshot: booting ten “.mem” instances]

 

The ten “.mem” instances are constrained to launch on mhv3 and mhv4.

 

[Screenshots: placement of the “.mem” instances on mhv3 and mhv4]

 

Availability Zone Example

In the first scenario, we had only one Availability Zone defined. In this next example, we’ll build on the Host Aggregates configuration and add new Availability Zone definitions. As the previous example demonstrated, Host Aggregates give users a choice of which type of compute node to deploy their instances to. In this scenario, we’ll give cloud users an additional level of control: the ability to select which Availability Zone to deploy to as well. We’ll create two new AZs, named ZoneA and ZoneB, to replace the single AZ from the first example. ZoneA and ZoneB define different fault domains, each with, perhaps, different power and network connectivity. This is an example of how you can make cloud users aware of these different domains and give them the ability to split their workloads evenly across them for redundancy.

 

Here are the high level steps in creating Availability Zones for your environment:

 

  1. Create the Host Aggregate name and details
    1. Specify an AZ (use the Host Aggregate name as the Availability Zone name as well)
  2. Associate the appropriate compute nodes with this Availability Zone
  3. Repeat for each Zone you are creating

 

We have now split the single AZ in the previous diagram into two separate AZs:

 

[Figure: the single Availability Zone split into two, ZoneA and ZoneB]

 

We’ll create the two new Availability Zones using the “nova aggregate-create <name> <availability-zone>” command. The creation follows the same process as creating a Host Aggregate; however, you specify both the Aggregate name AND the new Availability Zone name as arguments. Here we create the Availability Zone named “ZoneA”:

 

[Screenshot: creating ZoneA]

 

Next, we’ll associate compute nodes with ZoneA, just as we did for Host Aggregates:

 

[Screenshot: adding compute nodes to ZoneA]

 

Now we’ll create ZoneB:

 

[Screenshot: creating ZoneB and adding compute nodes]
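
Both zone definitions together look roughly like the sketch below. The aggregate IDs 4 and 5 are assumptions, and apart from mhv3 and mhv7 (ZoneA) and mhv4 (ZoneB), which are confirmed later in this post, the host split is illustrative:

    # the second argument sets the Availability Zone name as well
    nova aggregate-create ZoneA ZoneA
    nova aggregate-add-host 4 mhv3
    nova aggregate-add-host 4 mhv7
    # ...remaining ZoneA hosts...

    nova aggregate-create ZoneB ZoneB
    nova aggregate-add-host 5 mhv4
    # ...remaining ZoneB hosts...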

 

Now we have our two AZs defined:

 

[Screenshot: the two Availability Zones and their member hosts]

 

One thing you’ll notice that’s different about creating Availability Zones is that you cannot associate a compute node with more than one Availability Zone. Here I tried to add mhv7 to ZoneB, but mhv7 already belongs to ZoneA:

 

[Screenshot: the attempt to add mhv7 to ZoneB fails because it already belongs to ZoneA]
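
The failing command would look like the sketch below (again assuming ZoneB’s aggregate ID is 5). A host can belong to many plain Host Aggregates, but to only one Availability Zone, so nova rejects the request:

    # mhv7 is already in ZoneA, so this returns an error
    nova aggregate-add-host 5 mhv7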

 

Here we can see the Availability Zone definitions on the dashboard:

 

[Screenshot: the Availability Zone definitions on the Horizon dashboard]

 

Next, we’ll launch “.mem”-flavored instances, using the --availability-zone argument of the “nova boot” command to boot the instances into “ZoneA”:

[Screenshot: booting “.mem” instances into ZoneA]
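
A sketch of the boot command; the image and instance names are illustrative:

    # pin the instances to ZoneA; the .mem flavor still pins them to
    # the memory-optimized aggregate
    for i in $(seq 1 10); do
        nova boot --flavor m1.small.mem --image cirros \
            --availability-zone ZoneA zonea-mem-$i
    done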

 

We can see that the instances are constrained to the memory-optimized compute nodes, just as in our Host Aggregate example, but in this case they have been further constrained to launch only on the memory-optimized nodes in ZoneA (mhv3):

[Screenshots: the ZoneA “.mem” instances all land on mhv3]

 

We’ll again launch “.mem”-flavored instances, but this time onto the memory-optimized compute nodes in “ZoneB”:

 

[Screenshot: booting “.mem” instances into ZoneB]

 

We can see that the instances are still constrained to the memory-optimized compute nodes, but now only launch on the memory-optimized nodes in ZoneB (mhv4):

 

[Screenshots: the ZoneB “.mem” instances all land on mhv4]

 

The last test is to omit the availability zone argument from the command entirely. We’ll see that the instances are load-balanced across the memory-optimized compute nodes in both ZoneA AND ZoneB:

 

[Screenshot: booting “.mem” instances without an availability zone argument]
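
Omitting the argument leaves the zone choice to the scheduler (the instance count here is illustrative):

    # no --availability-zone: the scheduler spreads the instances across
    # the memory-optimized hosts in both zones (mhv3 and mhv4)
    for i in $(seq 1 10); do
        nova boot --flavor m1.small.mem --image cirros anyzone-mem-$i
    done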

 

We see that the instances are balanced across mhv3 and mhv4:

 

 

[Screenshot: the instances balanced across mhv3 and mhv4]

 

You can see that I’ve used a combination of Host Aggregates (via flavor definitions) and Availability Zones to launch my instances exactly where, and on exactly the class of hardware, I want them. We have shown examples of how and when these features can be used to group resources and provide flexibility and choice for cloud users.
