
Executive Summary

VMware NSX brings industry-leading network virtualization capabilities to Cisco UCS and Cisco Nexus infrastructures, on any hypervisor, for any application, with any cloud management platform.  Adding state of the art virtual networking (VMware NSX) to best-in-class physical networking (Cisco UCS & Nexus) produces significant optimizations in these key areas:

  • Provision services-rich virtual networks in seconds
  • Orders of magnitude more scalability for virtualization
  • The most efficient application traffic forwarding possible
  • Orders of magnitude more firewall performance
  • Sophisticated application-centric security policy
  • More intelligent automation for network services
  • Best-of-breed synergies for multi data center
  • More simplified network configurations

Cisco UCS and Nexus 7000 infrastructure awesomeness

A well-engineered physical network always has been and will continue to be a very important part of the infrastructure. The Cisco Unified Computing System (UCS) is an innovative architecture that simplifies and automates the deployment of stateless servers on a converged 10GE network. Cisco UCS Manager simultaneously deploys both the server and its connection to the network through service profiles and templates, changing what was once many manual touch points across disparate platforms into one automated provisioning system. That’s why it works so well. I’m not just saying this; I’m speaking from experience.

Cisco UCS is commonly integrated with the Cisco Nexus 7000 series, a high-performance modular data center switch platform with many features highly relevant to virtualization, such as converged networking (FCoE), data center interconnect (OTV), Layer 2 fabrics (FabricPath, vPC), and location-independent routing with LISP. This typically represents best-in-class data center physical networking.

With Cisco UCS and Nexus 7000 platforms laying the foundation for convergence and automation in the physical infrastructure, the focus now turns to the virtual infrastructure. VMware NSX, when deployed with Cisco UCS and Cisco Nexus, elegantly solves many of the most pressing issues at the intersection of networking and virtualization. VMware NSX represents the state of the art for virtual networking.

1) Virtualization-centric operational model for networking

VMware NSX adds network virtualization capabilities to existing Cisco UCS and Cisco Nexus 7000-based infrastructures through the abstraction of the virtual network, complete with services such as logical switching, routing, load balancing, security, and more. Virtual networks are deployed programmatically with a similar speed and operational model as the virtual machine — create, start, stop, template, clone, snapshot, introspect, delete, etc. in seconds.

The virtual network allows the application architecture (including the virtual network and virtual compute) to be deployed together from policy-based templates, consolidating what was once many manual touch points across disparate platforms into one automated provisioning system. In a nutshell, VMware NSX is to virtual servers and the virtual network what Cisco UCS is to physical servers and the physical network.

2) More headroom for virtualization, by orders of magnitude (P*V)

VMware NSX provides the capability to dynamically provision logical Layer 2 networks for application virtual machines across multiple hypervisor hosts, without any requisite VLAN or IP Multicast configuration in the Cisco UCS and Cisco Nexus 7000 infrastructure. For example, thousands of VXLAN logical Layer 2 networks can be added or removed programmatically through the NSX API, with only a few static infrastructure VLANs; compared to what was once thousands of manually provisioned VLANs across hundreds of switches and interfaces.

Figure: NSX dynamic logical Layer 2 networks

Two of the most common breaking points when scaling a network for virtualization are:

  1. Limited number of STP logical port instances the switch control plane CPUs can support, placing a ceiling on VLAN density.
  2. Limited MAC & IP forwarding table resources available in switch hardware, placing a ceiling on virtual machine density.

VLANs and virtual machines: two things you don’t want a visible ceiling on. Fortunately, VMware NSX provides significant headroom for both, by orders of magnitude, for the simple reason that VLAN and STP instances are dramatically reduced, and hardware forwarding tables are utilized much more efficiently.

Consider (P1 × V1) = T: switch ports × number of active VLANs = STP logical ports. One thousand fewer infrastructure VLANs with VMware NSX translates into one thousand times fewer STP logical port instances loading the Cisco UCS and Nexus 7000 control plane CPUs. This can only help ongoing operational stability, along with the obvious scaling headroom.

Consider (P2 × V2) = D: physical hosts × VMs per host = virtual machine density. Normally, the size of the MAC & IP forwarding tables in a switch roughly determines the ceiling of total virtual machines you can scale to (D), as each virtual machine requires one or more entries. With VMware NSX, however, virtual machines attached to logical Layer 2 networks do not consume MAC & IP forwarding table entries in the Cisco UCS and Nexus 7000 switch hardware. Only the physical hosts require entries. In other words, with VMware NSX, the ceiling is placed on the multiplier (P2), not the total (D).

Reduced VLAN sprawl and logical Layer 2 networks compound to both simplify the Cisco UCS and Nexus configurations and significantly extend the virtualization scalability and virtual life of these platforms.
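To make the two multipliers concrete, here is a quick back-of-the-envelope sketch; every input number below is hypothetical, chosen only to illustrate the P × V arithmetic, not taken from a real deployment:

```python
# Illustrative scaling math for the two ceilings discussed above.
# All inputs are hypothetical round numbers, not measured values.

def stp_logical_ports(switch_ports: int, active_vlans: int) -> int:
    """T = P1 * V1: STP logical port instances loading the control plane."""
    return switch_ports * active_vlans

def vm_density(physical_hosts: int, vms_per_host: int) -> int:
    """D = P2 * V2: total virtual machines across the fabric."""
    return physical_hosts * vms_per_host

# Traditional design: 1000 VLANs trunked to 384 switch ports.
before = stp_logical_ports(384, 1000)   # 384,000 STP logical ports

# With NSX: the same ports carry only a few static infrastructure VLANs.
after = stp_logical_ports(384, 4)       # 1,536 STP logical ports

print(before // after)        # -> 250 (times fewer STP instances)
print(vm_density(500, 40))    # -> 20000 VMs, yet only 500 host MAC entries needed
```

The second print is the point of the section: the fabric hardware only needs forwarding entries for the 500 hosts, not the 20,000 virtual machines.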

3) Most efficient application traffic forwarding possible

Have you ever noticed the paradox that good virtualization is bad networking? For example, the network design that works best for virtualization (a Layer 2 fabric) isn’t the best design for Layer 3 traffic forwarding, and vice versa. That is, until now.

VMware NSX provides distributed logical Layer 3 routing capabilities for the virtual network subnets at the hypervisor kernel. Each hypervisor provides the Layer 3 default gateway, ARP resolver, and first routing hop for its hosted virtual machines. The result is the most efficient forwarding possible for east-west application traffic on any existing Layer 2 fabric design, most notably Cisco UCS.

Figure: NSX Distributed Layer 3 routing — intra host

In the diagram above, VMware NSX distributed logical routing provides east-west Layer 3 forwarding directly between virtual machines on the same Cisco UCS host, without any hairpin hops to the Cisco Nexus 7000 — the most efficient path possible.

VMware NSX spans multiple Cisco UCS hosts, acting as one distributed logical router at the edge. Each hypervisor provides high-performance routing only for its hosted virtual machines in the kernel I/O path, without impact on system CPU. Layer 3 traffic between virtual machines travels directly from source to destination hosts inside the non-blocking Cisco UCS fabric — the most efficient path possible.

Figure: NSX Distributed Layer 3 routing — inter host

This efficient Layer 3 forwarding works with the existing Cisco UCS Layer 2 fabric, keeping more east-west application traffic within the non-blocking server ports and minimizing traffic on the fewer uplink ports facing the Cisco Nexus 7000 switches.

With Layer 3 forwarding for the virtual network handled by the hypervisors on Cisco UCS, the Cisco Nexus 7000 switch configurations are simpler, because VMware NSX distributed routing obviates the need for numerous configurations of virtual-machine-adjacent Layer 3 VLAN interfaces (SVIs) and their associated HSRP settings.

Note: HSRP is no longer necessary with the VMware NSX distributed router, for the simple reason that virtual machines are directly attached to one logical router that hasn’t failed until the last remaining hypervisor has failed.

The Cisco Nexus 7000 switches are also made more scalable and robust, as the supervisor engine CPUs are no longer burdened with ARP and HSRP state management for numerous VLAN interfaces and virtual machines. Instead, VMware NSX decouples and distributes this function across the plethora of x86 CPUs at the edge.

4) More awesome firewall, by orders of magnitude (H*B)

Similar to the aforementioned distributed logical routing, VMware NSX for vSphere also includes a powerful distributed stateful firewall in the hypervisor kernel, which is ideal for securing east-west application traffic directly at the virtual machine network interface (inspecting every packet) with scale-out data plane performance. Each hypervisor provides transparent stateful firewall inspection for its hosted virtual machines, in the kernel, as a service – and yet all under centralized control.

The theoretical throughput of the VMware NSX distributed firewall is some calculation of (H × B): the number of Hypervisors × the network Bandwidth per hypervisor. For example, 500 hypervisors each with two 10G NICs would approximate to a 10 Terabit east-west firewall.
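The (H × B) calculation is simple enough to sketch; the inputs below mirror the worked example (500 hypervisors, two 10G NICs each):

```python
def distributed_fw_throughput_tbps(hypervisors: int,
                                   nics_per_host: int,
                                   gbps_per_nic: float) -> float:
    """Theoretical aggregate H * B: hosts times bandwidth per host, in Tbps."""
    return hypervisors * nics_per_host * gbps_per_nic / 1000.0

# 500 hypervisors, each with two 10 Gbps NICs:
print(distributed_fw_throughput_tbps(500, 2, 10))  # -> 10.0 (Tbps)
```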

Figure: NSX Distributed Firewall — intra host

As we see in the diagram above, the distributed firewall provides stateful east-west application security directly between virtual machines on the same Cisco UCS host, without any hairpin traffic steering through a traditional firewall choke point. Zero hops. The most efficient path possible.

The VMware NSX distributed firewall spans multiple Cisco UCS hosts, like one massive firewall connected directly to every virtual machine. Each hypervisor kernel provides the stateful traffic inspection for its hosted virtual machines. In other words, traffic leaving a Cisco UCS host and hitting the fabric has already been permitted by a stateful firewall, and is therefore free to travel directly to its destination (where it’s inspected again).

Figure: NSX Distributed Firewall — inter host

Given that the VMware NSX distributed firewall is directly adjacent to the virtual machines, sophisticated security policies can be created that leverage the enormous amount of application-centric metadata present in the virtual compute layer (things such as user identity, application groupings, logical objects, workload characteristics, etc.), far beyond basic IP packet header inspection.

As a simple example, a security policy might say that protocol X is permitted from the logical network “Web” to “App” – no matter the IP address. Consider a scenario where this application is moved to a different data center, with different IP address assignments for the “Web” and “App” networks, having no effect on the application’s security policy. No need to change or update firewall rules.

Finally, we can see again that more east-west application traffic stays within the low-latency, non-blocking Cisco UCS domain — right where we want it. This can only help application performance while freeing more ports on the Cisco Nexus 7000 previously needed for bandwidth to a physical firewall.

5) More awesome network services

One of the more pressing challenges in a virtualized data center is efficient network service provisioning (firewall, load balancing) in a multi-tenant environment. Of particular importance are the services establishing the perimeter edge — the demarcation point establishing the application’s point of presence (NAT, VIP, VPN, IP routing). Typical frustrations often include:

  • Limited multi-tenancy contexts on hardware appliances
  • Static service placement
  • Manually provisioned static routing
  • Limited deployment automation
  • Service resiliency

To address this, VMware NSX includes performance optimized multi-service virtual machines (NSX Edge Services), auto deployed with the NSX API into a vSphere HA & DRS edge cluster. Multi-tenancy contexts are virtually unlimited by shifting perimeter services from hardware appliances to NSX Edge virtual machines on Cisco UCS.

Figure: Sample VMware NSX logical topology on Cisco UCS

Dynamic IP routing protocols on the NSX Edge (BGP, OSPF, IS-IS) allow the Cisco Nexus 7000 switches to learn about new (or moved) virtual network IP prefixes automatically — doing away with stale and error-prone static routes.

VMware NSX Edge instances leverage HA & DRS clustering technology to provide dynamic service placement and perpetual N+1 redundancy (auto re-birth of failed instances), while Cisco UCS stateless computing provides the simplified and expedient restoration of service capacity (re-birth of failed hosts).

Figure: Application traffic flow. Before & After

With VMware NSX, traffic enters the Cisco UCS domain where all required network services for both north-south and east-west flows are applied using high-performance servers within the non-blocking converged fabric, resulting in the most efficient application flows possible.

Note: VMware NSX is also capable of bridging virtual networks to physical through the NSX Edge, where specific VXLAN segments can be mapped to physical VLANs connecting physical workloads, or extended to other sites.

6) Divide and Conquer multi data center

Solving the multi data center challenge involves tackling a few very different problem areas related to networking. Rarely does one platform have all the tools to solve all of the different problems in the most elegant way. It’s usually best to divide and conquer each problem area with the best tool for the job. In moving an application from one data center to another, the networking challenges generally boil down to three problem areas:

  1. Recreate the application’s network topology and services
  2. Optimize Egress routing
  3. Optimize Ingress routing

In abstracting the virtual network, complete with logical Layer 2 segments, distributed logical routing, distributed firewall, perimeter firewall, and load balancing, all entirely provisioned by API and software, VMware NSX is the ideal tool for quickly and faithfully recreating the application’s network topology and services in another data center. At this point the NSX Edge provides the application a consolidated point of presence for optimized routing solutions to solve against.

Figure: Multi data center with VMware NSX, Cisco OTV and LISP

The next problem area — optimized egress routing — is ideal for a tool like OTV on the Cisco Nexus 7000 series, where the virtual network’s NSX Edge is given a consistent egress gateway network at either data center, with localized egress forwarding. Cisco OTV services are focused on the DMZ VLAN and the NSX Edge, and not burdened with handling every individual network segment, every virtual machine, and every default gateway within the application. With this simplicity the OTV solution becomes more scalable to handle larger sets of applications, and easier to configure and deploy.

With the Cisco Nexus 7000 and OTV keying on the NSX Edge (via VIPs and IP routing) for the application’s point of presence, this serves as an ideal layering point for the next problem area of optimized ingress routing. This challenge is ideal for tools such as BGP routing, or LISP on the Cisco Nexus 7000 switches and LISP-capable routers, delivering inbound client traffic immediately and directly to the data center hosting the application.

7) A superior track record of integration and operational tools

It’s hard to think of two technology leaders with a better track record of doing more operationally focused engineering work together than Cisco and VMware. Examples are both recent and plentiful: the Cisco Nexus 1000V, Cisco UCS VM-FEX, the Cisco UCS Plugin for VMware vCenter, the Cisco UCS Plugin for VMware vCenter Orchestrator, and so on.

Operational visibility is all about providing good data and making it easily accessible. A comprehensive API is the basis on which two industry leaders can engineer tools together, exchanging data to provide superior operational visibility. Cisco UCS and VMware NSX are two platforms with a rich API engineered at their core (not bolted on as an afterthought). When looking at both the track record and capabilities of VMware and Cisco, working together to serve their mutual customers better, we’re excited about what lies ahead.

In closing

VMware NSX represents best-in-class virtual networking, for any hypervisor, any application, any cloud platform, and any physical network.  A well-engineered physical network is, and always will be, an important part of the infrastructure. Network virtualization makes it even better by simplifying the configuration, making it more scalable, enabling rapid deployment of networking services, and providing centralized operational visibility and monitoring into the state of the virtual and physical network.

The point of this post is not so much to help you decide what your data center infrastructure should be, but to show you how adding VMware NSX to Cisco UCS & Nexus will allow you to get much more out of those best-in-class platforms.

For more, check the VMware Network Virtualization Blog. Cheers!

In this post I will show the deployment of IPv6 using Provider Networks. There is no specific OpenStack release that I am dictating for this setup; I have used this configuration on releases from Kilo through Newton.

OpenStack Provider Networks with VLANs allow for the use of VLAN trunks from the upstream Data Center access layer/leaf/ToR switches to the Neutron networks within the OpenStack cloud. In the use case that I am discussing here, I want to use my Data Center aggregation layer switches as my first-hop Layer 3 boundary. I have no use for NAT, and I have no use for Neutron L3 agents (specific to running a tenant router).

The following diagram shows the topology that I am using. In this example I have a single All-in-One (AIO) OpenStack node. That node is running on a Cisco UCS C-Series with a Cisco VIC, which has a vPC configuration to the access layer ToR switches. There are VLAN trunks configured between the ToRs and the Data Center aggregation layer switches (only one shown for simplicity). VLAN 22 (2001:db8:cafe:16::/64) is the VLAN used in my examples. The text box in the diagram shows the NIC layout (ethX<>bonds):


If you want to know more about how the Managed (M) and Other (O) flags are used with various IPv6 assignment methods, consult RFC 5175.
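For quick reference, the three address assignment methods used later in this post map to those RA flags as follows (a small sketch; the keys match the Neutron ipv6-address-mode values used in the commands below):

```python
# Mapping of Neutron IPv6 address modes to the Router Advertisement flags
# (M = Managed address configuration, O = Other configuration).
RA_FLAGS = {
    "slaac":            {"M": 0, "O": 0},  # address from RA prefix only
    "dhcpv6-stateless": {"M": 0, "O": 1},  # address from RA, options (DNS) from DHCPv6
    "dhcpv6-stateful":  {"M": 1, "O": 1},  # address and options from DHCPv6
}

print(RA_FLAGS["dhcpv6-stateless"])  # -> {'M': 0, 'O': 1}
```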

We are going to jump right into configuration:

Assuming you have a running OpenStack deployment and have followed the guidelines for setting up Neutron to support Provider Networks with VLANs (OVS example, Linux Bridge example), all you have to do is create the provider network and subnet using the IPv6 address assignment method you want (SLAAC, Stateless DHCPv6, Stateful DHCPv6).

Create the Neutron Provider Network with VLAN

In the example below, I am indicating that the router is external (aggregation layer switches), the provider network is of the type VLAN and the VLAN (segmentation_id) associated with this network is VLAN 22:

neutron net-create --router:external --provider:physical_network provider --provider:network_type vlan --provider:segmentation_id=22 --shared external-net 

Create the Neutron Subnet using SLAAC

In the example below, I am using SLAAC as the IPv6 address assignment method. Note: It is very important to specify the --allocation-pool range with provider networks with VLANs, because if you don’t, the beginning of the IPv6 address range will likely cause a DAD (Duplicate Address Detection) failure with an IPv6 address already assigned on your upstream VLAN interfaces on the aggregation layer switches. In this example I am starting the allocation pool range at ::5 so that I do not conflict with addresses on my switches (i.e., ::1 through ::4):

neutron subnet-create external-net --ip-version=6 --ipv6-address-mode=slaac --ipv6-ra-mode=slaac --name=external-subnet-v6 --allocation-pool start=2001:db8:cafe:16::5,end=2001:db8:cafe:16:ffff:ffff:ffff:fffe 2001:db8:cafe:16::/64 
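To sanity-check an allocation pool like the one above, Python's ipaddress module can confirm that the switch-side addresses fall outside the pool; the addresses below are the ones from this example:

```python
import ipaddress

# The allocation pool from the subnet-create command above.
pool_start = ipaddress.IPv6Address("2001:db8:cafe:16::5")
pool_end   = ipaddress.IPv6Address("2001:db8:cafe:16:ffff:ffff:ffff:fffe")

# Addresses reserved for the aggregation switches / HSRP (::1 through ::4).
reserved = [ipaddress.IPv6Address(f"2001:db8:cafe:16::{i}") for i in range(1, 5)]

# Every reserved address must sit outside the pool to avoid DAD failures.
print(all(a < pool_start or a > pool_end for a in reserved))  # -> True
```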

Create the Neutron Subnet using Stateless DHCPv6

In the example below, I am using Stateless DHCPv6 as the IPv6 address assignment method. With Stateless and Stateful DHCPv6 you have the option to add the --dns-nameserver flag (since the O-bit [Other configuration] can be set). In this example I am setting 2001:db8:cafe:a::e as the DNS entry, which points to my DNS server referenced in the previous diagram. Again, it is important to set up the --allocation-pool range:

neutron subnet-create external-net --ip-version=6 --ipv6-address-mode=dhcpv6-stateless --ipv6-ra-mode=dhcpv6-stateless --name=external-subnet-v6 --allocation-pool start=2001:db8:cafe:16::5,end=2001:db8:cafe:16:ffff:ffff:ffff:fffe 2001:db8:cafe:16::/64 --dns-nameserver 2001:db8:cafe:a::e 

Create the Neutron Subnet using Stateful DHCPv6

In the example below, I am using Stateful DHCPv6 as the IPv6 address assignment method. As was the case with Stateless DHCPv6, Stateful DHCPv6 allows for the option to add the --dns-nameserver flag (since the O-bit can be set):

neutron subnet-create external-net --ip-version=6 --ipv6-address-mode=dhcpv6-stateful --ipv6-ra-mode=dhcpv6-stateful --name=external-subnet-v6 --allocation-pool start=2001:db8:cafe:16::5,end=2001:db8:cafe:16:ffff:ffff:ffff:fffe 2001:db8:cafe:16::/64 --dns-nameserver 2001:db8:cafe:a::e 

Example Configuration for the upstream Data Center aggregation layer switch (VLAN interfaces shown):


SLAAC:

This example shows VLAN 22 with an IPv6 address of 2001:db8:cafe:16::1/64. HSRPv2 is used as the First-Hop Redundancy Protocol.

interface Vlan22
 description Provider Network trunked for C7-os-1
 ip address
 ipv6 address 2001:DB8:CAFE:16::1/64
 standby version 2
 standby 2 ipv6 autoconfig
 standby 2 timers msec 250 msec 750
 standby 2 priority 110
 standby 2 preempt
 standby 2 authentication OPEN

Stateless DHCPv6:

This example is the same as the previous one with the exception of “ipv6 nd other-config-flag” being set. This flag sets the O-bit, which allows for the DNS option (or other options) to be sent to the VM in the Router Advertisement (RA).

interface Vlan22
 description Provider Network trunked for C7-os-1
 ip address
 ipv6 address 2001:DB8:CAFE:16::1/64
 ipv6 nd other-config-flag
 standby version 2
 standby 2 ipv6 autoconfig
 standby 2 timers msec 250 msec 750
 standby 2 priority 110
 standby 2 preempt
 standby 2 authentication OPEN

Stateful DHCPv6:

This example is also the same as the first one, with the exception of “ipv6 nd managed-config-flag” being set. This sets the M (Managed) and O (Other) bits. The M-bit indicates that the addressing comes from DHCPv6 (not SLAAC) and that the host wants options (DNS):

interface Vlan22
 description Provider Network trunked for C7-os-1
 ip address
 ipv6 address 2001:DB8:CAFE:16::1/64
 ipv6 nd managed-config-flag
 standby version 2
 standby 2 ipv6 autoconfig
 standby 2 timers msec 250 msec 750
 standby 2 priority 110
 standby 2 preempt
 standby 2 authentication OPEN

Source: Debug All | Have fun!


Cisco IMC 3.0 includes the Redfish server management API to help automate common management tasks.  Redfish provides a scalable, secure way to manage your servers that complements the Cisco UCS Unified API that is also part of the IMC.

Programming Your Server Infrastructure

“DevOps” is a frequently used term, and the “Dev” emphasizes that a large part of any DevOps tool or process is Development.  Any type of software development requires programming, and all programming requires an application programming interface, or API, to control whatever is being programmed.  “Hello World” in your favorite programming language still uses an API (or multiple APIs) to get characters printed on the screen.


In the world of server infrastructure management, APIs are the key to managing the wide variety of physical and logical resources that make up a server.  From BIOS settings to management controller user accounts, a complete API will let you control all server resources in the automation framework of your choice.  Since release in 2009, all Cisco UCS platforms have provided a scalable, secure API that provides complete control of all aspects of server management.  Cisco UCS has also supported legacy management standards such as IPMI to allow UCS management through a wide variety of management tools.


As part of a push to modernize standards like IPMI, the Distributed Management Task Force (DMTF) has developed the Redfish server management standard. Redfish helps secure server management by passing all traffic over HTTPS, the web-standard application layer protocol, on the standard HTTPS port. Redfish also specifies use of a RESTful API, and more than that, it specifies use of the Open Data Protocol (OData) RESTful API standard. OData is used by many enterprise software application suites and helps ensure interoperability of “RESTful” APIs.
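As a small illustration of the REST/OData conventions just described, the sketch below walks a Redfish collection document. The payload here is a hand-written sample in the shape the Redfish spec defines ("Members" holding "@odata.id" links), not output from a real IMC; a live query would be an HTTPS GET of /redfish/v1/Systems with credentials.

```python
import json

# Hand-written sample payload in the Redfish collection shape; on a real
# server this JSON would come from GET https://<imc-ip>/redfish/v1/Systems.
sample_systems_collection = json.dumps({
    "@odata.id": "/redfish/v1/Systems",
    "Members@odata.count": 1,
    "Members": [{"@odata.id": "/redfish/v1/Systems/1"}],
})

def member_urls(collection_json: str) -> list:
    """Extract the per-resource URLs from a Redfish collection document."""
    doc = json.loads(collection_json)
    return [m["@odata.id"] for m in doc.get("Members", [])]

print(member_urls(sample_systems_collection))  # -> ['/redfish/v1/Systems/1']
```

Each returned URL can then be fetched the same way to read the individual system resource (power state, model, serial, and so on).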


Comprehensive Management: Redfish and the Unified UCS API

While Redfish provides a way to query server resources and perform some common actions like server power on/off, operations such as storage controller configuration are not currently part of the specification.  Fortunately, the IMC’s UCS Unified API can be used to complement the Redfish API and provide additional functionality where needed.  Like Redfish, the UCS API is built on top of an object model with objects organized into a tree structure.  The UCS API also uses HTTPS and even the URLs used to access resources are similar between the Redfish and UCS API (IPs, ports, and user credentials are the same with either API).  When programming operations are performed in a higher-level scripting language like Python, many of the specific API differences are abstracted away. 


Cisco is also actively working to extend the capabilities of Redfish.  With Cisco IMC 3.0, extensions are provided for firmware update and IMC management including technical support downloads, IMC configuration backup, and IMC configuration restore.  Visit this GitHub repository to see a Python scripting example of both APIs in use with a common configuration file.


Also, check out this demo video which shows how the Redfish and UCS APIs can be used together for complete server management.


For additional information on Cisco IMC 3.0 and Redfish, visit the following sites:

We are pleased to announce the release of C-Series Standalone IMC (Integrated Management Controller) Software 3.0(1c), which is now available for download. The team has been hard at work on a portfolio of new and innovative features that have been bundled into this release. Key features packaged in this release include:


  • HTML5 WebUI (for M4 Servers): The WebUI has been updated, eliminating dependencies on Flash and providing an updated user interface and experience.
  • HTML5 vKVM (for M4 Servers): A new alternative to the Java vKVM, the new HTML5 vKVM provides remote console connectivity with enhanced functionality including chat, embedded server power controls and screen capture capabilities.
  • XML API Transaction Support: Building on top of the existing XML API, this release will introduce a new method (configConfMos) for these systems that will allow users to make multiple configuration changes in a single transaction. This will improve efficiency of the API while not sacrificing functionality.
  • Redfish Support: These systems also introduce support for Redfish, a DMTF standard. Support for this new standards-based API, which adheres to v1.01 of the specification, extends the programmatic capabilities of these systems, delivering capabilities including inventory queries and power control.
  • Multi-Language Support: These systems deliver localized language support in the WebUI with support for Chinese, English, Japanese, Korean, Russian and Spanish.
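Building on the XML API transaction item above, here is a hedged sketch of what a configConfMos request envelope can look like. The method name comes from the bullet above; the cookie, DN, and the equipmentLocatorLed child object are illustrative placeholders and have not been verified against a specific IMC schema version.

```python
import xml.etree.ElementTree as ET

def build_config_conf_mos(cookie: str, mos: dict) -> str:
    """Build a configConfMos envelope batching several managed-object
    configs into one transaction. `mos` maps DN -> (class name, attrs)."""
    root = ET.Element("configConfMos", cookie=cookie, inHierarchical="false")
    in_configs = ET.SubElement(root, "inConfigs")
    for dn, (cls, attrs) in mos.items():
        pair = ET.SubElement(in_configs, "pair", key=dn)
        ET.SubElement(pair, cls, dn=dn, **attrs)
    return ET.tostring(root, encoding="unicode")

# Placeholder cookie and an illustrative single-object change.
xml = build_config_conf_mos("placeholder-cookie", {
    "sys/rack-unit-1/locator-led": ("equipmentLocatorLed", {"adminState": "on"}),
})
print(xml)
```

A real request would POST this body over HTTPS to the IMC's XML API endpoint using a cookie obtained from a login call; batching several `pair` elements is what makes the change a single transaction.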


This release includes a number of other important features including:

  • BIOS profiles (One-Touch Configuration)
  • One click hardware inventory collection
  • Cisco IMC asset tag configuration
  • One-time boot (supports precision boot devices)
  • Local / LDAP user search priority
  • Power-on password
  • Cisco IMC IP whitelisting
  • Smart SSD data reporting


IMC Utilities Update Features:

  • Separation of Server Configuration Utility (Separate Diagnostics & Deployment Tools)
  • Driver Update Utility (Linux)


Platform Availability

Cisco C-Series Standalone IMC Software version 3.0(1c) is available for the following platforms:

  • Cisco UCS C220 M4
  • Cisco UCS C240 M4
  • Cisco UCS C460 M4
  • Cisco C3160
  • Cisco C3260
  • Cisco UCS C220 M3
  • Cisco UCS C240 M3
  • Cisco UCS C22 M3
  • Cisco UCS C24 M3


Additional Information

Key information for Cisco C-Series Standalone IMC Software includes:

Cisco IMC Software 3.0 Release Notes

Cisco IMC Software Installation and Configuration Guides

Cisco IMC Programmers Guide

Cisco IMC Release Bundle Contents

The next generation rack infrastructure is now orderable in CCW, including new Cisco R42612 Rack and RP Series Metered Input Power Distribution Units (PDUs).  The new rack and PDUs are optimized for Cisco UCS platforms and compatible with Cisco UCS Integrated Infrastructure.


Cisco R42612 is a 1200mm-deep, 42U, industry-standard 19-inch rack. With a deeper design than the previous-generation R42610 rack, it provides ample room for power, cooling, cable management, and serviceability. The new rack comes in two models for installation at customer sites:

  • Cisco R42612 standard rack, with side panels for single and end-of-row deployments
  • Cisco R42612 expansion rack, no side panels for multiple rack deployments



The RP Series Metered Input PDUs feature phase and section metering, remote monitoring through network communication, ±1% billing grade accuracy, an advanced LCD pixel display, IEC outlet grip plug retention, and a high operating temperature. The PDUs come in seven models:

  • 24A Metered Input 1-Phase 4x C19, 8x C13 - 1U Mount PDU
  • 48A Metered Input 3-Phase 12x C19 - 2U Mount PDU
  • 24A Metered Input 1-Phase 6x C19, 36x C13 - 0U PDU
  • 24A Metered Input 3-Phase 6x C19, 30x C13 - 0U PDU
  • 48A Metered Input 3-Phase 12x C19, 9x C13 - 0U PDU
  • 32A Metered Input 1-Phase 6x C19, 36x C13 - 0U PDU
  • 32A Metered Input 3-Phase 12x C19, 12x C13 - 0U PDU


For more information:


Cisco R42612 Rack Datasheet

Cisco RP Series Metered Input PDUs Datasheet

Spec Sheet


IMC Supervisor Demo Videos

Posted by devikuma Oct 17, 2016

Cisco IMC Supervisor enables centralized management of standalone C-Series Rack Servers, Storage Rack Servers, and E-Series Rack Servers.


IMC Supervisor is a lightweight virtual appliance supported on VMware hypervisors and Microsoft Hyper-V. It provides features and capabilities in key areas such as Discovery, Inventory, Management, Maintenance, Monitoring, and Support.


In this video series, we are going to see how simple and easy it is to manage standalone rack servers at scale using Cisco IMC Supervisor.


This video series will take you through a use case where a customer has procured a few hundred rack servers, showing how IMC Supervisor enables Day 0 operations as the servers are racked up and powered on, Day 1 operations as the servers are configured and provisioned, and Day 2 operations where the servers need to be monitored and maintained.


IMC Supervisor - Day0 Operations

IMC Supervisor - Day1 Operations

IMC Supervisor - Day2 Operations




In September 2016, Cisco released UCS Manager 3.1(2). It is available for download on Cisco’s website. UCS Manager 3.1(2) has a number of key software and hardware support enhancements.


From a hardware perspective, the biggest enhancement is support for the C3260 High Capacity Storage server under UCS Manager. The C3260 is a single- or dual-server, high-density, bare-metal, x86-based enterprise storage server. With UCS Manager 3.1(2), UCS now offers chassis profiles and chassis firmware package management in addition to the storage profiles that were previously offered and are widely used with C3260 servers. In addition, existing C3260 users can move to UCS Manager 3.1(2) and retain their current disk configuration, getting policy-based management without having to re-configure the server. C3260 servers are often used in big data, cloud, object storage, and content delivery workloads where servers with a large number of drives are helpful.


UCS Manager 3.1(2) also adds support for a large number of additional servers and server accessories. These include B420 M4, B260 M4, B460 M4, and C460 M4 servers with the Intel E5-4600 v4 or Intel E7-8800 v4 series CPUs. In addition, there is support for additional network adapters, NVMe PCIe cards and SSDs, drives, SAS HBAs, self-encrypting drives, NVIDIA GPUs, and more. For details on all the additional hardware supported in UCS Manager 3.1(2), please see the UCS Manager 3.1 Release Notes.


In addition to the many new hardware options, there are also a large number of software enhancements in UCS Manager 3.1(2). Most of these improve the operation of UCS Manager, including factory reset of servers, vNIC redundancy pairs to keep redundant vNICs in sync, power management enhancements, and much more. However, the most visible changes are in the UCS Manager 3.1(2) HTML 5 GUI: it now has icons down the side of the left-hand navigation pane instead of tabs across the top, the navigation pane can be collapsed by clicking an icon, the topology maps are improved, and colors and icons are more consistent across the UCS portfolio. For additional details on all the new software enhancements in UCS Manager 3.1(2), please see the UCS Manager 3.1(2) Release Notes.
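Many of these operational features can also be driven programmatically: UCS Manager exposes an XML API (the same interface underlying the ucsmsdk Python SDK). Here is a minimal stdlib-only sketch of building the standard aaaLogin request that opens an API session; the appliance address is a placeholder, and this is an illustrative sketch rather than production code:

```python
import urllib.request
import xml.etree.ElementTree as ET

UCSM_URL = "https://ucsm.example.com/nuova"  # placeholder UCS Manager address

def build_login_xml(username, password):
    """aaaLogin is the standard first call of the UCS XML API."""
    return ET.tostring(ET.Element("aaaLogin", inName=username, inPassword=password))

def login(username, password):
    """POST the login request and return the session cookie (needs a live UCSM)."""
    req = urllib.request.Request(
        UCSM_URL,
        data=build_login_xml(username, password),
        headers={"Content-Type": "application/xml"},
    )
    with urllib.request.urlopen(req) as resp:
        return ET.fromstring(resp.read()).get("outCookie")

print(build_login_xml("admin", "example-password").decode())
```

The cookie returned by aaaLogin is then passed on every subsequent request, such as configuration calls.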


UCS Manager 3.1(2) is available today on Cisco’s website.

September 20th, 2016

Cisco is pleased to announce the release of IMC Supervisor 2.1.


Cisco IMC Supervisor is a management system that enables management of up to 1,000 Standalone C-Series and E-Series Servers.


New in the v2.1 release, IMC Supervisor delivers the following enhancements:

  • Support for UCS C3260 Storage Server
  • Enhanced Scheduler Capabilities
  • Support for Multiple Diagnostic Images
  • E-Mail Notification Enhancements (per server)
  • Enhanced CSV Import
  • Clear User option
  • Multi-KVM Launch
  • New Network Policy (Support of UCS C3260 Storage Server)
  • Updated Drive Configuration Options
  • Policy/Profile Deployment Scheduler
  • REST API Enhancements
  • Profiles & Policy Framework for UCS C3260 Storage Server
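For the REST API, IMC Supervisor follows the UCS Director convention of passing an operation name and a JSON payload as query parameters, with a per-user API key sent as a request header. A hedged sketch of composing such a request; the operation name, parameter names, and header name below are assumptions for illustration, and the REST API Cookbook documents the real operations:

```python
import json
import urllib.parse

BASE = "https://imcs.example.com/app/api/rest"  # placeholder appliance address

def build_request_url(op_name, op_data):
    """Compose a UCS Director-style REST request URL."""
    query = urllib.parse.urlencode({
        "formatType": "json",
        "opName": op_name,            # hypothetical operation name goes here
        "opData": json.dumps(op_data),
    })
    return f"{BASE}?{query}"

# The per-user API key would normally accompany the request as a header,
# e.g. {"X-Cloupia-Request-Key": "<key>"} (assumed header name).
print(build_request_url("getInventory", {"groupId": 1}))
```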


The release is now available for download on Cisco’s website:


IMC Supervisor 2.1 Software Download

IMC Supervisor 2.1 Release Notes

IMC Supervisor 2.1 Installation Guide

IMC Supervisor 2.1 Management Guide




IMC Supervisor implements license enforcement. The license structure includes a base license (per instance of IMC Supervisor) and a required secondary license tied to the number of systems under management. New in IMC Supervisor is an advanced license offering that enables policy-based configuration. Support is also available and tied to the number of managed systems (endpoints).



Cisco is pleased to announce the release of IMC Supervisor v2.0


Cisco IMC Supervisor is a management system that enables management of up to 1,000 Standalone C-Series and E-Series Servers.


New in the v2.0 release, IMC Supervisor delivers the following enhancements:

  • Northbound API implementation (Phase 1)
  • Smart Call Home support
  • IMC Supervisor automated patch download
  • Policy delete override
  • LDAP group to role mapping support
  • Supervisor manual HUU image upload
  • Server utilization statistics collection
  • Scheduler (discovery & firmware updates)
  • TechSupport file management
  • Non-interactive diagnostic suite management
  • FlexFlash policy support



The release is now available for download on Cisco’s website:

IMC Supervisor 2.0 Software Download

IMC Supervisor 2.0 Release Notes

IMC Supervisor 2.0 Installation Guide

IMC Supervisor 2.0 Management Guide

IMC Supervisor REST API Getting Started Guide

IMC Supervisor REST API Cookbook






Hardware Health Status (Monitoring)



Platform Hardware Inventory



Platform Management with vKVM Launcher



Firmware Inventory + Management



Call Home (E-mail Alerts)



Cisco Smart Call Home (TAC Notification)



Platform Grouping & Tagging



Group Discovery



Scheduling (Discovery & Firmware Management)



Non Interactive Diagnostic Tool Integration



Server Utilization Stats Collection (C220 M4 & C240 M4)






Policy Import



Policy Deletion



Policy Deployment (Single Server/Multiple Servers)



HW Profile Deployment (Single Server/Multiple Servers)




IMC Supervisor implements license enforcement. The license structure includes a base license (per instance of IMC Supervisor) and a required secondary license tied to the number of systems under management. New in IMC Supervisor is an advanced license offering that enables policy-based configuration. Support is also available and tied to the number of managed systems (endpoints).


Hi everyone,


I am new to posting stuff around here so I will start by being curious!


What size of environment are you running, and with what tools?


The goal of this is to get a feel for who you are and what you have.


For me here it is:


Working for CGI for 11 years

I have 3 terrific team members


The Cisco environment is mainly UCS (we still have a few servers to get rid of)


6 Domains on two sites

160 blades

mainly running VMware and Wintel (roughly 10K VMs, 130 hosts, and around 30 physical servers)


We are using UCS Central and are slowly pushing all our policies in there (just migrated to 1.4.1).


Our VMware environment is mainly 5.1 and migrating to 6.1 and 5.5



Adopt the Power of Unification, Innovation, and Security with UCS Manager 3.1






Cisco is pleased to announce the release of the next version of UCS Manager, UCS Manager 3.1(1e). It is now available for download on Cisco’s website.


UCS Manager 3.1 Key Enhancements


UCS Manager 3.1(1) has a number of key updates. These include:

  • Unified Release for 6200 series, 6324, and 6332 series Fabric Interconnects
  • HTML 5 based User Interface
  • “On Next Reboot” Maintenance Policy
  • VIF/interface status check after firmware upgrade reboot
  • Option to exclude specific server components from host firmware packages
  • Support for 6332 series Fabric Interconnects including 40Gb Ethernet and 16 Gb Fibre Channel support
  • Support for M-Series cartridges with Intel Xeon E3 v4 Processors
  • Support for a second chassis in a UCS Mini configuration
  • Support for NVIDIA M6 (B-Series) and M60 (C-Series) GPUs
  • Support for the Magma PCIe Expansion Chassis with K1, K2, K40 and K80 GPUs
  • Support for PCIe Based Storage Accelerators
  • Support for an Intel Crypto Card when used with MITG software
  • And much more


This allows UCS customers to simplify their UCS environment with support for UCS Mini, M-Series, and classic UCS platforms in the same UCS Manager release. In addition, it provides support for the latest Fabric Interconnect technology, allowing 40 Gb Ethernet and 16 Gb Fibre Channel to increase networking speed in the data center. Finally, multiple new hardware options allow customers to use their environments for additional workloads.


There are additional new features, not all of which are noted here. Please be aware that, as we have been communicating in the past, this is the first major release to drop support for older UCS hardware such as first-generation servers and Fabric Interconnects. For the most detailed information, please see the UCS Manager 3.1(1) Release Notes.


HTML 5 User Interface

UCS Manager first released an HTML 5 user interface with UCS Manager 3.0(2) for UCS Mini. UCS Manager 3.1(1) brings that user interface to all UCS Manager 3.1 supported platforms. The look and feel of the interface is almost the same as the Java interface, making the transition easy for existing customers. While the UCS Manager and UCS KVM Manager interfaces are both HTML 5 based, launching the KVM console itself still requires Java. The Java clients for UCS Manager and the UCS KVM Manager also remain available.


On Next Reboot Maintenance Policy

Another customer-requested enhancement is a new UCS maintenance policy called On Next Reboot. In the past, UCS Manager has supported User Acknowledgement, Scheduled, and Immediate maintenance policies. The On Next Reboot policy allows the UCS Manager server administrator to stage a change that requires a server reboot, such as a firmware update, without having to perform the user acknowledgement and manually trigger the reboot. Instead, UCS Manager waits until the server OS or hypervisor reboots. That reboot would be triggered externally, such as by an operating system administrator during their maintenance activities. When the OS reboots, the staged UCS changes, such as the firmware update, take effect.
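A policy like this can be staged through the UCS XML API with a configConfMo request. Here is a sketch of building the payload; lsmaintMaintPolicy is the maintenance policy class in the UCS object model, but treat the exact attribute literal for On Next Reboot as an assumption to verify against your UCS Manager version:

```python
import xml.etree.ElementTree as ET

def build_maint_policy_xml(cookie, name):
    """Build a configConfMo request creating a maintenance policy whose
    disruptive changes wait for the next host-initiated reboot."""
    dn = f"org-root/maint-{name}"
    conf = ET.Element("configConfMo", cookie=cookie, dn=dn, inHierarchical="false")
    in_config = ET.SubElement(conf, "inConfig")
    ET.SubElement(in_config, "lsmaintMaintPolicy", dn=dn, name=name,
                  uptimeDisr="on-next-boot")  # assumed literal for On Next Reboot
    return ET.tostring(conf)

print(build_maint_policy_xml("<session-cookie>", "wait-for-os").decode())
```

The request would be POSTed to the same endpoint as the login call, carrying the session cookie from aaaLogin.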


Additional Hardware Support

Among the many other things supported in UCS Manager 3.1(1) is a lot of additional hardware. This includes support for a number of server accessories including additional GPU and flash storage devices. More information is available in the announcement blog.


Release Availability


UCS Manager 3.1(1e) is now available for download on Cisco’s website.


Key release information for UCS Manager 3.1(1) includes:

UCS Manager 3.1 Release Notes

UCS Manager User Guides

UCS Manager Release Bundle Contents


Finally, if you are at Cisco Live in Berlin, please come by the UCS demo to get more information.



Improve your UCS Management at scale


UCS Central Dashboard.jpg

In December 2015, Cisco released UCS Central Software 1.4 Release. It is available for download on Cisco’s website. There are many enhancements to this release of the manager of UCS Managers.


There are 4 key areas of improvements in this release. They include:

  • Enhancements to the HTML 5 user interface, including making it the default user interface
  • Support for new hardware including the UCS 6332 Fabric Interconnects
  • Support for Cisco Smart Licensing, including a per server licensing model
  • Many policy enhancements, including centralized configuration of Fabric Interconnects and support for many additional policies


All of this can be done while managing 10,000 UCS servers from a single IP address, giving you policy-based management at scale. As always, additional details on the features and other release details are available in the UCS Central 1.4 Release Notes.


These enhancements provide customers with an enhanced user experience that was designed to better manage large numbers of servers at scale. It also supports the latest hardware options and UCS Fabric Interconnects allowing customers to get the highest levels of performance from their environment.



HTML5 User Interface Improvements


UCS Central 1.3 introduced a new search- and task-based interface. We planned to continue development beyond that first release, and great customer feedback drove additional enhancements and improvements that were implemented in UCS Central 1.4. The result is that the HTML 5 user interface is now the default interface for UCS Central. The older Flash-based user interface is still available, but it is being deprecated and did not get any of the new features introduced in UCS Central 1.4.


Enhancements and new features in the UCS Central 1.4 HTML 5 User Interface include:

  • Support for vNIC and vHBA templates in addition to LAN and SAN connectivity policies, addressing a significant restriction of the HTML 5 interface in UCS Central 1.3
  • Many new and improved dashboard widgets, including a new Getting Started widget and a new Inventory Status widget
  • A KVM Unified Launcher and a KVM User role
  • Improved table export capabilities
  • Improved management of multiple VLAN permissions
  • Visibility of UCS Manager FSM information in the UCS Central interface
  • And much more


These user interface enhancements will make it easier for both existing and new users to manage their UCS multi-domain environments at scale.



New Hardware Support

UCS Central 1.4 was pre-enabled to support UCS Manager 3.1(1), released in January 2016. That includes support for the new third-generation Fabric Interconnect infrastructure, including UCS 6332 Fabric Interconnects, 2304 IO Modules, additional VIC adapters, and more. This allows users to adopt the latest enhancements and get the highest levels of performance in their environment, enabling additional workloads.


Cisco Smart Licensing

Since UCS Central 1.1, UCS Central has been licensed on a per UCS domain basis. Starting in UCS Central 1.4, customers have the option to license UCS Central 1.4 through either a per domain license or through per server licensing using Cisco Smart Licensing. For customers who want to move to per server licensing, there is a method to convert your per domain licenses to per server licenses through the Cisco Smart Licensing portal. This allows customers to select the optimal licensing model for their environment.


Policy and Configuration Enhancements

The fourth major area of improvements in UCS Central 1.4 is around policy and configuration enhancements. This includes the ability to configure Fabric Interconnect ports and related policies from UCS Central 1.4. In addition, there is support for many additional policies including policies such as equipment tab policies, adapter policies, advanced host firmware packs, vNIC policies, PVLAN, and improved storage profile support.


Finally, there are a number of new features to address customer requests. These include Smart Call Home support, SNMP alerts, RADIUS and TACACS support, direct attach storage support, and pre-enabled support for the UCS 6332 Fabric Interconnects. This allows customers who use any of these features in UCS Manager to fully transition to UCS Central in a multi-domain environment.


While this is a good summary of the new features in UCS Central 1.4, there are many more features and details. One of the great places to check them out is the UCS Central 1.4 Release Notes.


UCS Central 1.4 Release Availability

The release is now available for download on Cisco’s website. For existing customers, upgrades are simple: download the ISO image, reboot the UCS Central virtual machine with the downloaded ISO image, select upgrade, and a few minutes later the upgrade is finished.


Finally, if you want to learn more about UCS Central, you can find great UCS Central labs on Cisco’s dCloud environment.


We think that UCS Central 1.4 is a great release for existing UCS Central customers as well as for customers who are looking to add multi-domain management to their existing UCS environment.


Finally, if you are at Cisco Live in Berlin in February, please come by the Cisco booth to get more details on UCS.


UCS Performance Manager 2.0 is now available for download


Built with technology from Zenoss, Cisco UCS Performance Manager (UCSPM) is a performance monitoring and capacity planning tool custom built for UCS administrators. UCSPM provides visibility into UCS hardware, LAN/SAN switching, storage arrays, hypervisors, and server operating systems, all from a single console.

UCS Performance Manager 2.0 features a new underlying platform built for high availability and scalability. This new release allows deployment of multiple virtual appliances that can be pooled together, with the ability to scale the UCS Performance Manager application across these pooled resources.


New Software Features include:

  • Central Authentication using LDAP
  • HTML5 User Interface
  • Scalability to 2500 UCS Servers
  • Configurable data collection interval (default = 5 minutes)
  • Northbound JSON API for integration and automation
  • Capacity forecasting (forecast and alert on future thresholds)
  • Dependency Views (pivot on a resource and identify relevant dependents/dependencies)
  • Support for UCS Central and UCS Mini
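The capacity-forecasting feature projects a metric forward and alerts before it crosses a threshold. The underlying idea can be sketched with a simple least-squares fit; this is a conceptual illustration only, not UCS Performance Manager's actual algorithm:

```python
def days_until_threshold(samples, threshold):
    """Fit a line to daily utilization samples and estimate how many days
    from the last sample the threshold will be crossed.
    Returns None if the trend is flat or falling."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    denom = sum((x - mean_x) ** 2 for x in xs)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) / denom
    if slope <= 0:
        return None
    intercept = mean_y - slope * mean_x
    return (threshold - intercept) / slope - (n - 1)  # days from the last sample

# Disk usage growing ~2% per day, alert threshold at 90%:
print(days_until_threshold([70, 72, 74, 76, 78], 90))  # → 6.0
```

A monitoring tool would run a projection like this per resource and raise an alert when the estimated crossing falls inside a warning window.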


UCS Performance Manager 2.0 is a free upgrade for existing customers (UCSPM 1.1.x license files are supported in UCSPM 2.0). Existing performance, modeling, and event data can be migrated from UCSPM 1.x deployments to a UCSPM 2.0 instance.


UCSPM 2.0 contains a built-in 30-day eval; download it from Cisco’s website today.


For all related downloads (documentation, software, etc.), see this Communities document: UCS Performance Manager 2.0

With the landscape of desktop and application virtualization constantly changing, the new Cisco UCS M-Series Modular servers offer a low-cost, small-footprint solution for high-density XenApp workloads.

The UCS M-Series servers are composable infrastructure that disaggregates storage and networking from the CPU and memory, allowing workloads to be optimally matched to resources.

Join Cisco’s desktop virtualization technical marketing team on Thursday, December 10 at 8:00 PST/11:00 EST for a BrightTALK webinar where we’ll talk about how you can support 960 XenApp users in a 2RU configuration with excellent performance.



Register now

As we all know, hyper-converged storage is a software-defined approach that is either embedded in a hypervisor or based on a controller virtual appliance. The benefit of this type of approach to storage management is that it combines compute, storage, networking, and virtualization in one managed system.


In this new release, VMware Virtual SAN 6.0 introduces support for an all-flash architecture. With support for both hybrid and all-flash configurations, this solution delivers enterprise-level scale and performance. Scalability is doubled, with up to 64 nodes per cluster and support for up to 200 virtual machines per host. VMware Virtual SAN 6.0 is ready to meet the performance demands of just about any virtualized application by delivering consistent performance with sub-millisecond latencies in both hybrid and all-flash architectures.
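Those two limits compound, and a quick back-of-the-envelope calculation shows the theoretical ceiling they imply for a single cluster (actual attainable counts also depend on other configuration maximums):

```python
MAX_NODES_PER_CLUSTER = 64  # Virtual SAN 6.0 cluster limit
MAX_VMS_PER_HOST = 200      # Virtual SAN 6.0 per-host limit

max_vms_per_cluster = MAX_NODES_PER_CLUSTER * MAX_VMS_PER_HOST
print(max_vms_per_cluster)  # → 12800
```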


This technology is appealing to data center managers and administrators because it provides superior performance and efficient manageability. A key factor in this solution is a distributed storage architecture with direct attached storage (DAS) from an individual physical Cisco Unified Computing System (Cisco UCS) Rack Server. This gives data center staff greater control in provisioning storage in a virtualized environment from a single pane of management from Cisco UCS Manager.


Data center managers also have the option to use automated workflow-based provisioning with Cisco UCS Director. Cisco UCS Director improves consistency and efficiency while reducing delivery time from weeks to minutes. It accomplishes this by replacing time-consuming manual provisioning and de-provisioning of data center resources with automated workflows. As workload requirements increase, data center staff can scale the deployment out horizontally by adding more capacity to existing hyper-converged storage nodes, or they can add more nodes. They can also use a combination of existing and new nodes.


Cisco UCS C240 M4 Rack Server with VMware Virtual SAN 6.0 Architecture




Here are some key points of note for this reference architecture:


  • For this deployment, Cisco UCS firmware version 2.2 and later supports the direct connection of a Cisco UCS C-Series Rack Server to a Cisco UCS Fabric Interconnect. This allows Cisco UCS B-Series Blade Servers and Cisco UCS C-Series Rack Servers to be managed under the same Cisco UCS domain and within a single pane of management.
  • Cisco UCS Director can help orchestrate and deploy VMware Virtual SAN with automated workflow customized for VMware Virtual SAN deployment on Cisco UCS hardware.
  • This solution is scalable by adding more disks to existing Cisco UCS C240 M4 servers or by adding more servers. Here is a complete list of Cisco UCS Rack Servers certified for VMware Virtual SAN.


The reference architecture shown above was built using the Cisco UCS VMware VSAN Ready Node. Cisco tested VMware Horizon 6 with View linked-clone virtual desktops with LoginVSI 4.1.3 to measure the solution’s performance.


VMware vRealize Operations for Horizon provides the capability to monitor and manage the health, capacity, and performance of the View environment. The VMware Virtual SAN Observer captures performance statistics and provides access to live measurements of storage resource utilization.

Cisco UCS with VMware Virtual SAN running VMware Horizon 6 with View Deployed Desktops



Cisco UCS and VMware Virtual SAN Network Configuration Best Practices


Here are a few recommended best practices:


  • The VMware Virtual SAN environment requires that multicast be enabled for virtual storage area network (VSAN) traffic. To achieve this, a multicast policy must be created in UCS Manager.


  • As a best practice recommendation, an IGMP snooping querier should be defined on the IP subnet that carries VMware Virtual SAN traffic.
  • The recommended configuration steps for Nexus 1000V with VMware Virtual SAN can be found here.
  • Cisco UCS C220/C240 M4 servers have minimum software version requirements for VSAN 6.0; see the white paper referenced at the end of this post for details.
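The multicast policy itself can also be created programmatically through the UCS XML API rather than the GUI. A sketch of building that request; the class and attribute names follow the UCS object model as commonly documented, but verify them, and the policy DN format, against your UCS Manager version:

```python
import xml.etree.ElementTree as ET

def build_multicast_policy_xml(cookie, name, querier_ip):
    """configConfMo request for a multicast policy with IGMP snooping and
    a querier enabled (attribute names assumed from the UCS object model)."""
    dn = f"org-root/mc-policy-{name}"  # assumed DN format for multicast policies
    conf = ET.Element("configConfMo", cookie=cookie, dn=dn, inHierarchical="false")
    in_config = ET.SubElement(conf, "inConfig")
    ET.SubElement(in_config, "fabricMulticastPolicy", dn=dn, name=name,
                  snoopingState="enabled", querierState="enabled",
                  querierIpAddr=querier_ip)
    return ET.tostring(conf)

print(build_multicast_policy_xml("<session-cookie>", "vsan-mcast", "192.0.2.1").decode())
```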


Test Results and Key Takeaways:


For the reference architecture, Cisco performed five different tests with various scalability capabilities for up to 1000 VMware Horizon 6 with View deployed linked-clone desktops. The tests were based on real-world scenarios, user workloads, and infrastructure system configurations. Highlights of the test results include: 


  • This solution successfully achieved linear scalability from 500 desktops on four nodes to 1,000 desktops on eight nodes, with VMware Virtual SAN datastore latency under 15 ms according to VSAN Observer performance data.
  • This solution delivered consistent end-user performance for practical workloads. The test results showed an average of less than 15 ms latency with standard office applications in the various failure scenarios measured during the study, including SSD failure, HDD failure, and node failure.
  • Disk group design and sizing are key factors for optimal performance and for minimizing the impact of failures.
  • The solution also provides proven resiliency and availability, with high application uptimes.
  • IT efficiency is improved with faster desktop operations throughout this deployment.


To learn more about the complete system configuration, full test results, and recommended best practices, download this white paper: Cisco Unified Computing System with VMware Horizon 6 with View and Virtual SAN.



We would love to hear your thoughts on this article. Feel free to post your comments below as well as share the article within your social networks.