Since my background is in traditional networking, it is always interesting to talk with clients about SAN technologies: both where they have been and where they are going. We often discuss mixing native Fibre Channel (FC) with Fibre Channel over Ethernet (FCoE) and the future of end-to-end FCoE. Certainly 16Gb FC offers a performance increase over existing 10Gb FCoE, but will that hold true in the future? Does 16Gb FC warrant moving away from an existing FCoE infrastructure, or do we press on, buy into the future of Ethernet, and keep pushing the limits of FCoE?

Cisco’s MDS Family

Before jumping into that question, we need to take a look at some hardware updates. Cisco recently introduced its new MDS lineup, consisting of the MDS 9700 Multilayer Director and the MDS 9250i Multiservice Fabric Switch. Both seem to more closely resemble the Nexus family than the traditional Catalyst family, which gives hope for some additional future scalability.

MDS 9710

The MDS 9710 offers configurable redundancy: 1+1 supervisors, grid-redundant power, and N+1 fabric modules. One nice feature is the half-width supervisor slots, which reduce the number of full-width slots needed. The new supervisors also quadruple the RAM and CPU resources compared to the existing MDS 9500 supervisors. The 9710 supports multiple protocols, including FC and FCoE, though FCoE will come at a future date, once support for terminating FCoE on the line cards is released.

Along with the 9710 chassis and the Supervisor-1 module, Cisco introduced a 48-port autosensing 16Gb FC line card. Each port autosenses to 2/4/8Gb FC or 4/8/16Gb FC based on the type of SFP inserted and the link speed. It is also important to note that the card is non-blocking, offering 48 line-rate 16Gb FC ports with no oversubscription. I think it is safe to say that the new MDS 9710 was built for today's data centers with the future data center in mind.

MDS 9250i

The MDS 9250i is an impressive small-form-factor switch. It can be configured with up to 40 16Gb FC ports, 8 10Gb FCoE ports, and 2 1/10Gb IP storage services ports in a 2RU footprint. It offers a 2+1 hot-swappable redundant power configuration, as well as hot-swappable fan trays.

The 9250i also supports multiple protocols, including FC, FCoE, FCIP, and iSCSI, affording SAN engineers the ability to connect multiprotocol arrays. It also offers advanced traffic management and robust security, along with compression capabilities for storage replication over FCIP.

FC vs. FCoE

Now that we have the hardware refresh out of the way, let's get into the conversation that interests most people: How does 16Gb FC compare to 10Gb FCoE, and what do we think the future holds for both protocols and their scalability? Well, the Fibre Channel Industry Association (FCIA) ran a bandwidth test. Check out the results in Table 1.

Table 1: FCIA Bandwidth Test

Speed        Clocking (Gbps)   Encoding (Data/Sent)   Data Rate (MBps)
8GFC         8.500             8b/10b                 1600
10GFC        10.5              64b/66b                2400
10G FCoE     10.3              64b/66b                2400
16GFC        14.025            64b/66b                3200
32GFC        28.050            64b/66b                6400
40G FCoE     41.225            64b/66b                9600
100G FCoE    103.125           64b/66b                24000
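
For readers who want to see where the data-rate column comes from, it falls out of the clock rate and the encoding overhead. Below is a minimal Python sketch of that arithmetic; the computed figures land slightly above the table because FCIA publishes rounded, nominal throughput numbers.

```python
# Back-of-the-envelope math behind Table 1: usable payload = line clock x encoding efficiency.
# Results come out a bit higher than the published values (e.g. ~1700 vs 1600 MBps full duplex
# for 8GFC) because the FCIA figures are rounded nominal numbers.

links = [
    # name,        clock (Gbps), data bits, total bits
    ("8GFC",         8.500,   8, 10),
    ("10GFC",       10.5,    64, 66),
    ("10G FCoE",    10.3,    64, 66),
    ("16GFC",       14.025,  64, 66),
    ("32GFC",       28.050,  64, 66),
    ("40G FCoE",    41.225,  64, 66),
    ("100G FCoE",  103.125,  64, 66),
]

for name, clock_gbps, data_bits, total_bits in links:
    payload_gbps = clock_gbps * data_bits / total_bits   # strip 8b/10b or 64b/66b overhead
    mbps_per_direction = payload_gbps * 1000 / 8          # Gbit/s -> MByte/s
    print(f"{name:>9}: ~{mbps_per_direction:6.0f} MBps per direction, "
          f"~{mbps_per_direction * 2:6.0f} MBps full duplex")
```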

In addition to the table, it is worth looking at when the various FC speeds were introduced, with 16Gb being the latest. It was announced back in 2011, and FC seems to be on a two-to-three-year cycle: 32Gb FC has been announced for 2014, which suggests availability somewhere in 2015-2016. FCoE became a standard in 2009 and started gaining acceptance in the 2010-2011 timeframe, though initially with a single-hop maximum that introduced design constraints. Multi-hop FCoE is available today, but each manufacturer has limitations on the product sets, number of hops, and distances it will support. Native FC, by contrast, can span numerous hops and long distances.

Many of our clients have also wondered whether FCoE can offer the same lossless behavior as FC when running over traditional Ethernet. This is where Data Center Bridging (DCB) comes into play, and it is essentially a blog post in itself; here we'll just cover at a high level what it introduced. DCB was created to eliminate loss due to queue overflow in traditional Ethernet. Its key mechanism, which Cisco implements as Priority Flow Control (PFC), applies a pause to specific traffic classes. Historically, those classes would get tossed into queues and then, depending on configuration, dropped or sent at specific ratios. With PFC, other traffic is queued while the FC sender is paused until the congestion clears, at which point the sender starts sending FC traffic again and no FC frames are dropped. In addition to PFC, Cisco also took its Central Arbiter concept from the MDS and implemented it in the Nexus. The Central Arbiter is essentially a credit-based buffering mechanism that provides the quality of service needed to keep FCoE lossless.
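
To make the pause idea concrete, here is a small, purely illustrative Python sketch of per-class queuing with a pause threshold for the lossless class. The class names, queue depth, and threshold are made up for illustration; this is a conceptual toy, not how NX-OS or the Central Arbiter is actually implemented.

```python
# Toy model of priority flow control: the lossless "fc" class is never dropped because the
# upstream sender is paused before its queue can overflow, while best-effort "lan" traffic
# is simply tail-dropped when its queue is full. All names and numbers are hypothetical.
from collections import deque

QUEUE_DEPTH = 8        # frames each per-class queue can hold
PAUSE_THRESHOLD = 6    # lossless queue depth at which we signal PAUSE upstream

queues = {"fc": deque(), "lan": deque()}
paused = {"fc": False}  # only the lossless class participates in PFC

def receive(traffic_class, frame):
    """Frame arriving from the upstream sender."""
    q = queues[traffic_class]
    if traffic_class in paused:
        q.append(frame)  # lossless class: always accepted, the sender is throttled instead
        if len(q) >= PAUSE_THRESHOLD and not paused[traffic_class]:
            paused[traffic_class] = True
            print(f"PFC PAUSE sent upstream for class '{traffic_class}'")
    elif len(q) < QUEUE_DEPTH:
        q.append(frame)  # best-effort class: accepted while there is room
    else:
        print(f"dropped a '{traffic_class}' frame (queue full)")

def transmit(traffic_class):
    """Drain one frame toward the egress port, resuming the sender if pressure drops."""
    q = queues[traffic_class]
    if not q:
        return None
    frame = q.popleft()
    if paused.get(traffic_class) and len(q) < PAUSE_THRESHOLD:
        paused[traffic_class] = False
        print(f"PFC RESUME sent upstream for class '{traffic_class}'")
    return frame

# Example: a burst of FC frames triggers a pause instead of drops.
for i in range(7):
    receive("fc", f"fc-frame-{i}")
transmit("fc")
```

In a real DCB deployment the pause is signaled per priority (IEEE 802.1Qbb) and the thresholds are derived from available buffer headroom and link length, but the core idea is the same: hold the sender off rather than drop the frame.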

The benefits are pretty clear for FCoE, especially for organizations that have already invested in Cisco FCoE infrastructure. The Nexus 5500 series supports 40Gb today using its QSFP ports, which essentially break out into four 10Gb connections that support DCB and FCoE. The Nexus 6000 family, however, provides native 40Gb FCoE support along with 10Gb FCoE support.

I see compelling reasons for FCoE when everything is considered: cabling, port counts, management points, management domains, and even siloed environments where cross-functionality between LAN and SAN teams is not always great.

Final Thoughts

There is no doubt that FC is here for the long term, as most of the enterprise organizations we work with have too much invested in it to warrant a change. At the same time, organizations are leveraging converged infrastructure to reduce complexity and save money. As bandwidth capabilities grow, so do costs. Continuing to purchase separate LAN uplinks/downlinks and SAN uplinks/downlinks will become a thing of the past, and we will rely on proven technology and protocols to control a unified fabric. Imagine if Cisco's Unified Computing System (UCS) deploys the next version of its Fabric Interconnects based on the Nexus 6K family instead of the 5K family. We would then potentially be able to push 40Gb FCoE to the chassis, 40Gb Ethernet to the LAN, and maybe someday 40 or 100Gb FCoE to a storage array.
