iSCSI vs FCoE – a basic review and load test for Database Workloads

We were invited to visit w/ a client to help them migrate their 11.2 database to newer hardware. Since they were re-platforming, this was a good time for them to re-think their storage platform too.
From a storage perspective, they had already implemented iSCSI in their environment. When they asked what I thought about FCoE, the conversation immediately turned into a “well, let’s go see for ourselves.” The specific platform and storage are neither important nor relevant here, so we’ll keep the innocent parties innocent.

I’m not going to bore you with FCoE and iSCSI essentials, but I’ll start with the basic info we discussed. Here’s a bit of that dialogue before we started testing
(I’ll touch on the key discussion points):

Fibre Channel over Ethernet (FCoE) maps Fibre Channel onto Layer 2 Ethernet, encapsulating Fibre Channel frames (which carry the SCSI traffic) at the data link layer.

FCoE requires a converged network adapter (CNA). A CNA (typically a 10GbE interface) supports both the LAN (TCP/IP) and Fibre Channel stacks, effectively allowing LAN and SAN traffic to share a converged (unified) network link.

If running FCoE end-to-end (all the way to the storage ports), FCoE requires Data Center Bridging (DCB) at Layer 2, with target and initiator on the same Layer 2 segment. DCB is enabled via DCB-capable adapters and switches.

In our test scenarios, we leveraged the existing native FC storage array w/ native FC ports, i.e., no end-to-end FCoE, thus no DCB was required.
As a side note [for those curious], DCB, a set of IEEE 802.1 standard enhancements, provides predictable latency and allows Ethernet’s default behavior of dropping packets under congestion to co-exist with the SAN requirement of no frame loss.
FCoE networks also support Ethernet pass-through switches, whereby a pass-through-capable switch forwards Fibre Channel frames as an upper-layer protocol. These switches are lossless but have no knowledge of the FC stack.

iSCSI requires only a network interface card and encapsulates SCSI commands in TCP/IP (Layer 3).

iSCSI is implemented in hardware (an iSCSI HBA, with or without a TCP offload engine, or TOE) or in software via drivers.

Unlike FCoE, which requires target and initiator on the same Layer 2 segment, iSCSI endpoints can sit on different subnets (this may not always be a good thing, but it provides flexibility). Additionally, iSCSI does not require DCB end-to-end, and it can support longer distances.
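To make the encapsulation difference concrete, here’s a rough back-of-the-envelope sketch. The header sizes are the standard protocol layouts, but the exact FCoE framing bytes vary slightly by implementation, and iSCSI header cost amortizes across frames when a data PDU spans many of them, so treat the numbers as approximate:

```python
# Rough per-frame payload efficiency for iSCSI vs FCoE.
# Header sizes below are assumptions based on standard protocol layouts.

ETH_HDR = 14        # Ethernet header
ETH_FCS = 4         # Ethernet frame check sequence
IP_HDR = 20         # IPv4 header, no options
TCP_HDR = 20        # TCP header, no options
ISCSI_BHS = 48      # iSCSI Basic Header Segment
FC_HDR = 24         # Fibre Channel frame header
FC_CRC = 4          # Fibre Channel CRC
FCOE_ENCAP = 18     # FCoE encapsulation header + SOF/EOF (approximate)

def iscsi_payload(mtu):
    """SCSI data bytes per Ethernet frame over iSCSI (worst case:
    assumes a BHS in every frame; large PDUs amortize this cost)."""
    return mtu - IP_HDR - TCP_HDR - ISCSI_BHS

def iscsi_efficiency(mtu):
    """Payload fraction of the bytes on the wire for one iSCSI frame."""
    wire = ETH_HDR + mtu + ETH_FCS
    return iscsi_payload(mtu) / wire

def fcoe_efficiency():
    """Payload fraction of a full-size FC frame carried over FCoE (roughly)."""
    payload = 2112  # max FC data field
    wire = ETH_HDR + FCOE_ENCAP + FC_HDR + payload + FC_CRC + ETH_FCS
    return payload / wire

# Jumbo frames cut iSCSI's relative header cost substantially:
print(f"iSCSI efficiency @1500 MTU: {iscsi_efficiency(1500):.1%}")
print(f"iSCSI efficiency @9000 MTU: {iscsi_efficiency(9000):.1%}")
print(f"FCoE efficiency (2112B FC frame): {fcoe_efficiency():.1%}")
```

The takeaway from this sketch: with jumbo frames, iSCSI’s encapsulation overhead is in the same ballpark as FCoE’s, which is part of why the “iSCSI is inherently slow” assumption deserves testing rather than acceptance.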

Load tests
In our tests, both FCoE and iSCSI traffic was kept within the Layer 2 network, so there were no additional hops for iSCSI. MTU was set to 9000 (jumbo frames).
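For reference, on a Linux initiator the jumbo-frame setup would look something like the fragment below. The interface name `eth1` and the target IP are placeholders; every NIC and switch port in the path must support MTU 9000 for this to hold end to end:

```shell
# Set jumbo frames on the storage-facing interface (eth1 is a placeholder)
ip link set dev eth1 mtu 9000

# Confirm the setting took effect
ip link show dev eth1

# Verify the path actually passes 9000-byte frames without fragmenting:
# 8972 = 9000 - 20 (IP header) - 8 (ICMP header)
ping -M do -s 8972 -c 3 192.168.10.50
```

If the ping fails with "message too long," something in the path is still at a smaller MTU, and iSCSI will silently fall back to fragmented (slower) behavior.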
We used standard NICs w/ no offload, so this was not a completely apples-to-apples comparison between FCoE and iSCSI from a CPU utilization/overhead perspective; we kept an eye on this.

The client used Swingbench along with their own database workload for the load tests.


  • FCoE and iSCSI had similar throughput and latency numbers at low thread/user session counts; however, as thread/user session counts increased, FCoE maintained relatively low latency and delivered higher throughput.
  • iSCSI consistently showed higher CPU utilization as thread count increased. This is likely because of the absence of a TOE.

My conclusion
FCoE has traditionally been associated with enterprise-grade systems, whereas iSCSI has been relegated to SMB and less critical apps. However, this thinking is not necessarily accurate; there are many optimizations that can make iSCSI highly performant. I have seen many high-end, heavy ERP workloads run atop iSCSI configurations. Because of the extra layers of encapsulation, iSCSI is often perceived as adding latency; add in Layer 3 hops, and you do have more latency.

Based on these tests and tests we have done in the past, for very high workloads or applications w/ low-latency requirements, FCoE seems the better choice. The storage engineer quoted [from a SNIA report] that for anything in the 700 MB/sec to 900 MB/sec per-port range, either iSCSI or FCoE will work, but anything more demanding may require FCoE.
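To put that 700–900 MB/sec figure in context, here’s a quick sanity check on what a single 10GbE port can theoretically carry. The ~90% usable-efficiency factor is my assumption, standing in for protocol and framing overhead:

```python
# Rough throughput ceiling for one 10GbE port,
# to contextualize the quoted 700-900 MB/sec per-port range.

LINE_RATE_GBPS = 10                           # one 10GbE port
RAW_MB_PER_SEC = LINE_RATE_GBPS * 1000 / 8    # 1250 MB/s on the wire
USABLE_FRACTION = 0.90                        # assumed overhead budget

usable = RAW_MB_PER_SEC * USABLE_FRACTION     # ~1125 MB/s
print(f"Raw line rate:  {RAW_MB_PER_SEC:.0f} MB/s")
print(f"Usable (est.):  {usable:.0f} MB/s")
```

The quoted 700–900 MB/sec per port fits comfortably under that ceiling, which is consistent with the “either protocol will work” guidance in that range; it’s above it, when ports start saturating, that the protocol differences matter.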

Interestingly enough, even w/ the performance and CPU utilization difference, the client decided to stay w/ iSCSI because of two key reasons – cost and cost. That is, the cost of the infrastructure to support the load, and the cost of skill-set acquisition for supporting FC (especially FCoE) stacks.