Sunday, July 28, 2013

Jumbo Frame MTU on vSphere Software iSCSI Adapters

In the book "Storage Implementation in vSphere 5.0" (SIIV5) from VMware Press, I ran across a really cool piece of information regarding the MTU on port groups bound to Software iSCSI Adapters in vSphere environments. It is a topic I have discussed with a handful of people before, but in a slightly different sense. Previously the conversation focused on whether to set Jumbo Frames (when using the iSCSI software adapter) on a 1Gb network or to leave the MTU at the standard 1500. I have generally heard that Jumbo Frames do not improve network performance on a 1Gb connection, and that if they do, the improvement is negligible and not worth the effort. What I found while reading SIIV5 is that those of us having this conversation were possibly attributing the performance gain to the wrong part of the environment: where we were looking for networking performance improvements, SIIV5 actually pinpoints the gains at the ESXi host CPU level.

As a review for some people, and maybe new information for others, here are a few bullet points summarizing the different categories of iSCSI initiators you can use with ESXi. This information is key to the discussion:
  1. Independent Hardware Initiator - This initiator operates on its own, without any need to interface with ESXi. It offloads all iSCSI and network processing from the host onto the controller, and it can be managed through its firmware and, in some cases, through the vSphere UI.
  2. Dependent Hardware Initiator - This initiator can also offload iSCSI and network processing from the host onto the controller, but it depends on the ESXi host for the network stack, configuration of the initiator, and management through the command line or the vSphere UI. Because it has dependencies on the host, its offload capability is not an assumed function but is made possible through the use of a TCP Offload Engine to move the processing of iSCSI and networking to the controller (see the Wiki article about TOE). Requires that a vmkernel port group be created and bound to a vmnic.
  3. Software Initiator - ESXi provides the software initiator as a component of the vmkernel. It requires ESXi to operate and can only be configured from the command line or the vSphere UI. Requires that a vmkernel port group be created and bound to a vmnic (see the sketch after this list for a quick way to check those bindings and their MTU).
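Since both the dependent and software initiators hinge on those vmkernel port group bindings, it can be handy to see what is bound where and what MTU each vmkernel port is currently using. Here is a rough, read-only sketch using the pyVmomi Python library; the vCenter hostname and credentials are placeholders, and this is my own illustration rather than anything from SIIV5:

```python
# Minimal read-only pyVmomi sketch: list each host's vmkernel NICs,
# their port groups, and their current MTU. Hostname/credentials are
# placeholders for your own environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for vnic in host.config.network.vnic:
            # vnic.spec.mtu is the MTU of the vmkernel port (e.g. 1500 or 9000)
            print(f"{host.name}: {vnic.device} on '{vnic.portgroup}' "
                  f"MTU={vnic.spec.mtu}")
finally:
    Disconnect(si)
```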
It goes without saying that because the Software iSCSI adapter has no dedicated hardware behind it, unlike an Independent or Dependent iSCSI adapter, it lacks the offloading capabilities of the hardware initiators. This can put more stress on the ESXi host's CPU, which has to handle every datagram, fragment payloads, and reassemble them. Because there is no dedicated hardware controller backing it, VMware in SIIV5 recommends always setting the MTU on Software iSCSI port groups to 9000 (Jumbo Frames). This improves performance by minimizing the load on the ESXi host: the CPU no longer has to process as many datagrams because it is working with larger network payloads.
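To put some rough numbers behind that CPU argument, consider how many Ethernet frames a single I/O gets chopped into. Ignoring TCP options and iSCSI PDU header overhead, a standard 1500-byte MTU frame carries roughly 1460 bytes of TCP payload while a 9000-byte jumbo frame carries roughly 8960, so the back-of-the-envelope sketch below (my own illustration, not from SIIV5) shows a 64 KiB transfer going from about 45 frames down to about 8:

```python
import math

def frames_needed(payload_bytes, mtu, ip_tcp_overhead=40):
    """Rough count of Ethernet frames needed to carry a TCP payload.

    Ignores TCP options, iSCSI PDU headers, and segmentation offload,
    so this is only a back-of-the-envelope illustration.
    """
    usable = mtu - ip_tcp_overhead          # TCP payload bytes per frame
    return math.ceil(payload_bytes / usable)

io_size = 64 * 1024                         # a 64 KiB iSCSI transfer
standard = frames_needed(io_size, 1500)     # ~45 frames
jumbo = frames_needed(io_size, 9000)        # ~8 frames
print(f"MTU 1500 needs {standard} frames; MTU 9000 needs {jumbo} frames")
print(f"Roughly {standard / jumbo:.1f}x more frames for the CPU at 1500")
```

Fewer frames means fewer trips through the host's network and iSCSI code paths for the same amount of data, which is exactly where the CPU savings come from.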
"To compensate for lack of offloading capabilities of the iSCSI SW initiator, enabling Jumbo Frame can significantly I/O throughput." SIIV5 pg.140, "Configuring SW Initiator with Jumbo Frames"
The question about whether or not to use Jumbo Frames on 1Gb networking no longer exists in my mind. When using a Software iSCSI Initiator in vSphere, Jumbo Frames are always the WAY-TO-GO! Now although I do not profess to be a storage or networking guru, I hope to pass on this information so that others who may be wondering about these topics have a good place to start learning!