For those of you who have not tested this feature yet, I have to say it is pretty wonderful. However, I have also run into some issues with it. The quick and somewhat obvious gotcha: it only works if both the server and the client support SMB3.
In case you don’t know what it does, here is a brief and dumbed-down version of Multi-Path I/O: it allows multiple network cards to handle the SMB3 connections. By spreading the load across multiple network cards, it effectively increases both bandwidth and reliability. This is not the same as NIC teaming: each port has its own IP on the network, and it only works with SMB3 (or any other service that supports it).
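To make the load-spreading idea concrete, here is a toy sketch in Python. It is not actual SMB3 code and the chunk/channel counts are made-up numbers; it just shows how one logical transfer can be dealt out round-robin across several connections, one per NIC, so that each link carries a share of the traffic.

```python
def spread_chunks(total_chunks, channels):
    """Assign chunk indices to channels round-robin, the way a simple
    multichannel-style scheduler would spread one transfer over
    several NICs."""
    load = {c: [] for c in range(channels)}
    for chunk in range(total_chunks):
        load[chunk % channels].append(chunk)
    return load

# Hypothetical transfer: 100 chunks over 5 channels (one per NIC).
load = spread_chunks(total_chunks=100, channels=5)
for channel, chunks in load.items():
    print(f"channel {channel}: {len(chunks)} chunks")
```

Each of the 5 channels ends up carrying 20 of the 100 chunks, so with equal link speeds the aggregate bandwidth is roughly five times a single link, which is exactly what this feature is supposed to buy you.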
The issue I had was that even though there were five 1 Gb NICs on the server, the aggregate bandwidth for transfers was topping out at about 1 Gb/s. One NIC was a built-in Atheros and the other four were ports on a single Intel PCI-X 1000 Pro. I could disable all but one NIC and get the same bandwidth as with all five running. It was distributing the load: about 190 Mb/s per NIC when all five were online, or about 950 Mb/s for just one. Unfortunately the Intel card did not have newer drivers and would only work with the built-in Windows drivers, but replacing it with a PCI-Express card did let me break the 1 Gb/s barrier, reaching 1.5 Gb/s with only two NICs. I will update this when I get more NICs installed. I suspect the issue was either the driver or some limitation of the PCI-X card.
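The numbers above are worth a quick back-of-the-envelope check. Five NICs at roughly 190 Mb/s each sum to about the same 950 Mb/s I saw on a single NIC alone, which is what points the finger at a shared bottleneck upstream of the links (the driver or the PCI-X bus) rather than the NICs themselves:

```python
# Observed figures from the transfers described above.
per_nic_mbps = 190      # per-NIC rate with all 5 NICs online
nic_count = 5
single_nic_mbps = 950   # rate with only 1 NIC enabled

aggregate_mbps = per_nic_mbps * nic_count
print(f"aggregate with {nic_count} NICs: {aggregate_mbps} Mb/s")
print(f"single NIC alone:       {single_nic_mbps} Mb/s")

# The two figures match (5 * 190 = 950), so the cap sits somewhere
# shared by all five links, not in the links themselves.
```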
This is a good result: