Saturday, January 29, 2011

Using a SAN Fibre Channel network for TCP/IP, possible?

Hi,

I would like to know whether it is possible to also use the FC switch (that is used for a SAN) for TCP/IP traffic.

We have servers (virtual machines) that need a fast network for copying large amounts of data. It would be better to use the existing FC network than the slower 1 Gb Ethernet network.

Is this possible?
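
For rough context on what the jump from 1 GbE to Fibre Channel link rates would buy for bulk copies, here is a small back-of-envelope sketch in Python; the 500 GB data size and 80% link efficiency are assumptions for illustration, not figures from the question.

    # Back-of-envelope: time to copy a fixed amount of data at various link speeds.
    # Illustrative only - real throughput is lower due to protocol overhead,
    # disk speed and CPU load.

    def copy_time_hours(data_gb, link_gbit_per_s, efficiency=0.8):
        """Rough transfer time in hours at a given line rate and assumed efficiency."""
        data_bits = data_gb * 8 * 1e9
        return data_bits / (link_gbit_per_s * 1e9 * efficiency) / 3600

    for name, gbit in [("1 GbE", 1), ("4 Gb FC", 4), ("8 Gb FC", 8), ("10 GbE", 10)]:
        print(f"{name:>8}: {copy_time_hours(500, gbit):.1f} h for 500 GB")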

  • Edit - I stand corrected, well spotted pfo.

    No, you can tunnel FC traffic over IP (FCIP), but not the other way around, sorry.

    Can you not just force the specific VMs onto the same host? Nothing is as quick as a vSwitch (a rough way to measure this is sketched after this answer).

    : This is true only if there is a single hypervisor, or when the connection is between two VMs on the same hypervisor.
    From Chopper3
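
To check whether two VMs actually see vSwitch-class throughput when pinned to the same host, a minimal TCP probe like the one below can be run between them and then repeated with the VMs on different hosts. This is a rough sketch only (the port number is arbitrary and a tool like iperf would be the usual choice), not anything from the answer above.

    # Minimal TCP throughput probe to compare network paths, e.g. two VMs on the
    # same host/vSwitch vs. two VMs on different hosts over the physical network.

    import socket
    import sys
    import time

    CHUNK = 1024 * 1024        # 1 MiB send/receive buffer
    PORT = 5001                # arbitrary test port (assumption)

    def serve():
        """Receive data from one client and report the achieved rate."""
        with socket.socket() as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("", PORT))
            srv.listen(1)
            conn, addr = srv.accept()
            total, start = 0, time.time()
            with conn:
                while True:
                    data = conn.recv(CHUNK)
                    if not data:
                        break
                    total += len(data)
            secs = max(time.time() - start, 1e-6)
            print(f"{addr[0]}: {total * 8 / secs / 1e6:.0f} Mbit/s")

    def send(host, seconds=10):
        """Send zero-filled data to the server for a fixed number of seconds."""
        payload = bytes(CHUNK)
        with socket.create_connection((host, PORT)) as c:
            end = time.time() + seconds
            while time.time() < end:
                c.sendall(payload)

    if __name__ == "__main__":
        if len(sys.argv) >= 3 and sys.argv[1] == "send":
            send(sys.argv[2])
        else:
            serve()

Run "python probe.py serve" in one VM and "python probe.py send <other-vm-ip>" in the other, then compare the same-host and cross-host numbers.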
  • That is possible. It's called IPFC, and RFC 2625 specifies it. You need a TCP/IP stack for your HBA. Please share more details - which HBAs and switches, etc.? (A quick way to check whether such an interface shows up is sketched after this answer.)

    Chopper3 : excellent and informative answer, I still think he should try to use the vSwitch but you've taught me something today - bravo :)
    pfo : I just got to deal with a big order of CNA (FCoE) equipment and had to investigate the idea of abusing some FC switches for some serious TCP/IP pain ...
    Chopper3 : I've been very busy with CNAs (into Cisco Nexus's btw) too - loving it but really wasn't aware of the opposite option.
    pfo : Yeah, I'm looking into Nexus 4K and 5Ks for some IBM bladecenter stuff.
    Chopper3 : I'm an HP blade guy and a bit annoyed that they're not doing a 4K interconnect, as their VirtualConnect competes a bit - most of my experience is with 7Ks; I've not really looked at the 5Ks as much.
    : We have the following - HBA: 81Q PCIe FC (HP AK344A); switch: HP StorageWorks 8/20q Fibre Channel Switch (AQ233A). See this PDF for bundle info: http://h18000.www1.hp.com/products/quickspecs/12909_div/12909_div.pdf - I don't see any info about TCP/IP support in the specs. Thanks!
    Chopper3 : Just checked with my HP storage guy, it's not supported sorry.
    From pfo
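
If an HBA driver did expose RFC 2625 IPFC, the FC port would show up as an ordinary network interface and be configured like any NIC. As a hedged, Linux-only sketch (it assumes /sys/class/net exists and that the driver reports one of the Fibre Channel hardware types from linux/if_arp.h), the following lists each interface with its hardware type and reported link speed:

    # List network interfaces with their ARPHRD hardware type and speed, flagging
    # any that report a Fibre Channel type (values taken from linux/if_arp.h).

    import os

    FC_ARPHRD_TYPES = {784: "FCPP", 785: "FCAL", 786: "FCPL", 787: "FCFABRIC"}

    def read(path):
        """Read a sysfs attribute, returning '?' if it is unavailable."""
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError:
            return "?"

    for iface in sorted(os.listdir("/sys/class/net")):
        hw_type = read(f"/sys/class/net/{iface}/type")
        speed = read(f"/sys/class/net/{iface}/speed")    # Mbit/s, if the driver reports it
        label = FC_ARPHRD_TYPES.get(int(hw_type) if hw_type.isdigit() else -1, "other")
        print(f"{iface:<12} type={hw_type:<4} ({label}) speed={speed} Mbit/s")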
  • Yes, it is possible - QLogic (at least) HBAs support IP over F/C, and many switches also support it. However, if you want to push large amounts of data across it you are in contention for bandwidth with the disks, so you may have to keep an eye on total utilisation.

    You may need to get extra licensing for your switch to support IP.

    F/C is possibly the most expensive networking technology in widespread use, so unless you are really stuck with it, you are probably better off using something else. If you have standalone servers then 10Gbit Ethernet or Infiniband is probably cheaper for the bandwidth than fibre channel.

    If you're using blades then you're pretty much stuck with whatever is supported by the blade chassis. If the HBAs in your blades support IP over F/C then you could use it. See if the chassis can be upgraded to support Infiniband.

    pfo : FCoE (Fibre Channel over Ethernet) is also an interesting solution that a lot of people are currently looking into. It seems to me that the original poster is quite budget-limited and that purchasing new Ethernet equipment (which would be the easiest) isn't an option -- this probably rules out 10GbE/IB.
    JamesRyan : Unless you are careful about how you go about it, you can saturate the PCI bus of most machines with a couple of trunked 1 Gb network connections anyway.
    pfo : Exactly, but most servers today provide multiple x4/x8/x16 PCIe 2.0 slots, which I have yet to see saturated with Gigabit Ethernet. Keep in mind that a PCIe 1.0 x1 link is about 2 Gbit/s per direction (4 Gbit/s full duplex), a PCIe 2.0 x1 link is 4 Gbit/s (8 Gbit/s full duplex) and a PCIe 3.0 x1 link is about 8 Gbit/s (16 Gbit/s full duplex). PCIe is a switched fabric - just put those 8-port LACP trunks on different PCIe slots and everything will be fine (rough numbers are sketched below).
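
To make the numbers in the last two comments concrete, here is a small sketch comparing effective per-direction PCIe lane bandwidth against an assumed 8-port Gigabit Ethernet LACP trunk; the figures are nominal line rates after encoding overhead, so real-world results will be lower.

    # Effective PCIe bandwidth per lane and per slot, per direction, compared with
    # an assumed 8 x 1 GbE LACP trunk (nominal rates only).

    PCIE_LANE_GBIT = {
        "PCIe 1.0": 2.5 * 8 / 10,     # 2.5 GT/s with 8b/10b encoding  -> 2.0 Gbit/s
        "PCIe 2.0": 5.0 * 8 / 10,     # 5.0 GT/s with 8b/10b encoding  -> 4.0 Gbit/s
        "PCIe 3.0": 8.0 * 128 / 130,  # 8.0 GT/s with 128b/130b        -> ~7.9 Gbit/s
    }

    TRUNK_GBIT = 8 * 1.0              # assumed 8 x 1 GbE LACP trunk

    for gen, per_lane in PCIE_LANE_GBIT.items():
        for lanes in (1, 4, 8):
            slot = per_lane * lanes
            verdict = "fits" if slot >= TRUNK_GBIT else "bottleneck"
            print(f"{gen} x{lanes}: {slot:5.1f} Gbit/s vs {TRUNK_GBIT:.0f} Gbit/s trunk -> {verdict}")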
