Inside data centers: Cabling considerations for new FibreChannel links

July 9, 2021 By Ed Cady

Fibre Channel (FC) storage area networks (SANs) and Inter-Switch Link (ISL) interfaces are an important part of modern data-center systems, whether hyperscale or enterprise. An FC SAN is a high-speed network that connects servers and storage devices, and the Inter-Switch Link joins switches and routers and maintains the traffic flow between them.

A benefit of FC is its reliability. It typically runs faster per lane, with lower latency, than Ethernet or SAS interfaces. As a result, it has survived the competition for more than two decades.

Typically, cool or cold-storage applications use hard disk drives (HDDs) and tape libraries that are networked with FC connectivity. Newer IO interfaces and equipment, such as non-volatile memory express (NVMe) solid-state drive (SSD) storage devices, direct data through SAN systems and Fibre Channel links (such as 1x, 4x, or 8x). NVMe is a communications interface and driver.

An updated FC-PI-8 specification for 128GFC will be released soon. It’s expected that the new internal twin-axial cable assemblies will work with SFP 112, QSFP 112, QSFP-DD 112, or OSFP 112 receptacle port connectors, with the cable joined to a PCB connector near the controller chip or possibly directly to the chip packaging.

This would lower the cost of the blade board and relax the performance requirements of the PCB substrate. It would also allow for a slightly longer direct-attach copper (DAC) cable option for Inter-Switch Link applications.
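
To see why moving the channel onto internal twin-axial cable eases what the PCB substrate must deliver, a rough insertion-loss comparison helps. The sketch below uses made-up, illustrative per-inch loss figures (not vendor data for any specific laminate or cable) to compare a long PCB trace against an internal twin-ax run of the same length.

```python
# Illustrative comparison of end-to-end insertion loss: PCB trace vs. internal twin-ax.
# The per-inch loss figures are rough placeholder assumptions near a ~28 GHz Nyquist
# frequency (56.1 GBd PAM4); they are not measured data.

PCB_LOSS_DB_PER_INCH = 1.1      # assumed: mid-grade low-loss laminate
TWINAX_LOSS_DB_PER_INCH = 0.35  # assumed: thin internal twin-axial cable

def channel_loss(length_in, db_per_inch, connector_db=1.5):
    """Total loss for one path: trace/cable loss plus a fixed connector allowance."""
    return length_in * db_per_inch + connector_db

reach_in = 10.0  # assumed distance from the front-panel port to the switch ASIC, in inches
print(f"PCB trace: {channel_loss(reach_in, PCB_LOSS_DB_PER_INCH):.1f} dB")
print(f"Twin-ax  : {channel_loss(reach_in, TWINAX_LOSS_DB_PER_INCH):.1f} dB")
```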

Check out the blue fill area in the QLogic FC image below as an SFP-based example. It represents various twin-axial cable assemblies with different termination options.

 

The next-generation Gen 8 FC SAN switch might have 48 SFP 112 ports plus eight ports in one of these form factors:

  • QSFP 112;
  • QSFP-DD 112; or
  • OSFP 112.

Lenovo FC SAN Gen7 Switch

The update could offer options, such as the ability to use internal twin-axial harnesses or cables. It could also offer a hybrid solution with longer, far-end port cabled links. Any of these features would significantly reduce the baseboard cost.

However, a thermal analysis would be required to validate the internal cable performance. For example, it’s necessary to determine whether an 85° C (185° F) or a 125° C (257° F) rated twin-axial cable and jacketing is needed for each application, and to test it under the actual operating conditions.
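
One way to frame that check is a simple worst-case margin calculation against the cable’s temperature rating. The sketch below is a minimal illustration with assumed temperatures; real numbers would come from measuring the assembly in the actual chassis airflow.

```python
# Minimal sketch of a cable temperature-rating check. All temperatures are
# illustrative assumptions; real values come from thermal measurement or simulation.

CABLE_RATINGS_C = (85.0, 125.0)  # common jacket/insulation temperature ratings

def required_rating(ambient_c, local_rise_c, margin_c=10.0):
    """Return the lowest standard rating that covers the worst-case cable temperature."""
    worst_case = ambient_c + local_rise_c + margin_c
    for rating in CABLE_RATINGS_C:
        if worst_case <= rating:
            return rating, worst_case
    return None, worst_case  # no standard rating covers this case

# Example: 45 C inlet air plus 30 C of self-heating and airflow shadowing (assumed)
rating, worst = required_rating(ambient_c=45.0, local_rise_c=30.0)
print(f"Worst-case cable temperature ~{worst:.0f} C -> needs a {rating:.0f} C rated cable")
```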

A typical layout routes an internal twin-axial cable above and below the baseboard, from the port receptacle to a point either near or on the switch chip. Below is a demo box example, routed with an internal 56.1G PAM4 per-pair twin-axial harness or cable. This setup could work for Gen 8 FC SAN or Inter-Switch Link applications.

 

The FCIA’s speed rate roadmap shows the 128 GFC, with a 56.1G line rate, in development and expected to be released over the next few years. The roadmap lists 2025 as the 256 GFC specification release date but provides no further timeframe. It’s likely the 256 GFC will take additional time to develop and validate.
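
As a quick sanity check on those roadmap numbers: PAM4 carries two bits per symbol, so a 56.1 GBd lane corresponds to roughly 112.2 Gbps of raw signaling. The short sketch below applies the same arithmetic across recent FC generations; the baud rates are approximate, publicly quoted figures rather than normative values.

```python
# Raw per-lane bit rate = baud rate x bits per symbol (NRZ = 1, PAM4 = 2).
# Baud rates are approximate, publicly quoted Fibre Channel figures.

GENERATIONS = {
    "32GFC (Gen 6)":  (28.05, 1),  # NRZ
    "64GFC (Gen 7)":  (28.90, 2),  # PAM4
    "128GFC (Gen 8)": (56.10, 2),  # PAM4, the FC-PI-8 rate discussed above
}

for name, (gbaud, bits) in GENERATIONS.items():
    print(f"{name:>15}: {gbaud:5.2f} GBd x {bits} b/sym = {gbaud * bits:6.2f} Gbps raw per lane")
```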

FCIA speed rate roadmap

The FC Inter-Switch Link roadmap indicates that the 400 GFCoE product, with eight lanes at 26.5625 GBd (PAM4 modulation), is complete. New products are expected shortly. It’s likely the 800 GFCoE, with a possible 53G PAM4 per-lane rate, will be added to the roadmap soon.
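
The same lane arithmetic accounts for the 400 GFCoE figure: eight PAM4 lanes at 26.5625 GBd yield roughly 425 Gbps of raw signaling, which covers a 400G payload plus encoding and FEC overhead. A one-line check:

```python
# Aggregate raw signaling for an 8-lane, 26.5625 GBd PAM4 link (2 bits per symbol).
lanes, gbaud, bits_per_symbol = 8, 26.5625, 2
print(f"Raw aggregate: {lanes * gbaud * bits_per_symbol:.1f} Gbps")  # ~425 Gbps for a 400G payload
```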

Currently, the FCIA’s Fibre Channel over Ethernet roadmap covers the 2020 market for the 400 GFCoE. It’s reasonable to ask whether more is coming or if this is the end of the roadmap for this application.

Newer Fibre Channel QSFP and QSFP-DD pluggable optical modules might use the new Sumitomo Electric AirEB multi-fiber connector instead of other, more costly multi-fiber push-on (MPO) types. More recent FC SFP pluggable optical modules might also use CS or SN connectors rather than the larger LC types.

Final thoughts
The Fibre Channel SAN market has experienced only modest growth, at about 16 million ports per year, over the last couple of years. It caters to a small market segment but holds steady as a critical interconnect product family for some suppliers.

Several storage-centric and enterprise data centers still rely on the latest generation of FC SAN and related links. However, it’s possible that the newer Gen-Z interface and connectivity, running at 112G PAM4 per lane, could compete for the cool and cold-storage network applications in the future.

What’s clear is that the 256 GFC links must be developed and fully functional by 2023 or sooner. This is partially being dictated by the T11 and FCIA organizations.

It’s worth asking: will the 128 or 256 GFC products become part of a data center’s switch radix topology? Will SFP-DD modules, cables, and connectors be used for higher-density port plate configurations? Or, perhaps, will QSFP-DD be used for eight-legged breakout external DAC cables or Inter-Switch Links?

As QSFP-DD and OSFP are considered for eight-lane trunk link applications, will a new 16-lane OSFP XD be required for next-generation FC applications? Will it work well under higher-heat conditions at 106G and 112G PAM4? And what about at 212G and 224G PAM4?

It would seem that the FC-PI-8 active-copper cables are useful for those longer-reach copper applications. It will be interesting to find out whether new system IO accelerators eventually replace Fibre Channel SAN or ISL applications in cold-storage data-center architecture.
