At this week’s Intel Developer Forum (IDF 2016) in San Francisco, new PCIe 3.1 products and technologies are being announced and showcased at the PCI-SIG booth. PCIe 3.1 is a new-generation external cable Link specification developed by the PCI-SIG and based on the MiniSAS HD (SNIA SFF-8644) connector and cable assembly specification. These cables interconnect new PCIe-based Fabric HBA Links and Switches in server and storage network applications, especially in newer datacenter types, tiers and market segments. The four- and eight-lane cable assemblies handle the PCIe signaling rate of 8.0 GT/s per lane, so passive copper cable Links with 32.0 GT/s (x4) and 64.0 GT/s (x8) aggregate rates are now available in 1- to 6-m reaches.
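Those aggregate figures follow directly from the per-lane rate. A quick back-of-the-envelope calculation, using the PCIe 3.x 128b/130b line encoding to estimate usable payload bandwidth per direction:

```python
# Illustrative throughput calculation for PCIe 3.x external cable links.
# PCIe 3.x signals at 8.0 GT/s per lane with 128b/130b encoding.
GT_PER_LANE = 8.0          # raw signaling rate per lane, GT/s
ENCODING = 128 / 130       # 128b/130b line-code efficiency

def link_throughput_gbytes(lanes: int) -> float:
    """Approximate usable bandwidth in GB/s, per direction, for a lane count."""
    gbits = lanes * GT_PER_LANE * ENCODING
    return gbits / 8  # bits -> bytes

for lanes in (4, 8):
    print(f"x{lanes}: {lanes * GT_PER_LANE:.0f} GT/s aggregate, "
          f"~{link_throughput_gbytes(lanes):.2f} GB/s usable")
# x4: 32 GT/s aggregate, ~3.94 GB/s usable
# x8: 64 GT/s aggregate, ~7.88 GB/s usable
```

This ignores protocol overhead (TLP/DLLP framing), so real payload rates are somewhat lower.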
Because the MiniSAS HD cable plug connector is built on a PCB, active copper re-timer and signal-conditioning chips can be mounted on it to achieve longer-reach Link options. For Links of 15 m or much longer, such as 300 m, active optical engine chips are mounted on each plug PCB instead and mate with various MMF or SMF cable options. Another interconnect option is a MiniSAS HD pluggable Module assembly with an outboard optical connector port, such as an MPO or MXC type; passive separable optical cable assemblies plug into this Module once it is installed in the fixed receptacle port connector.
This same connector and a similar cabling family are already used in high volume to support the current INCITS T10 storage standard for SAS 3.0, at 12 Gbps per lane. The rapid buildout of NVMe SSD storage systems that use MiniSAS HD cabling is therefore made easier, because many end-user customers are already familiar with this interconnect type. Datacenters with a large installed base of SAS storage systems sometimes use MiniSAS HD cabling to connect all of their servers and network switches for simplicity and cost reasons.
However, there is an important difference between these two similar cable assemblies: the PCIe MiniSAS HD cable adds two side-band wires, usually 30 AWG, for CMI system management functionality. These wires terminate directly on the PCB at each end and, as individual circuits, do not continue through the connector plugs. Polarizing key features are available with this connector system if needed. The twin-axial raw-cable dielectric can also be selected to minimize cost while still meeting PCIe 3.1 performance margins and reaches; the dielectrics used for 12, 14 and 16 Gbps-per-lane performance cost more unless that cable type is produced in very high industry volume.
Fortunately, practically all MiniSAS HD cable assembly plug PCBs have an EEPROM chip mounted on them. These are smart cable assemblies: the EEPROM provides memory-mapped identification covering Link type, data rate, interoperability, application type, assembly ID number, cable length, wire gauge, cable manufacturer, build date and several other management-interface fields per the SFF-8636 specification. So even if the wrong MiniSAS HD cable assembly is plugged into a port, the smart cable will identify itself so it can be removed and replaced with the correct assembly type or an upgraded revision.
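The smart-cable memory map described above can be sketched as a small field decoder. Note that the byte offsets and field names below are illustrative assumptions for demonstration only; the authoritative layout for vendor name, part number, cable length and the other fields is defined in SFF-8636.

```python
# Sketch: decoding identity fields from a smart-cable management EEPROM image.
# Offsets below are ASSUMED for illustration -- see SFF-8636 for the real map.
ILLUSTRATIVE_MAP = {
    "vendor_name": (148, 16),    # ASCII, space-padded (assumed offset)
    "part_number": (168, 16),    # ASCII, space-padded (assumed offset)
    "cable_length_m": (146, 1),  # single byte, metres (assumed offset)
}

def decode_eeprom(raw: bytes) -> dict:
    """Decode a handful of identity fields from a raw EEPROM page image."""
    out = {}
    for field, (offset, size) in ILLUSTRATIVE_MAP.items():
        chunk = raw[offset:offset + size]
        if size == 1:
            out[field] = chunk[0]  # numeric field
        else:
            out[field] = chunk.decode("ascii", errors="replace").strip()
    return out

# Build a fake 256-byte page image for demonstration.
page = bytearray(256)
page[148:164] = b"ACME CABLE CO   "   # hypothetical vendor
page[168:184] = b"MSHD-PCIE-2M    "   # hypothetical part number
page[146] = 2                          # 2-m assembly
print(decode_eeprom(bytes(page)))
```

In a real system the page image would be read over the cable's management interface rather than fabricated in memory.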
Proprietary system I/O interfaces such as NumaLink have used MiniSAS HD, as have some IBTA FDR 14G, FCoE 10G and FC 16G per-lane applications. MiniSAS HD interconnects also appear likely to be used for the developing PCIe 16 GT/s standard. Specific PCIe 3.1 MiniSAS HD cabling applications include datacenter ToR Switch to Leaf Server links and very new converged-protocol I/O Fabric networks. Volume production of PCIe Device products is inline-tested with test/measurement instrumentation networks using PCIe 3.1 external cables. PCIe 3.1 Fabric networks are forecast to grow in small and medium-size datacenters as a lower-cost alternative to Ethernet and InfiniBand networks.