<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://rs-485.com/index.php?action=history&amp;feed=atom&amp;title=InfiniBand</id>
	<title>InfiniBand - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://rs-485.com/index.php?action=history&amp;feed=atom&amp;title=InfiniBand"/>
	<link rel="alternate" type="text/html" href="https://rs-485.com/index.php?title=InfiniBand&amp;action=history"/>
	<updated>2026-05-04T13:48:11Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.42.3</generator>
	<entry>
		<id>https://rs-485.com/index.php?title=InfiniBand&amp;diff=626&amp;oldid=prev</id>
		<title>RS-485: Imported from Wikipedia (overwrite)</title>
		<link rel="alternate" type="text/html" href="https://rs-485.com/index.php?title=InfiniBand&amp;diff=626&amp;oldid=prev"/>
		<updated>2026-05-02T19:05:05Z</updated>

		<summary type="html">&lt;p&gt;Imported from Wikipedia (overwrite)&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{Short description|Network standard}}&lt;br /&gt;
{{Infobox organization&lt;br /&gt;
|name         = InfiniBand Trade Association&lt;br /&gt;
|image        = InfiniBand Trade Association logo.jpg&lt;br /&gt;
|image_size   = 160px&lt;br /&gt;
|formation    = 1999&lt;br /&gt;
|type         = Industry trade group&lt;br /&gt;
|purpose      = Promoting InfiniBand&lt;br /&gt;
|headquarters = [[Beaverton, Oregon]], U.S.&lt;br /&gt;
|membership   = &lt;br /&gt;
|website      = {{URL|https://www.infinibandta.org/|infinibandta.org}}&lt;br /&gt;
}}{{Redirects here|IBTA|text=It could also refer to [[Ibotta]]&amp;#039;s [[ticker symbol]].}}&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;InfiniBand&amp;#039;&amp;#039;&amp;#039; (&amp;#039;&amp;#039;&amp;#039;IB&amp;#039;&amp;#039;&amp;#039;) is a computer networking standard used in [[high-performance computing]] that features very high [[throughput]] and very low [[Network latency|latency]]. It is used for data interconnect both among and within computers. InfiniBand is also used as either a direct or switched interconnect between servers and storage systems, as well as an interconnect between storage systems. It is designed to be [[scalability|scalable]] and uses a [[switched fabric]] [[network topology]].&lt;br /&gt;
Between 2014 and June 2016,&amp;lt;ref name=&amp;quot;down&amp;quot;&amp;gt;{{cite web |url=https://www.top500.org/lists/top500/2016/06/highlights/ |title=Highlights– June 2016 |quote=InfiniBand technology is now found on 205 systems, down from 235 systems, and is now the second most-used internal system interconnect technology. Gigabit Ethernet has risen to 218 systems up from 182 systems, in large part thanks to 176 systems now using 10G interfaces. |date=June 2016 |publisher=Top500.Org |access-date=September 26, 2021}}&amp;lt;/ref&amp;gt; it was the most commonly used interconnect in the [[TOP500]] list of supercomputers.&lt;br /&gt;
&lt;br /&gt;
[[Mellanox]] (acquired by [[Nvidia]]) manufactures InfiniBand [[host bus adapter]]s and [[network switch]]es, which are used by large computer system and database vendors in their product lines.&amp;lt;ref name=&amp;quot;oracle&amp;quot;&amp;gt;{{Cite web | url= http://www.nextplatform.com/2016/02/22/oracle-engineers-its-own-infiniband-interconnects/ |title = Oracle Engineers Its Own InfiniBand Interconnects |work= The Next Platform |author= Timothy Prickett Morgan |date= February 23, 2016 |access-date= September 26, 2021 }}&amp;lt;/ref&amp;gt; &lt;br /&gt;
&lt;br /&gt;
As a computer cluster interconnect, IB competes with [[Ethernet]], [[Fibre Channel]], and Intel [[Omni-Path]]. The technology is promoted by the &amp;#039;&amp;#039;&amp;#039;InfiniBand Trade Association&amp;#039;&amp;#039;&amp;#039;.&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
InfiniBand originated in 1999 from the merger of two competing designs: Future I/O and Next Generation I/O (NGIO). NGIO was led by [[Intel]], with a specification released in 1998,&amp;lt;ref&amp;gt;{{Cite news |title= Intel Introduces Next Generation I/O for Computing Servers |author= Scott Bekker |date= November 11, 1998 |url= https://rcpmag.com/articles/1998/11/11/intel-introduces-next-generation-io-for-computing-servers.aspx |work= Redmond Channel Partner |access-date= September 28, 2021 }}&amp;lt;/ref&amp;gt; and joined by [[Sun Microsystems]] and [[Dell]].&lt;br /&gt;
Future I/O was backed by  [[Compaq]], [[IBM]], and [[Hewlett-Packard]].&amp;lt;ref&amp;gt;{{Cite news |title= Warring NGIO and Future I/O groups to merge |author= Will Wade |date= August 31, 1999 |work= EE Times |url= https://www.eetimes.com/warring-ngio-and-future-i-o-groups-to-merge/ |access-date= September 26, 2021 }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
This led to the formation of the InfiniBand Trade Association (IBTA), which included both sets of hardware vendors as well as software vendors such as [[Microsoft]].&lt;br /&gt;
At the time it was thought that some of the more powerful computers were approaching the [[interconnect bottleneck]] of the [[Peripheral Component Interconnect|PCI]] bus, in spite of upgrades like [[PCI-X]].&amp;lt;ref name=pentakalos&amp;gt;{{cite web|last1=Pentakalos|first1=Odysseas|title=An Introduction to the InfiniBand Architecture|url=http://www.oreillynet.com/pub/a/network/2002/02/04/windows.html|website=O&amp;#039;Reilly|access-date=28 July 2014}}&amp;lt;/ref&amp;gt; Version 1.0 of the InfiniBand Architecture Specification was released in 2000. Initially, the IBTA envisioned IB as a simultaneous replacement for PCI in I/O, for Ethernet in the [[Central apparatus room|machine room]], for the [[Cluster (computing)|cluster]] interconnect, and for [[Fibre Channel]]. IBTA also envisaged decomposing server hardware on an IB [[Fabric computing|fabric]].&lt;br /&gt;
&lt;br /&gt;
[[Mellanox]] had been founded in 1999 to develop NGIO technology, but by 2001 it was shipping an InfiniBand product line called InfiniBridge at 10&amp;amp;nbsp;Gbit/s.&amp;lt;ref name=timeline&amp;gt;{{cite web |title= Timeline |url= http://www.mellanox.com/page/timeline |publisher=Mellanox Technologies |access-date= September 26, 2021 }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
Following the burst of the [[dot-com bubble]] there was hesitation in the industry to invest in such a far-reaching technology jump.&amp;lt;ref name=kim&amp;gt;{{cite web|last1=Kim|first1=Ted|title=Brief History of InfiniBand: Hype to Pragmatism|url=https://blogs.oracle.com/RandomDude/entry/history_hype_to_pragmatism|publisher=Oracle |access-date= September 28, 2021 |url-status=dead|archive-url=https://web.archive.org/web/20140808200954/https://blogs.oracle.com/RandomDude/entry/history_hype_to_pragmatism|archive-date=8 August 2014}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
By 2002, Intel announced that instead of shipping IB integrated circuits (&amp;quot;chips&amp;quot;), it would focus on developing [[PCI Express]], and Microsoft discontinued IB development in favor of extending Ethernet. [[Sun Microsystems]] and [[Hitachi]] continued to support IB.&amp;lt;ref&amp;gt;{{cite web |title=Sun confirms commitment to InfiniBand |date= December 2, 2002 |author= Computerwire |url= https://www.theregister.co.uk/2002/12/30/sun_confirms_commitment_to_infiniband/ |website=The Register |access-date= September 26, 2021 }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 2003, the [[System X (supercomputer)|System X]] supercomputer built at [[Virginia Tech]]  used InfiniBand in what was estimated to be the third largest computer in the world at the time.&amp;lt;ref&amp;gt;{{Cite news |title= Virginia Tech Builds 10 TeraFlop Computer |url= https://www.rdworldonline.com/virginia-tech-builds-10-teraflop-computer/ |work= R&amp;amp;D World |date= November 30, 2003 |access-date= September 28, 2021 }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
The [[OpenFabrics Alliance|OpenIB Alliance]] (later renamed OpenFabrics Alliance) was founded in 2004 to develop an open set of software for the [[Linux]] kernel. In February 2005, this support was accepted into Linux kernel 2.6.11.&amp;lt;ref&amp;gt;{{cite news | title= Linux Kernel 2.6.11 Supports InfiniBand |url= http://www.internetnews.com/dev-news/article.php/3485401 |work= Internet News |author= Sean Michael Kerner |date= February 24, 2005 |access-date= September 28, 2021 }}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{cite news| title= OpenIB Alliance Achieves Acceptance By Kernel.org |url= https://www.hpcwire.com/2005/01/21/openib-alliance-achieves-acceptance-by-kernel-org/ |work= Press release |date= January 21, 2005 |author= OpenIB Alliance |access-date= September 28, 2021 }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
In November 2005, storage devices using InfiniBand were finally released by vendors such as Engenio.&amp;lt;ref name=&amp;quot;comeback&amp;quot;&amp;gt;{{Citation | url = http://www.infostor.com/index/articles/display/248655/articles/infostor/volume-10/issue-2/news-analysis-trends/news-analysis-trends/is-infiniband-poised-for-a-comeback.html | title = Is InfiniBand poised for a comeback? | journal = Infostor |author= Ann Silverthorn |volume = 10 | issue = 2 |date= January 12, 2006 |access-date= September 28, 2021 }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
Cisco, desiring to keep technology superior to Ethernet off the market, adopted a &amp;quot;buy to kill&amp;quot; strategy, acquiring and shutting down InfiniBand switching companies such as Topspin.&amp;lt;ref&amp;gt;{{cite web |last1=Connor |first1=Deni |title=What Cisco-Topspin deal means for InfiniBand |url=https://www.networkworld.com/article/863883/data-center-what-cisco-topspin-deal-means-for-infiniband.html |website=Network World |access-date=19 June 2024 |language=en}}&amp;lt;/ref&amp;gt; {{Citation needed|reason=Given citation doesn&amp;#039;t support the allegation|date=August 2024}}&lt;br /&gt;
&lt;br /&gt;
Of the top 500 supercomputers in 2009, [[Gigabit Ethernet]] was the internal interconnect technology in 259 installations, compared with 181 using InfiniBand.&amp;lt;ref&amp;gt;{{cite web |last1= Lawson |first1= Stephen |title= Two rival supercomputers duke it out for top spot |url= https://www.computerworld.com/article/2521602/two-rival-supercomputers-duke-it-out-for-top-spot.html |date= November 16, 2009 |work= Computerworld |access-date= September 29, 2021 |archive-date= September 29, 2021 |archive-url= https://web.archive.org/web/20210929213924/https://www.computerworld.com/article/2521602/two-rival-supercomputers-duke-it-out-for-top-spot.html |url-status= dead }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
In 2010, market leaders Mellanox and Voltaire merged, leaving just one other IB vendor, [[QLogic]], primarily a [[Fibre Channel]] vendor.&amp;lt;ref&amp;gt;{{cite web|last1=Raffo|first1=Dave|title=Largest InfiniBand vendors merge; eye converged networks|url=http://itknowledgeexchange.techtarget.com/storage-soup/largest-infiniband-vendors-merge-eye-converged-networks/|access-date=29 July 2014|archive-date=1 July 2017|archive-url=https://web.archive.org/web/20170701002647/http://itknowledgeexchange.techtarget.com/storage-soup/largest-infiniband-vendors-merge-eye-converged-networks/|url-status=dead}}&amp;lt;/ref&amp;gt; &lt;br /&gt;
At the 2011 [[International Supercomputing Conference]], links running at about 56 gigabits per second (known as FDR, see below) were announced and demonstrated by connecting booths in the trade show.&amp;lt;ref&amp;gt;{{cite news |url= http://www.cio.com/article/684732/Mellanox_Demos_Souped_Up_Version_of_Infiniband |title = Mellanox Demos Souped-Up Version of InfiniBand |work= CIO |author= Mikael Ricknäs |date= June 20, 2011 |url-status=dead |archive-date= April 6, 2012 |archive-url= https://web.archive.org/web/20120406182103/http://www.cio.com/article/684732/Mellanox_Demos_Souped_Up_Version_of_Infiniband |access-date= September 30, 2021 }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
In 2012, Intel acquired QLogic&amp;#039;s InfiniBand technology, leaving only one independent supplier.&amp;lt;ref&amp;gt;{{cite news | url = https://www.hpcwire.com/2012/01/23/intel_snaps_up_infiniband_technology_product_line_from_qlogic/ | title = Intel Snaps Up InfiniBand Technology, Product Line from QLogic |work= HPCwire |author= Michael Feldman |date= January 23, 2012 |access-date= September 29, 2021 }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By 2014, InfiniBand was the most popular internal connection technology for supercomputers, although within two years, [[10 Gigabit Ethernet]] started displacing it.&amp;lt;ref name=&amp;quot;down&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 2016, it was reported that [[Oracle Corporation]] (an investor in Mellanox) might engineer its own InfiniBand hardware.&amp;lt;ref name=&amp;quot;oracle&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 2019 [[Nvidia]] acquired [[Mellanox Technologies|Mellanox]], the last independent supplier of InfiniBand products.&amp;lt;ref&amp;gt;{{Cite news |title= Nvidia to Acquire Mellanox for $6.9 Billion |date= March 11, 2019 |work= Press release |url= https://nvidianews.nvidia.com/news/nvidia-to-acquire-mellanox-for-6-9-billion |access-date= September 26, 2021 }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Specification ==&lt;br /&gt;
Specifications are published by the InfiniBand Trade Association.&lt;br /&gt;
&lt;br /&gt;
=== Performance ===&lt;br /&gt;
The original names for the speeds were single data rate (SDR), double data rate (DDR), and quad data rate (QDR), as given below.&amp;lt;ref name=&amp;quot;comeback&amp;quot; /&amp;gt; Subsequently, other three-letter initialisms were added for even higher data rates.&amp;lt;ref name=&amp;quot;fdr_fact_sheet&amp;quot;&amp;gt;{{Cite web |date=November 11, 2021 |title=FDR InfiniBand Fact Sheet |url=https://cw.infinibandta.org/document/dl/7260 |access-date=September 30, 2021 |publisher=InfiniBand Trade Association |archive-date=August 26, 2016 |archive-url=https://web.archive.org/web/20160826064526/https://cw.infinibandta.org/document/dl/7260 |url-status=dead }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ InfiniBand unidirectional data rates&lt;br /&gt;
! rowspan=&amp;quot;2&amp;quot; |&lt;br /&gt;
! rowspan=&amp;quot;2&amp;quot; |Year&amp;lt;ref name=&amp;quot;ccgrid11-ib-hse-23&amp;quot;&amp;gt;{{cite web |last=Panda |first=Dhabaleswar K. |author2=Sayantan Sur |date=2011 |title=Network Speed Acceleration with IB and HSE |url=http://www.ics.uci.edu/~ccgrid11/files/ccgrid11-ib-hse_last.pdf#page=23 |access-date=13 September 2014 |work=Designing Cloud and Grid Computing Systems with InfiniBand and High-Speed Ethernet |publisher=CCGrid 2011 |pages=23 |location=Newport Beach, CA, USA}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; rowspan=&amp;quot;2&amp;quot; |Line code&lt;br /&gt;
! rowspan=&amp;quot;2&amp;quot; |Signaling rate (Gbit/s)&lt;br /&gt;
! colspan=&amp;quot;4&amp;quot; |[[Throughput]] (Gbit/s)&amp;lt;ref name=&amp;quot;ib_over&amp;quot;&amp;gt;{{Cite web |title=InfiniBand Roadmap: IBTA - InfiniBand Trade Association |url=http://www.infinibandta.org/content/pages.php?pg=technology_overview |url-status=dead |archive-url=https://web.archive.org/web/20110929111021/http://www.infinibandta.org/content/pages.php?pg=technology_overview |archive-date=2011-09-29 |access-date=2009-10-27}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
! rowspan=&amp;quot;2&amp;quot; |Adapter latency (μs)&amp;lt;ref&amp;gt;{{cite web |author1=Oded Paz |title=InfiniBand Essentials Every HPC Expert Must Know |url=https://www.hpcadvisorycouncil.com/events/2014/swiss-workshop/presos/Day_1/1_Mellanox.pdf |publisher=Mellanox technologies |date=April 2014| archive-url=https://web.archive.org/web/20250511121333/https://www.hpcadvisorycouncil.com/events/2014/swiss-workshop/presos/Day_1/1_Mellanox.pdf| archive-date=2025-05-11| url-status=live}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
!1x&lt;br /&gt;
!4x&lt;br /&gt;
!8x&lt;br /&gt;
!12x&lt;br /&gt;
|-&lt;br /&gt;
!{{abbr|SDR|Single Data Rate}}&lt;br /&gt;
|2001, 2003&lt;br /&gt;
| rowspan=&amp;quot;6&amp;quot; |[[Non-return-to-zero|NRZ]]&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |[[8b/10b encoding|8b/10b]]&amp;lt;ref&amp;gt;{{cite web |title=InfiniBand Types and Speeds |url=https://www.advancedclustering.com/act_kb/infiniband-types-speeds/}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
|2.5&lt;br /&gt;
|2&lt;br /&gt;
|&amp;#039;&amp;#039;&amp;#039;8&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
|16&lt;br /&gt;
|24&lt;br /&gt;
|5&lt;br /&gt;
|-&lt;br /&gt;
!{{abbr|DDR|Double data rate}}&lt;br /&gt;
|2005&lt;br /&gt;
|5&lt;br /&gt;
|4&lt;br /&gt;
|&amp;#039;&amp;#039;&amp;#039;16&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
|32&lt;br /&gt;
|48&lt;br /&gt;
|2.5&lt;br /&gt;
|-&lt;br /&gt;
!{{abbr|QDR|Quad Data Rate}}&lt;br /&gt;
|2007&lt;br /&gt;
|10&lt;br /&gt;
|8&lt;br /&gt;
|&amp;#039;&amp;#039;&amp;#039;32&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
|64&lt;br /&gt;
|96&lt;br /&gt;
|1.3&lt;br /&gt;
|-&lt;br /&gt;
!{{abbr|FDR10|Fourteen Data Rate, 10 Gbit/s per lane}}&lt;br /&gt;
|2011&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |[[64b/66b encoding|64b/66b]]&lt;br /&gt;
|10.3125&amp;lt;ref&amp;gt;{{Cite web |title=Interfaces |url=https://docs.nvidia.com/networking/display/SB77X0EDR/Interfaces |access-date=2023-11-12 |website=NVIDIA Docs |language=en |quote=FDR10 is a non-standard InfiniBand data rate, where each lane of a 4X port runs a bit rate of 10.3125&amp;amp;nbsp;Gbit/s with a 64b/66b encoding, resulting in an effective bandwidth of 40&amp;amp;nbsp;Gbit/s. FDR10 supports 20% more bandwidth over QDR due to better encoding rate.}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
|10&lt;br /&gt;
|&amp;#039;&amp;#039;&amp;#039;40&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
|80&lt;br /&gt;
|120&lt;br /&gt;
|0.7&lt;br /&gt;
|-&lt;br /&gt;
!{{abbr|FDR|Fourteen Data Rate}}&lt;br /&gt;
|2011&lt;br /&gt;
|14.0625&amp;lt;ref&amp;gt;{{Cite web |date=2018-04-29 |title=324-Port InfiniBand FDR SwitchX® Switch Platform Hardware User Manual |url=https://network.nvidia.com/pdf/user_manuals/SX6518_User_Manual.pdf |access-date=2023-11-12 |website=nVidia |at=section 1.2 |quote=InfiniBand FDR and FDR10 Overview [...] FDR, standard InfiniBand data rate, where each lane of a 4X port runs a bit rate of 14.0625&amp;amp;nbsp;Gbit/s with a 64b/66b encoding, resulting in an effective bandwidth of 54.54&amp;amp;nbsp;Gbit/s. The FDR physical layer is an IBTA specified physical layer using different block types, deskew mechanism and framing rules. The SX6518 switch also supports FDR10, a non-standard InfiniBand data rate, where each lane of a 4X port runs a bit rate of 10.3125&amp;amp;nbsp;Gbit/s with a 64b/66b encoding, resulting in an effective bandwidth of 40&amp;amp;nbsp;Gbit/s.}}&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;fdr_fact_sheet&amp;quot; /&amp;gt;&lt;br /&gt;
|13.64&lt;br /&gt;
|&amp;#039;&amp;#039;&amp;#039;54.54&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
|109.08&lt;br /&gt;
|163.64&lt;br /&gt;
|0.7&lt;br /&gt;
|-&lt;br /&gt;
!{{abbr|EDR|Enhanced Data Rate}}&lt;br /&gt;
|2014&amp;lt;ref name=&amp;quot;ib_roadmap&amp;quot;&amp;gt;{{Cite web |title=InfiniBand Roadmap - Advancing InfiniBand |url=https://www.infinibandta.org/infiniband-roadmap/ |website=InfiniBand Trade Association |language=en-US}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
|25.78125&lt;br /&gt;
|25&lt;br /&gt;
|&amp;#039;&amp;#039;&amp;#039;100&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
|200&lt;br /&gt;
|300&lt;br /&gt;
|0.5&lt;br /&gt;
|-&lt;br /&gt;
!{{abbr|HDR|High Data Rate}}&lt;br /&gt;
|2018&amp;lt;ref name=&amp;quot;ib_roadmap&amp;quot; /&amp;gt;&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |[[Pulse-amplitude modulation|PAM4]]&lt;br /&gt;
| rowspan=&amp;quot;3&amp;quot; |256b/257b{{efn-lr|Using Reed-Solomon [[forward error correction]]}}&lt;br /&gt;
|53.125&amp;lt;ref&amp;gt;{{Cite web |title=Introduction |url=https://docs.nvidia.com/networking/display/ConnectX6VPI/Introduction |access-date=2023-11-12 |website=NVIDIA Docs |language=en}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
|50&lt;br /&gt;
|&amp;#039;&amp;#039;&amp;#039;200&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
|400&lt;br /&gt;
|600&lt;br /&gt;
|&amp;lt;0.6&amp;lt;ref&amp;gt;{{cite web |title=ConnectX-6 vpi card - Product brief |url=https://network.nvidia.com/files/doc-2020/pb-connectx-6-vpi-card.pdf |publisher=Mellanox technologies |access-date=17 September 2025| archive-url=https://web.archive.org/web/20220412010630/https://network.nvidia.com/files/doc-2020/pb-connectx-6-vpi-card.pdf| archive-date=2022-04-12| url-status=dead}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
!{{abbr|NDR|Next Data Rate}}&lt;br /&gt;
|2022&amp;lt;ref name=&amp;quot;ib_roadmap&amp;quot; /&amp;gt;&lt;br /&gt;
|106.25&amp;lt;ref&amp;gt;{{Cite web |title=Introduction |url=https://docs.nvidia.com/networking/display/ConnectX7VPI/Introduction |access-date=2023-11-12 |website=NVIDIA Docs |language=en}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
|100&lt;br /&gt;
|&amp;#039;&amp;#039;&amp;#039;400&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
|800&lt;br /&gt;
|1200&lt;br /&gt;
|{{dunno}}&lt;br /&gt;
|-&lt;br /&gt;
!{{abbr|XDR|Extended Data Rate}}&lt;br /&gt;
|2024&amp;lt;ref&amp;gt;{{Cite web |title=NVIDIA Announces New Switches Optimized for Trillion-Parameter GPU Computing and AI Infrastructure |url=http://nvidianews.nvidia.com/news/networking-switches-gpu-computing-ai |access-date=2024-03-19 |website=NVIDIA Newsroom |language=en-us}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
|212.5&amp;lt;ref&amp;gt;{{Cite web |title=NVIDIA ConnectX-8 User Manual, &amp;quot;Introduction&amp;quot; |url=https://docs.nvidia.com/networking/display/connectx8SuperNIC/Introduction |access-date=2026-02-24 |website=NVIDIA Docs |language=en}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
|200&lt;br /&gt;
|&amp;#039;&amp;#039;&amp;#039;800&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
|1600&lt;br /&gt;
|2400&lt;br /&gt;
|{{dunno}}&lt;br /&gt;
|-&lt;br /&gt;
!{{abbr|GDR|t.b.d. Data Rate}}&lt;br /&gt;
|{{TBA}}&lt;br /&gt;
|{{TBD}}&lt;br /&gt;
|{{TBD}}&lt;br /&gt;
|~ &amp;#039;&amp;#039;425&amp;#039;&amp;#039; {{TBD}}&lt;br /&gt;
|400&lt;br /&gt;
|&amp;#039;&amp;#039;&amp;#039;1600&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
|3200&lt;br /&gt;
|4800&lt;br /&gt;
|{{dunno}}&lt;br /&gt;
|-&lt;br /&gt;
!{{abbr|LDR|t.b.d. Data Rate}}&lt;br /&gt;
|{{TBA}}&lt;br /&gt;
|{{TBD}}&lt;br /&gt;
|{{TBD}}&lt;br /&gt;
|~ &amp;#039;&amp;#039;850&amp;#039;&amp;#039; {{TBD}}&lt;br /&gt;
|800&lt;br /&gt;
|&amp;#039;&amp;#039;&amp;#039;3200&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
|6400&lt;br /&gt;
|9600&lt;br /&gt;
|{{dunno}}&lt;br /&gt;
|}&lt;br /&gt;
; Notes&lt;br /&gt;
{{Notelist-lr}}&lt;br /&gt;
&lt;br /&gt;
Each link is duplex. Links can be aggregated: most systems use a connector carrying four links/lanes ([[QSFP]]). HDR often makes use of 2x links (known as HDR100: a 100&amp;amp;nbsp;Gb link using 2 lanes of HDR, while still using a QSFP connector). NDR introduced OSFP connectors, which carry one or two links at 2x (NDR200) or 4x (NDR400); two links in one connector are not logically configured as a single 8x link, even when connecting switches together with an OSFP cable.&lt;br /&gt;
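&lt;br /&gt;
As an illustrative calculation (not taken from the specification text), the per-lane throughput figures in the table for the NRZ-coded rows follow from multiplying the signaling rate by the line-code efficiency, and the 4x, 8x and 12x columns scale that figure by the number of lanes. For SDR and FDR respectively:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;2.5 \times \tfrac{8}{10} = 2\ \text{Gbit/s per lane}, \qquad 2 \times 4 = 8\ \text{Gbit/s for a 4x link}&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;14.0625 \times \tfrac{64}{66} \approx 13.64\ \text{Gbit/s per lane}, \qquad 13.64 \times 4 \approx 54.54\ \text{Gbit/s for a 4x link}&amp;lt;/math&amp;gt;&lt;br /&gt;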
&lt;br /&gt;
InfiniBand provides [[remote direct memory access]] (RDMA) capabilities for low CPU overhead.&lt;br /&gt;
&lt;br /&gt;
=== Topology ===&lt;br /&gt;
&lt;br /&gt;
InfiniBand uses a [[switched fabric]] topology, as opposed to early shared medium [[Ethernet]]. All transmissions begin or end at a channel adapter. Each processor contains a host channel adapter (HCA) and each peripheral has a target channel adapter (TCA). These adapters can also exchange information for security or [[quality of service]] (QoS).&lt;br /&gt;
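&lt;br /&gt;
As an illustration of how a host sees its channel adapter, the following minimal sketch (a non-normative example using the libibverbs library from OFED, described under Software interfaces below) lists the HCAs present on a host and prints the state and local identifier (LID) of port 1 on each:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
/* Illustrative sketch: enumerate the host channel adapters (HCAs) visible&lt;br /&gt;
   to this host and query port 1 on each, using libibverbs. */&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;infiniband/verbs.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(void)&lt;br /&gt;
{&lt;br /&gt;
    int num = 0;&lt;br /&gt;
    struct ibv_device **devs = ibv_get_device_list(&amp;amp;num);  /* enumerate HCAs */&lt;br /&gt;
    if (!devs)&lt;br /&gt;
        return 1;&lt;br /&gt;
&lt;br /&gt;
    for (int i = 0; i &amp;lt; num; i++) {&lt;br /&gt;
        struct ibv_context *ctx = ibv_open_device(devs[i]);&lt;br /&gt;
        if (!ctx)&lt;br /&gt;
            continue;&lt;br /&gt;
        struct ibv_port_attr port;&lt;br /&gt;
        if (ibv_query_port(ctx, 1, &amp;amp;port) == 0)   /* port numbers start at 1 */&lt;br /&gt;
            printf(&amp;quot;%s: port 1 state=%d lid=%d width=%d\n&amp;quot;,&lt;br /&gt;
                   ibv_get_device_name(devs[i]),&lt;br /&gt;
                   port.state, port.lid, port.active_width);&lt;br /&gt;
        ibv_close_device(ctx);&lt;br /&gt;
    }&lt;br /&gt;
    ibv_free_device_list(devs);&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;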
&lt;br /&gt;
=== Messages ===&lt;br /&gt;
&lt;br /&gt;
InfiniBand transmits data in packets of up to 4&amp;amp;nbsp;KB that are taken together to form a message. A message can be any of the following (see the sketch after this list):&lt;br /&gt;
* a remote direct memory access read or write&lt;br /&gt;
* a [[Communication channel|channel]] send or receive&lt;br /&gt;
* a transaction-based operation (that can be reversed)&lt;br /&gt;
* a [[multicast]] transmission&lt;br /&gt;
* an [[atomic operation]]&lt;br /&gt;
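&lt;br /&gt;
The following is a minimal, illustrative sketch (not a normative example from the specification) of posting a single RDMA write message with the verbs API described under Software interfaces below; it assumes that a connected queue pair, a registered local buffer, and the exchange of the remote virtual address and remote key (rkey) have already been arranged:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
/* Illustrative sketch: post one RDMA-write work request.  The queue pair (qp),&lt;br /&gt;
   the memory region (mr) covering buf, and the remote address and rkey are&lt;br /&gt;
   assumed to have been created and exchanged already. */&lt;br /&gt;
#include &amp;lt;stdint.h&amp;gt;&lt;br /&gt;
#include &amp;lt;string.h&amp;gt;&lt;br /&gt;
#include &amp;lt;infiniband/verbs.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr, void *buf,&lt;br /&gt;
                    uint32_t len, uint64_t remote_addr, uint32_t rkey)&lt;br /&gt;
{&lt;br /&gt;
    struct ibv_sge sge = {&lt;br /&gt;
        .addr   = (uintptr_t) buf,   /* local source buffer */&lt;br /&gt;
        .length = len,&lt;br /&gt;
        .lkey   = mr-&amp;gt;lkey,          /* local key from ibv_reg_mr() */&lt;br /&gt;
    };&lt;br /&gt;
&lt;br /&gt;
    struct ibv_send_wr wr;&lt;br /&gt;
    memset(&amp;amp;wr, 0, sizeof(wr));&lt;br /&gt;
    wr.opcode     = IBV_WR_RDMA_WRITE;   /* write directly into remote memory */&lt;br /&gt;
    wr.sg_list    = &amp;amp;sge;&lt;br /&gt;
    wr.num_sge    = 1;&lt;br /&gt;
    wr.send_flags = IBV_SEND_SIGNALED;   /* request a completion entry */&lt;br /&gt;
    wr.wr.rdma.remote_addr = remote_addr;&lt;br /&gt;
    wr.wr.rdma.rkey        = rkey;       /* remote key of the target region */&lt;br /&gt;
&lt;br /&gt;
    struct ibv_send_wr *bad_wr;&lt;br /&gt;
    return ibv_post_send(qp, &amp;amp;wr, &amp;amp;bad_wr);  /* 0 on success */&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Because the opcode is an RDMA write, the data is placed directly into the remote memory region without involving the remote CPU, which is what gives RDMA its low overhead.&lt;br /&gt;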
&lt;br /&gt;
=== Physical interconnection ===&lt;br /&gt;
[[Image:Infinibandport.jpg|thumb|x100px|right|InfiniBand switch with CX4/SFF-8470 connectors]]&lt;br /&gt;
&lt;br /&gt;
In addition to a board form factor connection, it can use both active and passive copper (up to 10 meters) and [[optical fiber cable]] (up to 10&amp;amp;nbsp;km).&amp;lt;ref name=faq&amp;gt;{{cite web|title=Specification FAQ|url=http://www.infinibandta.org/content/pages.php?pg=technology_faq|publisher=ITA|access-date=30 July 2014|archive-url=https://web.archive.org/web/20161124000007/http://infinibandta.org/content/pages.php?pg=technology_faq|archive-date=24 November 2016|url-status=dead}}&amp;lt;/ref&amp;gt; &lt;br /&gt;
[[QSFP]] connectors are used.&lt;br /&gt;
&lt;br /&gt;
The InfiniBand Association also specified the [[CXP (connector)|CXP]] connector system for speeds up to 120&amp;amp;nbsp;Gbit/s over copper, active optical cables, and optical transceivers using parallel multi-mode fiber cables with 24-fiber MPO connectors.{{citation needed|date=August 2017}}&lt;br /&gt;
&lt;br /&gt;
=== Software interfaces ===&lt;br /&gt;
Mellanox operating system support is available for [[Solaris (operating system)|Solaris]], [[FreeBSD]],&amp;lt;ref&amp;gt;{{cite web|title=Mellanox OFED for FreeBSD|url=http://www.mellanox.com/page/products_dyn?product_family=193|publisher=Mellanox|access-date=19 September 2018}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{cite web |author1=Mellanox Technologies |title=FreeBSD Kernel Interfaces Manual, mlx5en |url=https://www.freebsd.org/cgi/man.cgi?query=mlx5en |website=FreeBSD Man Pages |publisher=FreeBSD |access-date=19 September 2018 |language=en |date=3 December 2015}}&amp;lt;/ref&amp;gt; [[Red Hat Enterprise Linux]], [[SUSE Linux Enterprise Server]] (SLES), [[Windows (operating system)|Windows]], [[HP-UX]], [[VMware ESX]],&amp;lt;ref&amp;gt;{{cite web|title=InfiniBand Cards - Overview|url= http://www.mellanox.com/page/infiniband_cards_overview|publisher= Mellanox|access-date= 30 July 2014}}&amp;lt;/ref&amp;gt; and [[AIX]].&amp;lt;ref&amp;gt;{{cite web|title=Implementing InfiniBand on IBM System p (IBM Redbook SG24-7351-00)|url=http://www.redbooks.ibm.com/redbooks/pdfs/sg247351.pdf}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
InfiniBand has no specific standard [[application programming interface]] (API). The standard only lists a set of verbs such as &amp;lt;code&amp;gt;ibv_open_device&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;ibv_post_send&amp;lt;/code&amp;gt;, which are abstract representations of functions or methods that must exist. The syntax of these functions is left to the vendors. For reference, this is sometimes called the &amp;#039;&amp;#039;verbs&amp;#039;&amp;#039; API. The [[de facto standard]] software is developed by the [[OpenFabrics Alliance]] and called the Open Fabrics Enterprise Distribution (OFED). It is released for Linux and FreeBSD under a choice of two licenses, [[GPL2]] or the [[BSD license]], and for Windows as Mellanox OFED (product names: WinOF / WinOF-2, which serves as the host controller driver for matching ConnectX 3 to 5 devices)&amp;lt;ref&amp;gt;[http://www.mellanox.com/page/products_dyn?product_family=32&amp;amp;menu_section=34 Mellanox OFED for Windows - WinOF / WinOF-2]&amp;lt;/ref&amp;gt; under a BSD license.&lt;br /&gt;
It has been adopted by most of the InfiniBand vendors, for [[Linux]], [[FreeBSD]], and [[Microsoft Windows]]. [[IBM]] refers to a software library called &amp;lt;code&amp;gt;libibverbs&amp;lt;/code&amp;gt;, for its [[AIX]] operating system, as well as &amp;quot;AIX InfiniBand verbs&amp;quot;.&amp;lt;ref&amp;gt;{{Cite web |title= Verbs API |work= IBM AIX 7.1 documentation |url= https://www.ibm.com/support/knowledgecenter/en/ssw_aix_71/com.ibm.aix.rdma/verbs_API.htm |date= 2020 |access-date= September 26, 2021 }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
The Linux kernel support was integrated in 2005 into the kernel version 2.6.11.&amp;lt;ref&amp;gt;{{Cite web |title= Verbs programming tutorial |date= March 11, 2014 |author= Dotan Barak |publisher= Mellanox |work= OpenSHEM, 2014 |url= https://www.csm.ornl.gov/workshops/openshmem2014/documents/presentations_and_tutorials/Tutorials/Verbs%20programming%20tutorial-final.pdf |access-date= September 26, 2021 }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
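&lt;br /&gt;
For illustration, the following minimal sketch (a non-normative example, assuming a Linux host with OFED and its libibverbs library installed) shows the first few verbs such an application typically issues: opening an HCA, allocating a protection domain, and registering a buffer for RDMA:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
/* Illustrative sketch of typical verbs-API setup (error handling shortened). */&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
#include &amp;lt;infiniband/verbs.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(void)&lt;br /&gt;
{&lt;br /&gt;
    struct ibv_device **devs = ibv_get_device_list(NULL);&lt;br /&gt;
    if (!devs || !devs[0])&lt;br /&gt;
        return 1;&lt;br /&gt;
&lt;br /&gt;
    struct ibv_context *ctx = ibv_open_device(devs[0]);  /* the ibv_open_device verb */&lt;br /&gt;
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;  /* protection domain */&lt;br /&gt;
    char *buf = malloc(4096);&lt;br /&gt;
    struct ibv_mr *mr = pd ? ibv_reg_mr(pd, buf, 4096,   /* pin and key the buffer */&lt;br /&gt;
                                        IBV_ACCESS_LOCAL_WRITE |&lt;br /&gt;
                                        IBV_ACCESS_REMOTE_WRITE) : NULL;&lt;br /&gt;
    if (!mr)&lt;br /&gt;
        return 1;&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;opened %s, lkey=0x%x rkey=0x%x\n&amp;quot;,&lt;br /&gt;
           ibv_get_device_name(devs[0]), mr-&amp;gt;lkey, mr-&amp;gt;rkey);&lt;br /&gt;
&lt;br /&gt;
    ibv_dereg_mr(mr);&lt;br /&gt;
    ibv_dealloc_pd(pd);&lt;br /&gt;
    ibv_close_device(ctx);&lt;br /&gt;
    ibv_free_device_list(devs);&lt;br /&gt;
    free(buf);&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The local and remote keys (lkey and rkey) returned by the registration are what later work requests, such as the RDMA write sketched above, use to authorize access to the buffer.&lt;br /&gt;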
&lt;br /&gt;
=== Ethernet over InfiniBand ===&lt;br /&gt;
&lt;br /&gt;
Ethernet over InfiniBand, abbreviated to EoIB, is an Ethernet implementation over the InfiniBand protocol and connector technology.&lt;br /&gt;
EoIB supports a range of Ethernet [[Bandwidth (computing)|bandwidths]], depending on the InfiniBand (IB) version.&amp;lt;ref&amp;gt;{{cite web|title=10 Advantages of InfiniBand | url=https://www.naddod.com/blog/top-10-advantages-of-infiniband |website=NADDOD|access-date=January 28, 2023}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
Ethernet&amp;#039;s implementation of the [[Internet Protocol Suite]], usually referred to as TCP/IP, differs in some details from the direct use of the InfiniBand protocol in IP over IB (IPoIB).&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot; &lt;br /&gt;
|-&lt;br /&gt;
|+ Ethernet over InfiniBand performance&lt;br /&gt;
! Type !! Lanes !! Bandwidth (Gbit/s) !! Compatible Ethernet type(s) !! Compatible Ethernet quantity&lt;br /&gt;
|-&lt;br /&gt;
! rowspan=&amp;quot;4&amp;quot; | SDR &lt;br /&gt;
| {{0|00}}1 || {{0|000}}2.5 || GbE to 2.5 GbE || {{0}}2 × GbE to 1 × {{0}}2.5 GbE&lt;br /&gt;
|-&lt;br /&gt;
| {{0|00}}4 || {{0|00}}10 || GbE to 10 GbE || 10 × GbE to 1 × 10 GbE&lt;br /&gt;
|-&lt;br /&gt;
| {{0|00}}8 || {{0|00}}20 || GbE to 10 GbE || 20 × GbE to 2 × 10 GbE&lt;br /&gt;
|-&lt;br /&gt;
| {{0}}12 || {{0|00}}30 || GbE to 25 GbE || 30 × GbE to 1 × 25 GbE + 1 × {{0}}5 GbE&lt;br /&gt;
|-&lt;br /&gt;
! rowspan=&amp;quot;4&amp;quot; | DDR &lt;br /&gt;
| {{0|00}}1 || {{0|000}}5 || GbE to 5 GbE || {{0}}5 × GbE to 1 × {{0}}5 GbE&lt;br /&gt;
|-&lt;br /&gt;
| {{0|00}}4 || {{0|00}}20 || GbE to 10 GbE || 20 × GbE to 2 × 10 GbE&lt;br /&gt;
|-&lt;br /&gt;
| {{0|00}}8 || {{0|00}}40 || GbE to 40 GbE || 40 × GbE to 1 × 40 GbE&lt;br /&gt;
|-&lt;br /&gt;
| {{0}}12 || {{0|00}}60 || GbE to 50 GbE || 60 × GbE to 1 × 50 GbE + 1 × 10 GbE&lt;br /&gt;
|-&lt;br /&gt;
! rowspan=&amp;quot;2&amp;quot; | QDR &lt;br /&gt;
| {{0|00}}1 || {{0|00}}10 || GbE to 10 GbE || 10 × GbE to 1 × 10 GbE&lt;br /&gt;
|-&lt;br /&gt;
| {{0|00}}4 || {{0|00}}40 || GbE to 40 GbE || 40 × GbE to 1 × 40 GbE&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [[100 Gigabit Ethernet]]&lt;br /&gt;
* [[iSCSI Extensions for RDMA]]&lt;br /&gt;
* [[iWARP]]&lt;br /&gt;
* [[List of interface bit rates]]&lt;br /&gt;
* [[Optical communication]]&lt;br /&gt;
* [[Parallel optical interface]]&lt;br /&gt;
* [[SCSI RDMA Protocol]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
{{Reflist|30em}}&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* {{Citation | arxiv = 1105.1827 | title = Dissecting a Small InfiniBand Application Using the Verbs API | bibcode = 2011arXiv1105.1827K | last1 = Kerr | first1 = Gregory | year = 2011 }}&lt;br /&gt;
* [http://www.infinibandta.org/ InfiniBand Trade Association web site]&lt;br /&gt;
&lt;br /&gt;
{{Computer-bus}}&lt;br /&gt;
{{Authority control}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Serial buses]]&lt;br /&gt;
[[Category:Computer buses]]&lt;br /&gt;
[[Category:Supercomputing]]&lt;br /&gt;
[[Category:Computer networks]]&lt;/div&gt;</summary>
		<author><name>RS-485</name></author>
	</entry>
</feed>