<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://rs-485.com/index.php?action=history&amp;feed=atom&amp;title=Compute_Express_Link</id>
	<title>Compute Express Link - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://rs-485.com/index.php?action=history&amp;feed=atom&amp;title=Compute_Express_Link"/>
	<link rel="alternate" type="text/html" href="https://rs-485.com/index.php?title=Compute_Express_Link&amp;action=history"/>
	<updated>2026-05-04T09:00:22Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.42.3</generator>
	<entry>
		<id>https://rs-485.com/index.php?title=Compute_Express_Link&amp;diff=463&amp;oldid=prev</id>
		<title>RS-485: Imported from Wikipedia (overwrite)</title>
		<link rel="alternate" type="text/html" href="https://rs-485.com/index.php?title=Compute_Express_Link&amp;diff=463&amp;oldid=prev"/>
		<updated>2026-05-02T18:00:10Z</updated>

		<summary type="html">&lt;p&gt;Imported from Wikipedia (overwrite)&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{Short description|Open standard processor interconnection for data centers}}&lt;br /&gt;
&lt;br /&gt;
{{Infobox computer hardware bus&lt;br /&gt;
| name        = Compute Express Link&lt;br /&gt;
| image       = ComputeExpressLinkLogo.png&lt;br /&gt;
| caption     = &lt;br /&gt;
| invent-date = {{Start date and age|2019}}&lt;br /&gt;
| width       = &lt;br /&gt;
| speed       = Full duplex {{bulleted list|&amp;#039;&amp;#039;&amp;#039;1.x&amp;#039;&amp;#039;&amp;#039;-&amp;#039;&amp;#039;&amp;#039;2.0&amp;#039;&amp;#039;&amp;#039; (32 [[GT/s]]): {{ubl|style=margin-left:1.6em;|3.938 GB/s (×1)|63.015 GB/s (×16)}}|&amp;#039;&amp;#039;&amp;#039;3.x&amp;#039;&amp;#039;&amp;#039; (64 [[GT/s]]): {{ubl|style=margin-left:1.6em;|7.563 GB/s (×1)|121.0 GB/s (×16)}}|&amp;#039;&amp;#039;&amp;#039;4.0&amp;#039;&amp;#039;&amp;#039; (128 [[GT/s]]): {{ubl|style=margin-left:1.6em;|15.126 GB/s (×1)|242.0 GB/s (×16)}}}}&lt;br /&gt;
| numdev      = 4096&lt;br /&gt;
| style       = s&lt;br /&gt;
| hotplug     = &lt;br /&gt;
| external    = &lt;br /&gt;
| website     = {{URL|www.computeexpresslink.org}}&lt;br /&gt;
|invent-name=[[Intel]]&lt;br /&gt;
}}&lt;br /&gt;
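The per-lane figures listed in the infobox follow from the line-coding overhead. A small sketch of the arithmetic for the PCIe 5.0-based generations, assuming 128b/130b line coding (an assumption the listed values are consistent with; CXL is not quoted here, only the numbers derived):

```python
# Deriving the infobox's 32 GT/s per-lane bandwidth, assuming PCIe 5.0's
# 128b/130b line coding (an assumption consistent with the listed values).
GT_PER_S = 32e9            # 32 GT/s raw transfer rate per lane
ENCODING = 128 / 130       # 128b/130b coding efficiency
BITS_PER_BYTE = 8

x1 = GT_PER_S * ENCODING / BITS_PER_BYTE / 1e9   # GB/s for one lane
x16 = 16 * x1                                    # GB/s for a x16 link

print(round(x1, 3), round(x16, 3))   # 3.938 63.015
```

The 64 GT/s and 128 GT/s generations use PAM-4 signaling with flit-level overhead instead of 128b/130b coding, so their figures are derived differently.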
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Compute Express Link&amp;#039;&amp;#039;&amp;#039; (&amp;#039;&amp;#039;&amp;#039;CXL&amp;#039;&amp;#039;&amp;#039;) is an [[open standard]] interconnect for high-speed, high-capacity [[central processing unit|CPU]]-to-device and CPU-to-memory connections, designed for high-performance [[data center]] computers.&amp;lt;ref&amp;gt;{{Cite web |url=https://www.computeexpresslink.org/about-cxl |title=ABOUT CXL |website=Compute Express Link |language=en|access-date=2019-08-09}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{Cite web |url=https://finance.yahoo.com/news/synopsys-delivers-industrys-first-compute-000000436.html |title=Synopsys Delivers Industry&amp;#039;s First Compute Express Link (CXL) IP Solution for Breakthrough Performance in Data-Intensive SoCs |website=finance.yahoo.com |publisher=[[Yahoo! Finance]] |access-date=2019-11-09}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{Cite web |url=https://newsroom.intel.com/editorials/milestone-moving-data/ |title=A Milestone in Moving Data |website=Intel Newsroom |publisher=[[Intel]] |access-date=2019-11-09}}{{dead link|date=May 2025}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{Cite web |url=https://www.businesswire.com/news/home/20190917005948/en/Compute-Express-Link-Consortium-CXL-Officially-Incorporates |title=Compute Express Link Consortium (CXL) Officially Incorporates; Announces Expanded Board of Directors |date=2019-09-17 |website=www.businesswire.com |publisher=[[Business Wire]] |language=en |access-date=2019-11-09}}&amp;lt;/ref&amp;gt; CXL is built on the [[Serial communication|serial]] [[PCI Express]] (PCIe) physical and electrical interface and includes a PCIe-based block [[input/output]] protocol (CXL.io) and new [[cache coherence|cache-coherent]] protocols for accessing [[main memory|system memory]] (CXL.cache) and [[device memory]] (CXL.mem). 
The serial communication and [[Memory pool|pooling]] capabilities allow CXL memory to overcome the performance and socket-packaging limitations of common [[DIMM]] memory when implementing high storage capacities.&amp;lt;ref&amp;gt;{{Cite web |title=StackPath |url=https://www.electronicdesign.com/technologies/embedded-revolution/article/21176870/rambus-cxl-ushers-in-a-new-era-of-datacenter-architecture |access-date=2023-02-03 |website=www.electronicdesign.com|date=13 October 2021 }}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{Cite web |last=Mann |first=Tobias |date=2022-12-05 |title=Just How Bad Is CXL Memory Latency? |url=https://www.nextplatform.com/2022/12/05/just-how-bad-is-cxl-memory-latency/ |access-date=2023-02-03 |website=The Next Platform |language=en-US}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== History ==&lt;br /&gt;
The CXL technology was primarily developed by [[Intel]]. The CXL Consortium was formed in March 2019 by founding members [[Alibaba Group]], [[Cisco Systems]], [[Dell EMC]], [[Meta Platforms|Meta]], [[Google]], [[Hewlett Packard Enterprise]] (HPE), [[Huawei]], [[Intel|Intel Corporation]] and [[Microsoft]],&amp;lt;ref name=CXL_announcement&amp;gt;{{Cite web|url=https://www.datacenterdynamics.com/en/news/intel-google-and-others-join-forces-cxl-interconnect/|title=Intel, Google and others join forces for CXL interconnect|first=Will|last=Calvert|date=March 13, 2019|website=www.datacenterdynamics.com}}&amp;lt;/ref&amp;gt;&amp;lt;ref name=CXL_1_0/&amp;gt; and officially incorporated in September 2019.&amp;lt;ref name=CXL_Incorporation&amp;gt;{{Cite web|url=https://www.businesswire.com/news/home/20190917005948/en/Compute-Express-Link-Consortium-CXL-Officially-Incorporates-Announces-Expanded-Board-of-Directors|title=Compute Express Link Consortium (CXL) Officially Incorporates; Announces Expanded Board of Directors|date=September 17, 2019|website=www.businesswire.com}}&amp;lt;/ref&amp;gt; As of January 2022, [[Advanced Micro Devices|AMD]], [[Nvidia]], [[Samsung Electronics]] and [[Xilinx]] joined the founders on the board of directors, while [[Arm Ltd.|ARM]], [[Broadcom]], [[Ericsson]], [[IBM]], [[Keysight]], [[Kioxia]], [[Marvell Technology]], [[Mellanox Technologies|Mellanox]], [[Microchip Technology]], [[Micron Technology|Micron]], [[Oracle Corporation]], [[Qualcomm]], [[Rambus]], [[Renesas]], [[Seagate Technology|Seagate]], [[SK Hynix]], [[Synopsys]], and [[Western Digital]], among others, were  contributing members.&amp;lt;ref name=CXL_memberlist&amp;gt;{{Cite web |url=https://www.computeexpresslink.org/members |title=Compute Express Link: Our Members |author=&amp;lt;!-- Unstated --&amp;gt; |date=2020 |website=CXL Consortium |access-date=2020-09-25}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{Cite web 
|url=https://community.amd.com/community/amd-business/blog/2019/07/18/amd-joins-consortia-to-advance-cxl-a-new-high-speed-interconnect-for-breakthrough-performance |title=AMD Joins Consortia to Advance CXL, a New High-Speed Interconnect for Breakthrough Performance |last=Papermaster |first=Mark |date=July 18, 2019 |website=Community.AMD |access-date=2020-09-25}}&amp;lt;/ref&amp;gt; Industry partners include the [[PCI-SIG]],&amp;lt;ref&amp;gt;{{cite web | url=https://www.computeexpresslink.org/post/cxl-consortium-and-pci-sig-announce-marketing-mou-agreement | title=CXL Consortium and PCI-SIG Announce Marketing MOU Agreement | date=23 September 2021 | access-date=18 January 2022 | archive-date=29 August 2023 | archive-url=https://web.archive.org/web/20230829103402/https://www.computeexpresslink.org/post/cxl-consortium-and-pci-sig-announce-marketing-mou-agreement | url-status=dead }}&amp;lt;/ref&amp;gt; [[Gen-Z (consortium)|Gen-Z]],&amp;lt;ref&amp;gt;{{cite web | url=https://www.computeexpresslink.org/industry-liaisons | title=Industry Liaisons | date=27 September 2023 }}&amp;lt;/ref&amp;gt; [[Storage Networking Industry Association|SNIA]],&amp;lt;ref&amp;gt;{{cite web | url=https://www.computeexpresslink.org/post/snia-and-cxl-consortium-form-strategic-alliance | title=SNIA and CXL Consortium Form Strategic Alliance | date=3 November 2020 | access-date=16 January 2022 | archive-date=16 January 2022 | archive-url=https://web.archive.org/web/20220116192740/https://www.computeexpresslink.org/post/snia-and-cxl-consortium-form-strategic-alliance | url-status=dead }}&amp;lt;/ref&amp;gt; and [[Distributed Management Task Force|DMTF]].&amp;lt;ref&amp;gt;{{cite web | url=https://www.computeexpresslink.org/post/dmtf-and-cxl-consortium-establish-work-register | title=DMTF and CXL Consortium Establish Work Register | date=14 April 2020 | access-date=16 January 2022 | archive-date=29 August 2023 | 
archive-url=https://web.archive.org/web/20230829104903/https://www.computeexpresslink.org/post/dmtf-and-cxl-consortium-establish-work-register | url-status=dead }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On April 2, 2020, the Compute Express Link and [[Gen-Z (consortium)|Gen-Z]] Consortiums announced plans to implement interoperability between the two technologies,&amp;lt;ref&amp;gt;{{Cite news |title=CXL Consortium and Gen-Z Consortium Announce MOU Agreement |author=&amp;lt;!-- Unstated --&amp;gt; |date=April 2, 2020 |url=https://b373eaf2-67af-4a29-b28c-3aae9e644f30.filesusr.com/ugd/0c1418_efb1cff3f41d486ea85d50ec638ea715.pdf |place=Beaverton, Oregon |access-date=September 25, 2020}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{Cite news |title=CXL Consortium and Gen-Z Consortium Announce MOU Agreement |author=&amp;lt;!-- Unstated --&amp;gt; |date=April 2, 2020 |url=https://genzconsortium.org/cxl-consortium-and-gen-z-consortium-announce-mou-agreement/ |archive-url=https://web.archive.org/web/20200411020243/https://genzconsortium.org/cxl-consortium-and-gen-z-consortium-announce-mou-agreement/ |url-status=usurped |archive-date=April 11, 2020 |access-date=April 11, 2020}}&amp;lt;/ref&amp;gt; with initial results presented in January 2021.&amp;lt;ref&amp;gt;{{cite web | url=https://www.computeexpresslink.org/post/cxl-consortium-and-gen-z-consortium-mou-update-a-path-to-protocol | title=CXL™ Consortium and Gen-Z Consortium™ MoU Update: A Path to Protocol | date=24 June 2021 | access-date=18 January 2022 | archive-date=18 January 2022 | archive-url=https://web.archive.org/web/20220118203922/https://www.computeexpresslink.org/post/cxl-consortium-and-gen-z-consortium-mou-update-a-path-to-protocol | url-status=dead }}&amp;lt;/ref&amp;gt; On November 10, 2021, Gen-Z specifications and assets were transferred to CXL, to focus on developing a single industry standard.&amp;lt;ref&amp;gt;{{Cite web|url=https://www.computeexpresslink.org/post/exploring-the-future-cxl-consortium-gen-z-consortium|title=Exploring the Future|first=C. X. 
L.|last=Consortium|date=November 10, 2021|website=Compute Express Link|access-date=December 1, 2021|archive-date=December 1, 2021|archive-url=https://web.archive.org/web/20211201212922/https://www.computeexpresslink.org/post/exploring-the-future-cxl-consortium-gen-z-consortium|url-status=dead}}&amp;lt;/ref&amp;gt; At the time of this announcement, 70% of Gen-Z members had already joined the CXL Consortium.&amp;lt;ref name=eetimes_CXL_absorbs_Gen-Z&amp;gt;{{cite web | url=https://www.eetimes.com/cxl-will-absorb-gen-z/ | title=CXL Will Absorb Gen-Z | date=9 December 2021 }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On August 1, 2022, [[OpenCAPI]] specifications and assets were transferred to the CXL Consortium,&amp;lt;ref&amp;gt;[https://web.archive.org/web/20220801224008/https://www.anandtech.com/show/17519/opencapi-to-fold-into-cxl OpenCAPI to Fold into CXL - CXL Set to Become Dominant CPU Interconnect Standard]&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;[https://web.archive.org/web/20220802125039/https://www.computeexpresslink.org/_files/ugd/0c1418_d3474155dc6e4929aa2a5658a894d1a6.pdf CXL Consortium and OpenCAPI Consortium Sign Letter of Intent to Transfer OpenCAPI Specifications to CXL]&amp;lt;/ref&amp;gt; which now includes companies behind memory-coherent interconnect technologies such as the OpenCAPI (IBM), Gen-Z (HPE), and CCIX (Xilinx) open standards, and the proprietary [[InfiniBand]] / [[RDMA over Converged Ethernet|RoCE]] (Mellanox), [[Infinity Fabric]] (AMD), [[Omni-Path]] and [[Intel QuickPath Interconnect|QuickPath]]/[[Intel Ultra Path Interconnect|Ultra Path]] (Intel), and [[NVLink|NVLink/NVSwitch]] (Nvidia) protocols.&amp;lt;ref name=Nextplatform_CXL_Gen-Z&amp;gt;{{Cite web|url=https://www.nextplatform.com/2021/11/23/finally-a-coherent-interconnect-strategy-cxl-absorbs-gen-z/|title=Finally, A Coherent Interconnect Strategy: CXL Absorbs Gen-Z|first=Timothy Prickett|last=Morgan|date=November 23, 2021|website=The Next Platform}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Specifications ===&lt;br /&gt;
On March 11, 2019, the CXL Specification 1.0, based on PCIe 5.0, was released.&amp;lt;ref name=CXL_1_0&amp;gt;{{Cite web |url=https://www.anandtech.com/show/14068/cxl-specification-1-released-new-industry-high-speed-interconnect-from-intel |archive-url=https://web.archive.org/web/20190311140105/https://www.anandtech.com/show/14068/cxl-specification-1-released-new-industry-high-speed-interconnect-from-intel |url-status=dead |archive-date=March 11, 2019 |title=CXL Specification 1.0 Released: New Industry High-Speed Interconnect From Intel |last=Cutress |first=Ian |website=Anandtech |access-date=2019-08-09}}&amp;lt;/ref&amp;gt; It allows the host CPU to access [[shared memory]] on accelerator devices with a cache-coherent protocol. The CXL Specification 1.1 was released in June 2019.&lt;br /&gt;
&lt;br /&gt;
On November 10, 2020, the CXL Specification 2.0 was released. The new version adds support for CXL switching, allowing multiple CXL 1.x and 2.0 devices to be connected to a CXL 2.0 host processor and each device to be pooled among multiple host processors, in [[distributed shared memory]] and [[disaggregated storage]] configurations; it also implements device integrity and data encryption.&amp;lt;ref name=Rambus_CXL_blog/&amp;gt; There is no bandwidth increase over CXL 1.x, because CXL 2.0 still uses the PCIe 5.0 PHY.&lt;br /&gt;
&lt;br /&gt;
On August 2, 2022, the CXL Specification 3.0 was released, based on the PCIe 6.0 physical interface and PAM-4 coding with double the bandwidth; new features include fabric capabilities with multi-level switching and multiple device types per port, and enhanced coherency with peer-to-peer DMA and memory sharing.&amp;lt;ref name=cxl3-anandtech&amp;gt;{{cite web | url=https://www.anandtech.com/show/17520/compute-express-link-cxl-30-announced-doubled-speeds-and-flexible-fabrics | archive-url=https://web.archive.org/web/20220802131029/https://www.anandtech.com/show/17520/compute-express-link-cxl-30-announced-doubled-speeds-and-flexible-fabrics | url-status=dead | archive-date=August 2, 2022 | title=Compute Express Link (CXL) 3.0 Announced: Doubled Speeds and Flexible Fabrics }}&amp;lt;/ref&amp;gt;&amp;lt;ref name=cxl3-tomshardware&amp;gt;{{cite web | url=https://www.tomshardware.com/news/cxl-30-debuts-one-cpu-interconnect-to-rule-them-all | title=Compute Express Link (CXL) 3.0 Debuts, Wins CPU Interconnect Wars | date=2 August 2022 }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On November 14, 2023, the CXL Specification 3.1 was released.&lt;br /&gt;
&lt;br /&gt;
On December 3, 2024, the CXL Specification 3.2 was released.&lt;br /&gt;
&lt;br /&gt;
On November 18, 2025, the CXL Specification 4.0 was released.&lt;br /&gt;
&lt;br /&gt;
=== Implementations ===&lt;br /&gt;
On April 2, 2019, [[Intel]] announced their family of [[Intel FPGAs|Agilex FPGAs]] featuring CXL.&amp;lt;ref&amp;gt;{{Cite web |url=https://blogs.intel.com/psg/how-do-the-new-intel-agilex-fpga-family-and-the-cxl-coherent-interconnect-fabric-intersect/ |title=How do the new Intel Agilex FPGA family and the CXL coherent interconnect fabric intersect? |date=2019-05-03 |website=PSG@Intel |language=en-US |access-date=2019-08-09}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On May 11, 2021, [[Samsung]] announced a 128&amp;amp;nbsp;GB DDR5-based memory expansion module that allows for terabyte-level memory expansion along with high performance for use in data centers and potentially next-generation PCs.&amp;lt;ref&amp;gt;{{cite web |url=https://news.samsung.com/global/samsung-unveils-industry-first-memory-module-incorporating-new-cxl-interconnect-standard |title=Samsung Unveils Industry-First Memory Module Incorporating New CXL Interconnect Standard |date=2021-05-11 |website=Samsung |language=en-US |access-date=2021-05-11}}&amp;lt;/ref&amp;gt; An updated 512&amp;amp;nbsp;GB version based on a proprietary memory controller was released on May 10, 2022.&amp;lt;ref name=samsung512gb&amp;gt;{{cite web | url=https://news.samsung.com/global/samsung-electronics-introduces-industrys-first-512gb-cxl-memory-module | title=Samsung Electronics Introduces Industry&amp;#039;s First 512GB CXL Memory Module }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In 2021, CXL 1.1 support was announced for Intel [[Sapphire Rapids (microprocessor)|Sapphire Rapids]] processors&amp;lt;ref name=Intel_AD2021_press&amp;gt;{{Cite web|url=https://www.intel.com/content/www/us/en/newsroom/resources/press-kit-architecture-day-2021.html|title=Intel Architecture Day 2021|website=Intel |date=31 December 2021 }}&amp;lt;/ref&amp;gt; and AMD [[Zen 4]] [[EPYC]] &amp;quot;Genoa&amp;quot; and &amp;quot;Bergamo&amp;quot; processors.&amp;lt;ref name=AMD&amp;gt;{{Cite web|url=https://www.tomshardware.com/news/amd-unveils-zen-4-cpu-roadmap-96-core-5nm-genoa-128-core-begamo|title=AMD Unveils Zen 4 CPU Roadmap: 96-Core 5nm Genoa in 2022, 128-Core Bergamo in 2023|author1=Paul Alcorn|date=November 8, 2021|website=Tom&amp;#039;s Hardware}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CXL devices were shown at the [[ACM/IEEE Supercomputing Conference]] (SC21) by vendors including Intel,&amp;lt;ref name=Intel_SC21&amp;gt;{{Cite web |url= https://www.servethehome.com/intel-sapphire-rapids-cxl-emmitsburg-pch-sc21-astera-labs-synopsys/ |title= Intel Sapphire Rapids CXL with Emmitsburg PCH Shown at SC21 |date= December 7, 2021 |author= Patrick Kennedy |work= Serve the Home |access-date= November 18, 2022 }}&amp;lt;/ref&amp;gt; Astera, Rambus, Synopsys, Samsung, and [[LeCroy Corporation|Teledyne LeCroy]].&amp;lt;ref name=CXL_paces&amp;gt;{{cite web | url=https://www.eetimes.com/cxl-put-through-its-paces/ | title=CXL Put Through Its Paces | date= December 10, 2021 }}&amp;lt;/ref&amp;gt;&amp;lt;ref name=CXL_SC21&amp;gt;{{Cite web|url=https://www.hpcwire.com/off-the-wire/cxl-consortium-showcases-first-public-demonstrations-of-compute-express-link-technology-at-sc21/|title=CXL Consortium Showcases First Public Demonstrations of Compute Express Link Technology at SC21|website=HPCwire}}&amp;lt;/ref&amp;gt;&amp;lt;ref name=CXL_SC21_splash&amp;gt;{{Cite web|url=https://www.computeexpresslink.org/post/cxl-consortium-makes-a-splash-at-supercomputing-2021-sc21|title=CXL Consortium Makes a Splash at Supercomputing 2021 (SC21)|first=C. X. L.|last=Consortium|date=December 16, 2021|website=Compute Express Link|access-date=January 13, 2022|archive-date=January 13, 2022|archive-url=https://web.archive.org/web/20220113232917/https://www.computeexpresslink.org/post/cxl-consortium-makes-a-splash-at-supercomputing-2021-sc21|url-status=dead}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Protocols ==&lt;br /&gt;
The CXL transaction layer is composed of three &amp;#039;&amp;#039;sub-protocols&amp;#039;&amp;#039;, dynamically multiplexed (the mix changes according to demand) on a single link:&amp;lt;ref&amp;gt;{{Cite web |date=2019-09-23 |title=Introduction to Compute Express Link (CXL): The CPU-To-Device Interconnect Breakthrough - Compute Express Link |url=https://computeexpresslink.org/blog/introduction-to-compute-express-link-cxl-the-cpu-to-device-interconnect-breakthrough-2313/ |access-date=2024-07-16 |website=computeexpresslink.org |language=en-US}}&amp;lt;/ref&amp;gt;&amp;lt;ref name=Synopsys_CXL_bulletin&amp;gt;{{Cite web|url=https://www.synopsys.com/designware-ip/technical-bulletin/compute-express-link-standard-2019q3.html|title=Compute Express Link Standard &amp;amp;#124; DesignWare IP &amp;amp;#124; Synopsys|website=www.synopsys.com}}&amp;lt;/ref&amp;gt;&amp;lt;ref name=Rambus_CXL_blog&amp;gt;{{Cite web|url=https://www.rambus.com/blogs/compute-express-link/|title=Compute Express Link (CXL): All you need to know|website=Rambus}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;CXL.io&amp;#039;&amp;#039;&amp;#039; – based on PCIe 5.0 (and PCIe 6.0 after CXL 3.0) with a few enhancements; it provides configuration, link initialization and management, device discovery and enumeration, interrupts, DMA, and register I/O access using non-coherent loads/stores.&amp;lt;ref name=&amp;quot;:0&amp;quot;&amp;gt;{{Cite AV media |url=https://www.youtube.com/watch?v=HPpQLGIxZWM |title=Introduction to Compute Express Link™ (CXL™) Technology |date=2021-04-02 |last=CXL Consortium |access-date=2024-07-16 |via=YouTube}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;CXL.cache&amp;#039;&amp;#039;&amp;#039; – defines interactions between a host and a device,&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt; allowing peripheral devices to coherently access and cache host CPU memory with a low-latency request/response interface.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;CXL.mem&amp;#039;&amp;#039;&amp;#039; – allows the host CPU to coherently access device-attached memory with load/store commands, for both volatile (RAM) and persistent non-volatile (flash memory) storage.&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt;&lt;br /&gt;
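The division of labor among the three sub-protocols can be sketched as a simple lookup. This is purely illustrative; the operation names and the function are hypothetical labels, not spec terminology:

```python
# Illustrative sketch of which CXL sub-protocol carries which traffic.
# Operation names are hypothetical labels, not taken from the CXL spec.
SUBPROTOCOL_FOR = {
    "config_access":            "CXL.io",     # discovery, enumeration, register I/O
    "dma":                      "CXL.io",     # non-coherent bulk transfers
    "interrupt":                "CXL.io",
    "device_reads_host_memory": "CXL.cache",  # device coherently caches host memory
    "host_reads_device_memory": "CXL.mem",    # host load/store to device memory
}

def route(operation):
    """Return the sub-protocol a given operation travels on."""
    return SUBPROTOCOL_FOR[operation]

print(route("dma"))                       # CXL.io
print(route("host_reads_device_memory"))  # CXL.mem
```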
&lt;br /&gt;
The CXL.cache and CXL.mem protocols operate with a common link/transaction layer, which is separate from the CXL.io protocol link and transaction layer. These protocols/layers are multiplexed together by an Arbitration and Multiplexing (ARB/MUX) block before being transported over the standard PCIe 5.0 PHY using a fixed-width 528-bit (66-byte) [[Flit (computer networking)|Flow Control Unit]] (FLIT) block consisting of four 16-byte data &amp;#039;slots&amp;#039; and a two-byte [[cyclic redundancy check]] (CRC) value.&amp;lt;ref name=Synopsys_CXL_bulletin/&amp;gt; CXL FLITs encapsulate PCIe standard Transaction Layer Packet (TLP) and Data Link Layer Packet (DLLP) data with a variable frame size format.&amp;lt;ref name=CXL_FMS2019_slides&amp;gt;{{Cite web|url=https://www.computeexpresslink.org/post/introduction-to-compute-express-link-cxl-the-cpu-to-device-interconnect-breakthrough|title=Introduction to Compute Express Link (CXL): The CPU-To-Device Interconnect Breakthrough|first=C. X. L.|last=Consortium|date=September 23, 2019|website=Compute Express Link|access-date=January 13, 2022|archive-date=March 20, 2022|archive-url=https://web.archive.org/web/20220320080130/https://www.computeexpresslink.org/post/introduction-to-compute-express-link-cxl-the-cpu-to-device-interconnect-breakthrough|url-status=dead}}&amp;lt;/ref&amp;gt;&amp;lt;ref name=FMS2019_Lender&amp;gt;https://www.flashmemorysummit.com/Proceedings2019/08-07-Wednesday/20190807_CTRL-202A-1_Lender.pdf {{Bare URL PDF|date=March 2022}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CXL 3.0 introduces a 256-byte FLIT in PAM-4 transfer mode.&lt;br /&gt;
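The flit arithmetic described above can be checked directly: four 16-byte slots plus a 2-byte CRC give the fixed 528-bit (66-byte) flit. A minimal sketch:

```python
# Arithmetic for the CXL 1.x/2.0 flit layout described above:
# four 16-byte data slots plus a 2-byte CRC give a fixed 528-bit flit.
SLOT_BYTES = 16
NUM_SLOTS = 4
CRC_BYTES = 2

flit_bytes = NUM_SLOTS * SLOT_BYTES + CRC_BYTES   # 66 bytes
flit_bits = flit_bytes * 8                        # 528 bits
assert flit_bytes == 66 and flit_bits == 528

# CXL 3.0 (PAM-4 mode) moves to a 256-byte flit, i.e. roughly
# four times more payload per flit.
CXL3_FLIT_BYTES = 256
print(flit_bits, CXL3_FLIT_BYTES)   # 528 256
```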
&lt;br /&gt;
== Device types ==&lt;br /&gt;
CXL is designed to support three primary device types:&amp;lt;ref name=Rambus_CXL_blog/&amp;gt;&lt;br /&gt;
* Type 1 (CXL.io and CXL.cache) – specialized accelerators (such as smart [[network interface card|NIC]]s, PGAS NICs, and NIC Atomics) with no local memory; these devices rely on coherent access to host CPU memory.&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt;&lt;br /&gt;
* Type 2 (CXL.io, CXL.cache and CXL.mem) – general-purpose accelerators ([[graphics processing unit|GPU]], [[ASIC]] or [[FPGA]]) with high-performance [[GDDR]] or [[High Bandwidth Memory|HBM]] local memory; these devices can coherently access the host CPU&amp;#039;s memory and/or provide coherent or non-coherent access to device-local memory from the host CPU.&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt;&lt;br /&gt;
* Type 3 (CXL.io and CXL.mem) – memory expansion boards and persistent memory; these devices allow the host to access and manage attached device memory, providing the host CPU with low-latency access to local DRAM or byte-addressable non-volatile storage.&amp;lt;ref name=&amp;quot;:0&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Type 2 devices implement two memory coherence modes, managed by the device driver. In device bias mode, the device accesses its local memory directly and no caching is performed by the CPU; in host bias mode, the host CPU&amp;#039;s cache controller handles all access to device memory. The coherence mode can be set individually for each 4&amp;amp;nbsp;KB page and is stored in a translation table in the local memory of the Type 2 device. Unlike other CPU-to-CPU memory coherency protocols, this arrangement requires only the host CPU memory controller to implement the cache agent; such an asymmetric approach reduces implementation complexity and latency.&amp;lt;ref name=Synopsys_CXL_bulletin/&amp;gt;&lt;br /&gt;
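The per-page bias bookkeeping described above can be sketched as a tiny lookup table, assuming 4 KB pages. The class and method names are hypothetical and only illustrate the idea; in a real Type 2 device the table lives in device-local memory and is managed by the driver:

```python
# Minimal sketch of a per-page bias table for a Type 2 device,
# assuming the 4 KB page granularity described above.
# Names here are hypothetical, for illustration only.
PAGE_SIZE = 4096

class BiasTable:
    def __init__(self):
        self.bias = {}          # page number -> "device" or "host"

    def set_bias(self, addr, mode):
        assert mode in ("device", "host")
        self.bias[addr // PAGE_SIZE] = mode

    def lookup(self, addr):
        # Default to host bias: the host CPU's cache controller
        # mediates access until the driver flips the page.
        return self.bias.get(addr // PAGE_SIZE, "host")

t = BiasTable()
t.set_bias(0x2000, "device")
print(t.lookup(0x2abc))   # device  (same 4 KB page as 0x2000)
print(t.lookup(0x1000))   # host    (default)
```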
&lt;br /&gt;
CXL 2.0 added support for switching in tree-based device fabrics, allowing PCIe, CXL 1.1 and CXL 2.0 devices to form virtual hierarchies of single- and multi-logical devices that can be managed by multiple hosts.&amp;lt;ref name=cxl_1_1_difference&amp;gt;{{Cite web |title= CXL 1.1 vs CXL 2.0 – What&amp;#039;s the difference? |author= Danny Volkind and Elad Shlisberg |publisher= UnifabriX |date= June 15, 2022 |url= https://www.computeexpresslink.org/_files/ugd/0c1418_74c3afe48bf340cdbe59af75a88f2370.pdf |access-date= November 18, 2022 |archive-date= December 26, 2022 |archive-url= https://web.archive.org/web/20221226103240/https://www.computeexpresslink.org/_files/ugd/0c1418_74c3afe48bf340cdbe59af75a88f2370.pdf |url-status= dead }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CXL 3.0 replaced bias modes with enhanced coherency semantics, allowing Type 2 and Type 3 devices to back-invalidate data in the host cache when the device has changed its local memory. Enhanced coherency also helps implement peer-to-peer transfers within a virtual hierarchy of devices in the same coherency domain. It also supports sharing of the same memory segment between multiple devices, as opposed to memory pooling, where each device is assigned a separate segment.&amp;lt;ref name=cxl3_white_paper&amp;gt;https://www.computeexpresslink.org/_files/ugd/0c1418_a8713008916044ae9604405d10a7773b.pdf {{Webarchive|url=https://web.archive.org/web/20220808095033/https://www.computeexpresslink.org/_files/ugd/0c1418_a8713008916044ae9604405d10a7773b.pdf |date=2022-08-08 }} {{Bare URL PDF|date=August 2022}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CXL 3.0 allows multiple Type 1 and Type 2 devices per CXL root port; it also adds multi-level switching, helping implement device fabrics with non-tree topologies such as mesh, ring, or spine/leaf. Each node can be a host or a device of any type. Type 3 devices can implement Global Fabric Attached Memory (GFAM) mode, which connects a memory device to a switch node without requiring a direct host connection. Devices and hosts use a Port Based Routing (PBR) addressing mechanism that supports up to 4,096 nodes.&amp;lt;ref name=cxl3_white_paper/&amp;gt;&lt;br /&gt;
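The 4,096-node limit corresponds to a 12-bit node identifier. A small sketch of packing such an ID into a routing word; the field layout here is an assumption for illustration, not the spec's actual flit format:

```python
# Port Based Routing (PBR) supports up to 4,096 nodes, which fits a
# 12-bit identifier (2**12 == 4096). The pack/unpack layout below is a
# hypothetical illustration, not the spec's actual header format.
PBR_ID_BITS = 12
MAX_NODES = 2 ** PBR_ID_BITS
assert MAX_NODES == 4096

def pack(dest_id, payload_tag):
    # dest_id must fit in the low 12 bits of the word
    assert 0 == dest_id // MAX_NODES
    return payload_tag * MAX_NODES + dest_id

def unpack(word):
    return word % MAX_NODES, word // MAX_NODES

word = pack(dest_id=2049, payload_tag=7)
print(unpack(word))   # (2049, 7)
```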
&lt;br /&gt;
== Devices ==&lt;br /&gt;
In May 2022, the first 512&amp;amp;nbsp;GB devices became available, offering four times the capacity of previous devices.&amp;lt;ref&amp;gt;{{cite press release |url=https://news.samsung.com/global/samsung-electronics-introduces-industrys-first-512gb-cxl-memory-module |title=Samsung Electronics Introduces Industry&amp;#039;s First 512GB CXL Memory Module |publisher=Samsung |date=May 10, 2022}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== OS support ==&lt;br /&gt;
* [[Windows Server 2022]] added native support for CXL memory devices.&amp;lt;ref&amp;gt;https://files.futurememorystorage.com/proceedings/2024/20240807_CXLT-201-1_Mills.pdf {{Bare URL PDF|date=July 2025}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
* [[Linux kernel]] 6.5 added support for CXL.&lt;br /&gt;
&lt;br /&gt;
== Latency ==&lt;br /&gt;
CXL memory controllers typically add about 200&amp;amp;nbsp;ns of latency.&amp;lt;ref&amp;gt;{{Cite web |last=Mann |first=Tobias |date=2022-12-05 |title=Just How Bad Is CXL Memory Latency? |url=https://www.nextplatform.com/2022/12/05/just-how-bad-is-cxl-memory-latency/ |access-date=2023-02-03 |website=The Next Platform |language=en-US}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Coherent Accelerator Processor Interface]] (CAPI)&lt;br /&gt;
* [[UCIe|Universal Chiplet Interconnect Express]] (UCIe)&lt;br /&gt;
* [[Data processing unit]] (DPU)&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
{{Reflist}}&lt;br /&gt;
&lt;br /&gt;
== External links ==&lt;br /&gt;
* {{Official website|www.computeexpresslink.org}}&lt;br /&gt;
&lt;br /&gt;
{{Computer bus}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Computer-related introductions in 2019]]&lt;br /&gt;
[[Category:Peripheral Component Interconnect]]&lt;br /&gt;
[[Category:Serial buses]]&lt;br /&gt;
[[Category:Motherboard expansion slot]]&lt;/div&gt;</summary>
		<author><name>RS-485</name></author>
	</entry>
</feed>