theregister.com

Lightmatter says it's ready to ship chip-to-chip optical highways as early as summer

Lightmatter this week unveiled a pair of silicon photonic interconnects designed to satiate the growing demand for chip-to-chip bandwidth associated with ever-denser AI deployments.

The first of these is an optical interposer called the Passage M1000, which the California biz expects to begin shipping later this summer. It targets XPUs — think GPUs or AI accelerators — or extremely high-bandwidth multi-die switches with on the order of petabits per second of capacity. The tech, which pipes data directly in and out of chips using light, was talked up by Lightmatter at the Optical Fiber Communication Conference (OFC), running this week in San Francisco.

Here's a rendering of what the Passage M1000 chip looks like, according to Lightmatter

If any of this sounds familiar, that's because Lightmatter is one of many looking to photonics to overcome power and bandwidth limitations. At Nvidia's GPU Technology Conference, aka GTC, last month, the GPU giant revealed a pair of photonic switches designed to cut down on the number of transceivers necessary to build out large AI clusters. Intel, Broadcom, and Ayar Labs have also demonstrated co-packaged optical I/O functionality with a variety of CPUs and XPUs.

Here's an exploded view of how Lightmatter's Passage interposer might be integrated into an XPU design. Lightmatter's Passage is the layer in the middle; the top layer is the compute logic; and the lower layer is the substrate and socket. The blue pulses represent electrical signals between the layers. Chip-to-chip light is beamed in and out of the package from the interposer. Source: Lightmatter

What sets Lightmatter's M1000 apart from the rest of the pack is that it's designed to function as an interposer that sits between the compute logic and the substrate. Multiple ASICs or GPU dies can be stacked on top of the Passage tile. All of these layers communicate electrically, and from the interposer, traffic bound for other chip packages is carried optically over a dense network of waveguides. Traffic headed off the package is routed over any of the 256 fiber-optic attach points that line its edge.

One of the biggest advantages of this approach, Lightmatter says, is that communications between chips aren't limited to the so-called beachfront — the limited die edge available for I/O in a conventional processor package. Instead, with interposer designs, data can move vertically over the entire surface area of the chip, resulting in greater aggregate bandwidth.

For its first interposer, Lightmatter is sticking with a combination of 56 Gb/s NRZ modulation and wavelength-division multiplexing with support for eight wavelengths per fiber, which conveniently works out to 56 GB/s of bandwidth per fiber.

In total, Lightmatter claims each M1000 tile can support up to 14.25 TB/s of aggregate bandwidth.
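As a sanity check, the per-lane rate, wavelength count, and fiber count quoted above roughly reproduce the aggregate figure. The arithmetic below is our own back-of-the-envelope math, not Lightmatter's; the small gap to the claimed 14.25 TB/s presumably reflects encoding or other overhead.

```python
# Back-of-the-envelope check of the M1000 figures quoted in the article.
lane_gbps = 56     # 56 Gb/s NRZ per wavelength
wavelengths = 8    # eight wavelengths per fiber (WDM)
fibers = 256       # fiber-optic attach points on the tile's edge

per_fiber_gbps = lane_gbps * wavelengths    # 448 Gb/s per fiber
per_fiber_gBps = per_fiber_gbps / 8         # 56 GB/s per fiber, as stated

total_tBps = per_fiber_gBps * fibers / 1000  # ~14.3 TB/s aggregate
print(f"{per_fiber_gBps:.0f} GB/s per fiber, ~{total_tBps:.2f} TB/s total")
```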

Following the debut of the M1000 interposer later this year, Lightmatter plans to bring a pair of smaller co-packaged optical designs to market in 2026.

The Passage L200 and L200X are designed to fill the role of more traditional co-packaged optics and promise either 32 Tb/s or 64 Tb/s of bidirectional bandwidth, respectively. For comparison, Ayar Labs' next-gen photonics chips, which we looked at last year, boast up to 8 Tb/s.

Lightmatter's smaller L200 and L200X co-packaged optical chiplets promise 32 Tb/s and 64 Tb/s of optical I/O when they arrive in 2026

From what we gather, the main difference between the L200 and L200X is that the former uses 56 Gb/s NRZ SerDes while the latter uses 112 Gb/s PAM4.
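That doubling falls out of the modulation schemes: NRZ encodes one bit per symbol, while PAM4 uses four voltage levels to encode two. Assuming the same symbol rate for both — an illustration on our part, not a detail Lightmatter has confirmed — the 2x jump matches the 32 Tb/s versus 64 Tb/s spread:

```python
# NRZ carries 1 bit per symbol; PAM4 carries 2 (four amplitude levels).
# Assuming an identical 56 GBaud symbol rate for both parts.
symbol_rate_gbaud = 56

nrz_gbps = symbol_rate_gbaud * 1    # 56 Gb/s per lane  (L200)
pam4_gbps = symbol_rate_gbaud * 2   # 112 Gb/s per lane (L200X)

print(pam4_gbps / nrz_gbps)  # 2x per-lane throughput at the same baud rate
```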

Like the M1000, Lightmatter's L200 series uses the same bandwidth-boosting 3D-packaging approach, and multiple chiplet stacks can be combined to support off-package communications at speeds of more than 200 Tb/s.

According to Lightmatter, these chips incorporate a variety of technologies from Alphawave Semi, including "low-power and low-latency UCIe and optics-ready SerDes."

If you're not familiar, UCIe is an emerging interconnect standard, not unlike PCIe or CXL, designed to enable chiplets from multiple vendors to communicate with one another using a common language. ®

Bootnote

Speaking of photonic interconnects: The US Defense Advanced Research Projects Agency, aka DARPA, on Tuesday awarded AI chip startup Cerebras Systems and Canadian co-packaged optics vendor Ranovus a $45 million contract.

Under the contract, Cerebras will integrate Ranovus' CPO tech into its wafer-scale compute platform in order to support "real-time, high-fidelity simulations" and "large-scale AI workloads," Cerebras CEO Andrew Feldman said in a statement.
