Susquehanna Financial Group (SFG) analyst Christopher Rolland claims Intel has made the first proof-of-concept chip with on-die silicon photonics technology. The chip giant and other companies are already using photonics to speed up networking components, but this new implementation could be a real game changer.
The analyst explains that chip-scale photonics could be one of the most important technologies of our generation, writing that it has the potential to mitigate the slowdown of Moore's Law. Intel's prototype reportedly features a super high-speed optical interconnect between a Xeon server processor and an Altera FPGA, but commercialization is still three to five years away.
The technology could be the prime reason why Intel paid roughly $16.7 billion to acquire Altera:
Chip-scale photonics could help explain Intel’s exuberance around Altera. By controlling the high-speed interconnect, you can make best-in-class CPU/FPGA combinations proprietary to Intel. Potentially, as a condition for the Altera/Intel merger, we believe that Xilinx may have asked for universal access to Intel’s QPI (QuickPath Interconnect, the current electrical interconnect between a CPU and FPGA) and we expect this to carry over to their UPI interconnect (in Purley). But this interconnect is something totally different, much faster, and proprietary to Intel.
So what can you achieve with chip-scale photonics? Rolland speculates it could be the next big step in improving the performance of computer chips. Making processors faster by pushing clock speeds higher hit a wall over a decade ago; chip makers found ways around this, but chip design is now limited by the power envelope and the number of cores that can fit on a reasonably sized chip.
The new technology could revolutionize chip design by making it possible to create what Rolland refers to as a "macro-chip". Once you have a super-fast, low-power interconnect, you can connect several discrete components loosely together to create a very fast multi-chip module.
Currently, we are limited by: 1) the power envelope, and 2) the number of cores we can put on a reasonably sized chip. But if we have a super-fast, low power interconnect that rivaled on-die performance, we could build a “macro-chip” by connecting a bunch of discrete connected components together. For example, you could disaggregate L1 memory caches from the main die but make them quickly accessible through the interconnect. Beyond memory, these disaggregated components could include GPUs, CPUs, FPGAs, Xeon Phi, ASICs, etc. Below, you will find a diagram demonstrating the use of photonics to build a macrochip (sometimes referred to as a multichip module).
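The "rivaled on-die performance" condition can be sanity-checked with a back-of-envelope latency calculation. The sketch below is illustrative only: the 5 cm waveguide length, the group index of 4 for a silicon waveguide, and the 4-cycle L1 hit at 3 GHz are assumptions, not figures from the note.

```python
# Back-of-envelope check: can a photonic hop between dies rival
# on-die access latency? All numbers here are assumed, not sourced.
C = 3.0e8            # speed of light in vacuum, m/s
GROUP_INDEX = 4.0    # assumed group index of a silicon waveguide
DISTANCE_M = 0.05    # assumed 5 cm die-to-die waveguide

def propagation_delay_ns(distance_m, group_index=GROUP_INDEX):
    """Time for a light pulse to traverse the waveguide, in nanoseconds."""
    return distance_m * group_index / C * 1e9

photonic_ns = propagation_delay_ns(DISTANCE_M)
# Rough on-die reference point: a ~4-cycle L1 cache hit at 3 GHz
l1_ns = 4 / 3.0e9 * 1e9

print(f"photonic hop: {photonic_ns:.2f} ns, L1 cache hit: {l1_ns:.2f} ns")
# → photonic hop: 0.67 ns, L1 cache hit: 1.33 ns
```

Under these assumptions, raw propagation delay across a package-scale waveguide is comparable to an L1 hit, which is what makes the disaggregated "macro-chip" idea plausible; in practice, serialization, modulation, and detection overheads would add to the total.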
Rolland also offers some speculation on how the technology works:
To start, Intel grows a layer of indium phosphide on top of their wafers (the lasers won’t emit without this layer). They then manufacture both their Xeon server CPUs (and Altera FPGAs) in the traditional bulk CMOS process. Next, they design 64 (one for each bit) laser drivers into each die. We suspect these drivers are “in-plane lasers” (meaning the laser shoots from the edge of the chip), but acknowledge they could be VCSELs (Vertical Cavity Surface-Emitting Lasers – meaning the laser shoots out the top surface of the chip). Laser pulses then travel across a waveguide, which acts like a miniature fiber-optic cable, to a matching photonic detector designed into the FPGA die where the transmission is read. We are unsure about the exact architecture of the waveguides (Are they strands of fiber or channels? What materials? Do they protrude from just one edge of the die or all around it?).
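The 64-lane, one-laser-per-bit layout the note describes implies a simple aggregate-bandwidth calculation. The lane count comes from the note; the 25 Gb/s per-lane signaling rate below is purely an assumption for illustration.

```python
# Illustrative throughput arithmetic for a 64-lane optical link.
# LANES matches the analyst's description; the per-lane rate is assumed.
LANES = 64
GBPS_PER_LANE = 25.0   # assumed per-lane signaling rate, Gb/s

total_gbps = LANES * GBPS_PER_LANE
total_gbytes = total_gbps / 8   # convert bits/s to bytes/s

print(f"{total_gbps:.0f} Gb/s aggregate ({total_gbytes:.0f} GB/s)")
# → 1600 Gb/s aggregate (200 GB/s)
```

Even at a modest assumed lane rate, a parallel optical link of this width would comfortably exceed the bandwidth of contemporary electrical CPU-to-FPGA interconnects, which is the point of the design.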