Optical Processors Light the Path to Warp-Factor Computing

Computing at the speed of light has been a long time coming, but a new generation of optical processors promises to be faster – and cooler – than electronic incumbents.

Data has been sent across wide-area networks as light pulses for decades, but optical (or photonic) computing has been slow to meet the challenges of moving data in the form of light at the processor level: photons have proved profoundly trickier to traffic than electrons. And while conventional data processing continued to get faster year after year, there seemed scant incentive for technologists to crack the optical conundrum.

Nowadays, however, it’s roundly acknowledged that compute performance gains from conventional processor architectures have hit an impasse. Worse, physical limits on the number of cores that can be crammed onto a conventional IC are being reached just as advanced applications in AI and quantum computing need more – much more – compute power to pay their way.

It’s possible, of course, to bend Moore’s Law to an extent with parallel processing, which divides computational tasks and runs them simultaneously across multiple microprocessors. But parallel processing has its limits too, and can introduce extra layers of complexity to already highly complex workloads.

Furthermore, hard-pressed conventional, core-dense computing consumes enormous amounts of power and generates huge quantities of waste heat. So when analysts such as ReportLinker declare that optical processors could reduce the power consumed in some critical applications by at least 50 per cent, the IT industry is bound to take notice. If that figure proves achievable, optical-processor-based platforms would be highly attractive to data centre operators keen to offer private cloud customers the ultimate in compute power – and cleaner compute power, too.

All good reasons, then, for the recent resurgence in all-optical computing innovation coming from academic researchers, industry incumbents and wannabe start-ups – each convinced that optical processing will have a major transformative impact on IT over the coming half-decade. The market opportunities are surely there, reckon the forecasters: according to analyst ReportLinker again (December 2020), the global photonic processor market is projected to be worth $20.13m (£14.29m) by 2025 – a compound annual growth rate (CAGR) of 28.3 per cent over the forecast period.

While performance gains are often cited as the main driver for optical processing, mitigation of power consumption and heat is also a crucial factor, argue optical advocates. The saturation of Dennard Scaling – the ‘law’ that held that as transistors got smaller, their power density stayed roughly constant – has caused current-generation high-performance electronic ICs to hit cooling limits, says Nick Harris, CEO at Lightmatter, a start-up that announced its AI photonic processor in August 2020, followed two months later by Lightmatter Passage – a wafer-scale, programmable photonic interconnect that allows arrays of heterogeneous chips to communicate.

“To continue to energise the growth in computing required to develop and execute state-of-art neural network models, AI applications require increased compute speed with energy efficiency,” Harris says. Optical computing is now the “only solution out there able to support the pace of AI innovation required to deliver on the investments being made”.

At optical co-processor specialist Optalysys, CEO and founder Dr Nick New agrees. He cites the example of OpenAI’s Generative Pre-trained Transformer 3 (GPT-3), an autoregressive language model that uses deep learning to produce human-like text.

“GPT-3 has 175 billion parameters – more than two orders of magnitude more than its predecessor, GPT-2,” New explains. “Running models like this on dedicated hardware, such as Google’s TPU 3.0 AI ASIC, involves water cooling across each of its [285,000] 300W CPU cores. Without the adoption of fundamentally different data processing methods, we are left with the option of using more cores, more cooling – and ultimately, ever-more electrical power. That’s not viable.”

New believes the IT market must recognise that the relevance of Moore’s Law (the observation that the number of transistors in a dense integrated circuit doubles about every two years) has passed, and that photonic processing calls for different ways of understanding the provisioning of compute power. This is not to suggest, however, that developmental advances in optical/photonic processors will be law-less.

“Optical computing will be governed by multiple ‘laws’, covering not just the physical size of the modulators but also the efficiencies of electro-optical and analogue-digital signal conversion and photodetector sensitivity,” New predicts.

According to Lightmatter’s Harris, optical compute technologies are focused on the process of inference in the context of AI, with two core performance metrics: inferences-per-second (IPS) and inferences-per-second-per-watt (IPSW). He says: “A photonic compute law equivalent to Moore’s Law would state that IPS and IPSW will [henceforth] double every two years – and this indeed is a roadmap target for my team.”

Harris adds: “Optical device dimensions are unlikely to undergo significant reductions in the per-component area, but performance scaling for photonic computers is likely to scale significantly. This is because of the ability to increase the number of colours – i.e. wavelengths – in a photonic computer that can be simultaneously processed by the compute core, as well as clock frequency.”

Optalysys launched its Fourier Engine photonic co-processor, the FT:X 2000, in March 2019. It relies on the Fourier Transform, a mathematical transform that decomposes functions of space or time into functions of spatial or temporal frequency. The term refers both to the frequency-domain representation and to the mathematical operation that associates that representation with a function of space or time.

“Optalysys’s integrated silicon photonic co-processor has optical circuits built on a single piece of silicon,” says the company’s New. “It uses the interference properties of light to perform the Fourier Transform at speeds and power consumption that were previously not possible.”
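
To see why a fast optical Fourier Transform matters for AI workloads, consider the convolution theorem: convolving two arrays in the spatial domain is equivalent to multiplying their transforms point-wise in the frequency domain. The short NumPy sketch below illustrates that mathematics – the operation an optical Fourier engine accelerates – and is purely illustrative, not Optalysys code; the array sizes and function name are assumptions for the example.

```python
import numpy as np

def fft_convolve2d(image, kernel):
    """Circular 2D convolution computed via the Fourier Transform."""
    # Zero-pad the kernel to the image size so the element-wise product is valid.
    padded = np.zeros_like(image)
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel

    # Transform both operands, multiply point-wise, then transform back.
    product = np.fft.fft2(image) * np.fft.fft2(padded)
    return np.real(np.fft.ifft2(product))

image = np.random.rand(256, 256)   # e.g. a feature map
kernel = np.random.rand(3, 3)      # e.g. a learned filter

result = fft_convolve2d(image, kernel)
print(result.shape)  # (256, 256)
```

In an electronic processor the two FFT calls dominate the cost; in an optical Fourier engine the transform happens in a single pass of light through the device, which is where the claimed speed and power advantages come from.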

Lightmatter’s approach is based on what it calls a Programmable Nanophotonic Processor architecture, an optical processor implemented in silicon photonics that performs matrix transformations on light. This relies on a 2D array of Mach-Zehnder interferometers (MZIs) fabricated in a silicon photonics process. A Mach-Zehnder interferometer is a device used to determine the relative phase shift variations between two collimated light beams derived by splitting light from a single source.

“To implement an N-by-N matrix product, our approach requires N² MZIs – the same number of compute elements used by systolic MAC [Multiply-Accumulate] arrays,” Harris explains. “Mathematically, each MZI performs a 2-by-2 matrix-vector product. Together, the whole mesh of MZIs multiplies an N-by-N matrix by an N-element vector. Computation occurs as light travels from the input to the output of the MZI array within the time of flight for the optical signals of about 100 picoseconds – less than a single clock cycle of most computers.”
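
The following sketch is a rough numerical model of that idea, not Lightmatter’s actual architecture: each MZI is treated as a 2×2 unitary built from two 50:50 beamsplitters and two phase shifters, and a mesh of them acting on pairs of neighbouring waveguides composes into a larger matrix applied to the input light amplitudes. The layer layout and phase values below are arbitrary assumptions for illustration.

```python
import numpy as np

# 50:50 beamsplitter acting on two waveguide modes.
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)

def mzi(theta, phi):
    """2x2 unitary for one Mach-Zehnder interferometer:
    beamsplitter, internal phase shift, beamsplitter, input phase shift."""
    internal = np.diag([np.exp(1j * theta), 1.0])
    external = np.diag([np.exp(1j * phi), 1.0])
    return BS @ internal @ BS @ external

def mesh_matrix(n, thetas, phis):
    """Compose a rectangular mesh of MZIs on n waveguides into an n x n matrix.
    Layers alternate between even and odd pairs of neighbouring modes."""
    U = np.eye(n, dtype=complex)
    idx = 0
    for layer in range(n):
        layer_op = np.eye(n, dtype=complex)
        for k in range(layer % 2, n - 1, 2):
            layer_op[k:k + 2, k:k + 2] = mzi(thetas[idx], phis[idx])
            idx += 1
        U = layer_op @ U
    return U

n = 4
rng = np.random.default_rng(0)
num_mzis = sum(len(range(l % 2, n - 1, 2)) for l in range(n))
U = mesh_matrix(n, rng.uniform(0, 2 * np.pi, num_mzis),
                   rng.uniform(0, 2 * np.pi, num_mzis))

x = rng.standard_normal(n)   # input light amplitudes
y = U @ x                    # the mesh applies an n x n transform in one optical pass
print(np.allclose(U.conj().T @ U, np.eye(n)))  # True: the mesh is unitary
```

In hardware, the matrix-vector product emerges as the light propagates through the mesh, which is why the computation completes within the optical time of flight rather than over many clock cycles.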

At the start of this year, an international research team from the Universities of Muenster, Oxford, Exeter and Pittsburgh and the École Polytechnique Fédérale de Lausanne, along with IBM Research Zurich, announced the development of an accelerator IC architecture that combines integrated photonic devices with phase-change materials (PCMs) to deliver matrix-vector (MV) multiplications – calculations essential to AI and machine-learning applications.

The team developed an ‘integrated phase-change photonic co-processor’ – or photonics processing unit (PPU) for short. This is a photonic counterpart to the tensor processing unit (the AI accelerator ASIC developed by Google for neural network machine learning), capable of carrying out multiple MV multiplications simultaneously and in parallel. It uses a chip-based frequency comb as a light source, along with wavelength division multiplexing (the multiplexing of multiple optical carrier signals onto a single optical fibre or waveguide by using different wavelengths of laser light).

The matrix elements were stored using phase-change materials – the same material used for re-writable DVD and Blu-ray discs – making it possible to preserve matrix states without the need for an energy supply.

“In terms of differentiation, I’d say that the PPU provides much wider bandwidth than ‘mainstream’ photonics approaches that rely on manipulation of optical phase and require coherent light sources,” explains Professor C David Wright of the University of Exeter. “This approach uses non-volatile PCMs that enable our device to act as both a memory and processor simultaneously. The matrix elements are stored directly in the device that carries out the matrix vector multiplications – no separate memory is needed.”
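
A simplified way to picture this wavelength-parallel, in-memory scheme: the matrix stays written into the PCM cells, while each wavelength from the frequency comb carries its own input vector, so several matrix-vector products emerge from a single pass of light. The NumPy sketch below is only a mathematical analogy for that parallelism, assuming idealised, lossless behaviour – it does not model the photonic hardware, and the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Matrix 'written' once into the phase-change cells: it stays put,
# needing no refresh energy, while light streams through it.
stored_matrix = rng.uniform(0, 1, size=(4, 4))          # 4x4 PCM transmission values

# Each wavelength of the frequency comb encodes its own input vector,
# so one pass of light performs several matrix-vector products at once.
num_wavelengths = 8
inputs = rng.uniform(0, 1, size=(4, num_wavelengths))   # one column per wavelength

outputs = stored_matrix @ inputs   # all eight products emerge in parallel
print(outputs.shape)               # (4, 8)
```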

In their experiments, the team used the PPU in a convolutional neural network for the recognition of handwritten numbers and for image filtering. The PPU project claims to be the first to apply frequency combs in the field of artificial neural networks.

The PPU could have a wide range of applications, says Wright’s fellow team member Harish Bhaskaran, professor of applied nanomaterials at the University of Oxford. “For instance, it could quickly and efficiently process huge data sets used for medical diagnoses, such as those from CT, MRI and PET scanners.”
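
Why does a matrix-vector multiplier help with convolutional networks? Because a convolution can be unrolled into exactly that operation: image patches are flattened into vectors and multiplied by a matrix whose rows are the filter kernels. The sketch below shows this standard rewrite in NumPy as an illustration of the mathematics such an accelerator exploits – it is not the research team’s code, and the sizes are illustrative.

```python
import numpy as np

def conv2d_as_matvec(image, kernels):
    """Express a 'valid' 2D convolution (cross-correlation, as used in CNNs)
    as a sequence of matrix-vector products. kernels: (num_filters, kh, kw)."""
    kh, kw = kernels.shape[1:]
    h, w = image.shape
    out_h, out_w = h - kh + 1, w - kw + 1

    # Flatten each filter into one row of the weight matrix
    # (the part an in-memory photonic accelerator would hold fixed).
    weight_matrix = kernels.reshape(len(kernels), -1)

    outputs = np.zeros((len(kernels), out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + kh, j:j + kw].reshape(-1)   # one input vector
            outputs[:, i, j] = weight_matrix @ patch        # one matrix-vector product
    return outputs

image = np.random.rand(28, 28)     # e.g. a handwritten digit
kernels = np.random.rand(4, 3, 3)  # four 3x3 filters
print(conv2d_as_matvec(image, kernels).shape)  # (4, 26, 26)
```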

Encoding data with optical signals

In 2019, researchers at MIT’s Research Laboratory of Electronics began developing optical accelerator chips for optical neural networks. Their prototypes performed far more efficiently than electronic processors but relied on bulky optical components that would have limited their use to relatively small networks.

The same team has since described a follow-up optical accelerator based on more compact optical components and optical signal-processing techniques. MIT claims it is scalable to neural networks far larger than equivalent electronic processors can handle.

Rather than performing matrix multiplication with Mach-Zehnder interferometers (which, the MIT researchers say, impose scaling limits), the processor uses a more compact, energy-efficient optoelectronic scheme that encodes data in optical signals but carries out the multiplications using ‘balanced homodyne detection’.
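
In balanced homodyne detection, two optical fields are combined on a 50:50 beamsplitter and the two photodetector currents are subtracted; the difference is proportional to the product of the two field amplitudes, and accumulating such products gives a dot product – the building block of matrix multiplication. The sketch below is a bare-bones numerical illustration of that principle under idealised assumptions (real-valued amplitudes, no noise), not a model of MIT’s processor.

```python
import numpy as np

def homodyne_product(signal_amp, lo_amp):
    """Balanced homodyne detection of two real field amplitudes.
    A 50:50 beamsplitter produces (s + lo)/sqrt(2) and (s - lo)/sqrt(2);
    photodetectors measure intensity, and their difference equals 2*s*lo."""
    plus = (signal_amp + lo_amp) / np.sqrt(2)
    minus = (signal_amp - lo_amp) / np.sqrt(2)
    return plus**2 - minus**2          # = 2 * signal_amp * lo_amp

# Encode one vector in the signal arm and the other in the local-oscillator arm;
# accumulating the detector outputs yields their dot product.
x = np.array([0.2, -0.5, 0.7, 0.1])
w = np.array([0.3, 0.4, -0.6, 0.8])

dot = 0.5 * sum(homodyne_product(xi, wi) for xi, wi in zip(x, w))
print(np.isclose(dot, x @ w))          # True
```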

Seeking single low-noise photons

Another research team at the University of Bristol’s Quantum Engineering Technology Labs has focused on the potential of optical processing for quantum computing applications. “A critical challenge that has limited the scaling of integrated quantum photonics has been the lack of on-processor sources able to generate high-quality single photons,” project lead Dr Stefano Paesani explains.

“Without low-noise photon sources, errors in a quantum computation accumulate rapidly when the circuit complexity [is increased] – resulting in the computation being no longer reliable. Moreover, optical losses in the sources limit the number of photons the quantum computer can produce and process.”

The Bristol team – in partnership with Italy’s University of Trento – has developed a technique to resolve this, and in doing so has produced what it claims is the first integrated photon source compatible with large-scale quantum photonics. The technique, called ‘inter-modal spontaneous four-wave mixing’, makes the multiple modes of light propagating through a silicon waveguide interfere non-linearly.

“This creates ideal conditions for generating single photons,” says Dr Paesani. “Furthermore, the device was fabricated via CMOS-compatible processes in a commercial silicon foundry, which means thousands of sources can easily be integrated on a single device.”

Nanocavities act as nano-switches

In Japan in 2019, a research team at NTT Basic Research Laboratories built a PAXEL (Photonic AccELerator) electro-optic (E-O) modulator that runs at 40Gbps while consuming just 42 attojoules per bit. The researchers then constructed a photoreceiver (an optical-to-electrical, or O-E, converter) based on the same technologies, which runs at 10Gbps on just 1.6 femtojoules per bit – around two orders of magnitude less power than other optical systems. The O-E converter requires no amplifier (which reduces power needs) and has a capacitance of just a few femtofarads.
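
Those per-bit energy figures translate into strikingly small operating powers, since power is simply bit rate multiplied by energy per bit. A quick back-of-the-envelope check using the numbers quoted above:

```python
# Power = bit rate x energy per bit, using the figures quoted above.
modulator_power = 40e9 * 42e-18     # 40 Gbps at 42 aJ/bit  -> ~1.7 microwatts
receiver_power  = 10e9 * 1.6e-15    # 10 Gbps at 1.6 fJ/bit -> 16 microwatts

print(f"E-O modulator: {modulator_power * 1e6:.2f} uW")
print(f"O-E receiver:  {receiver_power * 1e6:.2f} uW")
```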

Combining the E-O and O-E devices, the NTT team demonstrated what it claims is the world’s first ‘O-E-O transistor’. It can function as an all-optical switch, a wavelength converter and a repeater. This development was enabled by the invention of a new type of photonic crystal – a material whose periodic nanostructure controls how light propagates.

The crystal is a piece of silicon with three drilled nanocavities (holes), each 1.3µm long, arranged so that light passing through them interferes with itself and cancels out. If a line of nanocavities is blocked, the light instead follows that path and is funnelled into a light-absorbing material that converts it into a current. The same system also works in reverse.
