
MIT’s Optical AI Chip Could Revolutionize 6G at Light Speed

This image is an artist’s rendition of a new optical processor developed by MIT researchers that performs machine-learning computations at the speed of light, classifying wireless signals in nanoseconds. Credit: Sampson Wilcox, Research Laboratory of Electronics.

The chip performs deep learning at the speed of light, enabling edge devices to carry out real-time data analysis.

As the number of connected devices increases, so does the need for more bandwidth to support activities such as teleworking and cloud computing. Managing the limited wireless spectrum that is shared by all users becomes increasingly difficult.

To combat this, engineers are turning to artificial intelligence to manage the wireless spectrum dynamically, with the aim of reducing latency and improving performance. But most AI techniques for classifying and processing wireless signals are power-hungry and cannot operate in real time. Researchers at MIT have now created a new AI hardware accelerator designed specifically for wireless signal processing. Their optical processor performs machine-learning computations at the speed of light, classifying wireless signals in a matter of nanoseconds.

This photonic chip is about 100 times faster than digital alternatives and achieves 95 percent accuracy in signal classification. It is also scalable and flexible enough to be used for a variety of high-performance computing tasks, while being smaller, lighter, cheaper, and more efficient than traditional digital AI hardware. The technology could be especially useful in future 6G wireless applications, such as cognitive radios that increase data rates by adjusting wireless formats to the real-time environment.

The hardware accelerator could speed up a wide range of applications, including signal processing, by allowing edge devices to perform deep-learning computations in real time. That could mean autonomous vehicles responding instantly to changes in their environment, or smart pacemakers continuously monitoring a patient’s heart.

“There are many applications that would be enabled by edge devices capable of analyzing wireless signals. What we have presented in our paper could open up a wide range of possibilities for real-time, reliable AI inference. This work could have a significant impact,” says Dirk Englund, a professor in the MIT Department of Electrical Engineering and Computer Science and principal investigator of the Quantum Photonics and Artificial Intelligence Group in the Research Laboratory of Electronics (RLE). He is joined on the paper by Ronald Davis III PhD ’24; Zaijun Chen, a former MIT postdoc who is now a professor at the University of Southern California; and Ryan Hamerly, a visiting scientist at RLE and senior scientist at NTT Research. The research was published in Science Advances.

Fast processing

Current digital AI accelerators for wireless signal processing work by converting the signal into an image and then passing it through a deep-learning model for classification. This method is accurate, but it requires a lot of computing power, making it unsuitable for applications that need fast, real-time responses.
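
To make that contrast concrete, here is a minimal Python sketch of the conventional digital pipeline described above: the captured waveform is turned into a spectrogram “image” that a deep-learning model would then classify. The toy signal, window sizes, and the classifier step are assumptions for illustration, not the researchers’ actual setup.

```python
# Minimal sketch of the conventional digital pipeline: digitize an RF signal,
# turn it into a spectrogram "image", then hand it to a deep-learning classifier.
# All parameters here are illustrative assumptions.
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Short-time FFT magnitude: the 'image' a digital accelerator would classify."""
    frames = [signal[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(signal) - n_fft, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))

# Toy QPSK-like burst standing in for a captured wireless signal.
rng = np.random.default_rng(0)
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=1024)
waveform = np.real(np.repeat(symbols, 8) * np.exp(2j * np.pi * 0.1 * np.arange(8192)))

image = spectrogram(waveform)   # roughly (62, 129): time frames x frequency bins
# 'image' would now be fed to a deep-learning model; that inference step is the
# slow, power-hungry part the optical processor is designed to replace.
```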

Optical systems can accelerate deep neural networks by encoding and processing data with light, which is also less energy-intensive than digital computing. But researchers have struggled to maximize the performance and scalability of general-purpose optical neural networks for signal processing. The MIT team tackled this problem by developing an optical neural network architecture designed specifically for signal processing, which they call a multiplicative analog frequency transform optical neural network (MAFT-ONN).

By encoding the signal data and performing machine-learning operations in the frequency domain, the MAFT-ONN tackles the problem of scaling.
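
As a rough numerical analogy for that frequency-domain encoding, the sketch below assigns each neuron activation to the amplitude of its own frequency tone, so a single analog waveform carries a whole layer of values at once. The sample rate, tone spacing, and neuron count are illustrative assumptions, not the device’s actual parameters.

```python
# Hedged analogy: encode several "neuron" values as amplitudes of separate
# frequency tones in one waveform, then read them back with a Fourier transform.
import numpy as np

fs = 1e6                                   # assumed sample rate of the analog waveform
N = 4096                                   # number of samples
t = np.arange(N) / fs
activations = np.array([0.2, 0.9, 0.5, 0.7])   # four example neuron values
bins = np.array([40, 80, 120, 160])            # one FFT bin per neuron (assumed)
tones = bins * fs / N                          # tone frequencies that land exactly on bins

waveform = sum(a * np.cos(2 * np.pi * f * t) for a, f in zip(activations, tones))

# Each activation sits in its own frequency bin, which is what lets many
# neurons share a single device in the frequency domain.
spectrum = np.abs(np.fft.rfft(waveform)) / (N / 2)
print(spectrum[bins])                          # approximately [0.2, 0.9, 0.5, 0.7]
```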

The researchers designed their optical network to perform both linear and nonlinear operations in parallel; deep learning requires both types of operations.

This design allows them to use just one MAFT-ONN device per layer for the entire optical neural network, whereas other methods require a separate device for each individual computing unit, or “neuron.”

Davis explains that they can fit 10,000 neurons onto a single device and perform all the necessary multiplications at once.

They achieve this using a technique known as photoelectric multiplication, which increases efficiency dramatically. It also lets them build an optical neural network that can be scaled up by adding more layers without extra overhead.
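
In purely electronic terms, the idea amounts to computing one large matrix-vector product per layer instead of wiring up a separate device per neuron. The sketch below illustrates this numerically with assumed sizes and a stand-in ReLU nonlinearity; in the real chip, the analogous products come from photoelectric mixing of optical signals, not a CPU.

```python
# Hedged numerical analogy for "10,000 neurons per device, multiplied all at once":
# one matrix-vector product per layer rather than one device per neuron.
# Sizes and the ReLU nonlinearity are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_inputs, n_neurons = 256, 10_000

x = rng.standard_normal(n_inputs)                 # frequency-encoded input activations
W = rng.standard_normal((n_neurons, n_inputs))    # weights one layer would apply

# All 10,000 weighted sums in a single operation; in MAFT-ONN the analogous
# multiplications happen optically via photoelectric multiplication.
pre_activation = W @ x
post_activation = np.maximum(pre_activation, 0)   # stand-in nonlinearity (assumed)
print(post_activation.shape)                      # (10000,)
```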

Results within nanoseconds

The MAFT-ONN system takes a wireless signal as input, processes it, and passes the result along to the edge device for later operations. For example, by classifying a signal’s modulation, the MAFT-ONN would allow a device to automatically infer what type of signal it is receiving and extract the data it carries.
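
As a hedged illustration of that downstream step, the sketch below shows how an edge device might use a predicted modulation label to pick a matching demodulator and recover the payload bits. The labels, demodulators, and dispatch table are invented for illustration and are not from the paper.

```python
# Hedged sketch: the optical front end supplies a modulation label in nanoseconds;
# the edge device then dispatches to the matching demodulator to extract bits.
import numpy as np

def demod_bpsk(iq):
    return (iq.real > 0).astype(int)

def demod_qpsk(iq):
    return np.stack([(iq.real > 0), (iq.imag > 0)], 1).astype(int).ravel()

DEMODULATORS = {"BPSK": demod_bpsk, "QPSK": demod_qpsk}   # illustrative dispatch table

def extract_bits(iq_samples, predicted_modulation):
    """predicted_modulation stands in for the classifier's output."""
    return DEMODULATORS[predicted_modulation](iq_samples)

iq = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j])          # toy QPSK symbols
print(extract_bits(iq, "QPSK"))                             # recovered bits per symbol
```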

The researchers had to overcome a major challenge when designing the MAFT-ONN: how to map machine-learning computations onto the optical hardware.

“We couldn’t simply use a machine-learning framework off the shelf,” Davis says. They had to customize the framework to fit the hardware and exploit the physics to make it perform the computations.

When the team tested their architecture on signal classification in simulations, it achieved 85 percent accuracy in a single shot. This accuracy quickly rises to more than 99 percent when multiple measurements are used, and the MAFT-ONN required only about 120 nanoseconds for the entire process.

The longer you measure, the higher the accuracy. Because the MAFT-ONN computes inferences in nanoseconds, Davis says, there is no need to sacrifice speed to gain accuracy.
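
One way to see why repeated measurements help: if each independent single-shot inference is correct 85 percent of the time, a simple majority vote over a handful of shots is correct far more often. The short calculation below illustrates that statistics; it is a probability sketch under an independence assumption, not the team’s actual averaging scheme.

```python
# Hedged probability sketch: accuracy of a majority vote over independent shots,
# each individually correct with probability p_single.
from math import comb

def majority_vote_accuracy(p_single, n_shots):
    """Probability that more than half of n_shots independent shots are correct."""
    k_min = n_shots // 2 + 1
    return sum(comb(n_shots, k) * p_single**k * (1 - p_single)**(n_shots - k)
               for k in range(k_min, n_shots + 1))

for n in (1, 3, 5, 9):
    print(n, round(majority_vote_accuracy(0.85, n), 4))
# Accuracy climbs from 0.85 at one shot toward more than 0.99 within a few shots,
# and at nanoseconds per inference the extra shots cost almost no time.
```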

While the latest digital radio-frequency devices can perform machine-learning inference in microseconds, optics can do it in nanoseconds or even picoseconds.

In the future, the researchers want to use multiplexing schemes to perform more computations and scale up the MAFT-ONN. They also plan to extend their work to more complex deep-learning architectures that could run transformer models or LLMs.

Reference: “RF-photonic Deep Learning Processor with Shannon-limited Data Movement” by Ronald Davis III, Zaijun Chen, Ryan Hamerly and Dirk Englund, 11 June 2025, Science Advances.
DOI: 10.1126/sciadv.adt3558

Funding: This work was funded, in part, by the U.S. Army Research Laboratory, the U.S. Air Force, MIT Lincoln Laboratory, Nippon Telegraph and Telephone, and the National Science Foundation.
