MIT Photonic AI Chip Is 100x Faster Than Digital Processors for 6G Signal Processing

Image Credit: Compare Fibre | Unsplash

Researchers at the Massachusetts Institute of Technology (MIT) have developed a photonic AI hardware accelerator designed to advance wireless signal processing for future 6G networks. This optical processor, which uses light to perform machine-learning tasks, offers significant speed and efficiency improvements over traditional electronic systems, potentially enabling real-time data analysis for edge devices like autonomous vehicles and medical monitors.

Breakthrough in AI Hardware

Detailed in a study published on June 11, 2025, in Science Advances, the photonic chip classifies wireless signals in approximately 120 nanoseconds. Unlike conventional electronic AI accelerators, it processes data using light, achieving speeds about 100 times faster than leading digital systems while maintaining roughly 95% accuracy on signal classification tasks when results are accumulated over multiple measurements. The research, led by Professor Dirk Englund of MIT’s Department of Electrical Engineering and Computer Science, introduces a novel architecture called the Multiplicative Analog Frequency Transform Optical Neural Network (MAFT-ONN).

This technology addresses the increasing need for efficient spectrum management as devices, from smartphones to IoT systems, compete for wireless bandwidth. By handling complex machine-learning operations directly on analog signals, the chip supports applications like cognitive radios, which adapt transmission formats dynamically to optimize data flow in varying network conditions.
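To make the cognitive-radio idea concrete, here is a minimal sense-classify-adapt loop in Python. It is an illustrative sketch only: the environment labels, thresholds, and modulation table are hypothetical stand-ins, and the classify call marks where a fast accelerator inference, like the chip's roughly 120-nanosecond classification, would slot in.

```python
# Hypothetical sketch of a cognitive-radio control loop. The classifier
# stands in for the photonic accelerator's fast inference step; all
# labels, thresholds, and format choices below are illustrative.
from enum import Enum

class Environment(Enum):
    CLEAN = 0        # low interference: use a dense constellation
    CONGESTED = 1    # many competing transmitters
    NOISY = 2        # high background noise

# Illustrative mapping from sensed conditions to transmission format.
MODULATION_BY_ENV = {
    Environment.CLEAN: "256-QAM",     # maximize throughput
    Environment.CONGESTED: "16-QAM",  # back off to share spectrum
    Environment.NOISY: "QPSK",        # prioritize robustness
}

def adapt_transmission(classify, samples):
    """Run one sense-classify-adapt cycle. `classify` is any function
    returning an Environment label for a window of received samples."""
    env = classify(samples)
    return MODULATION_BY_ENV[env]

# Toy classifier: pretend high sample variance means a noisy channel.
def toy_classifier(samples):
    variance = sum(s * s for s in samples) / len(samples)
    return Environment.NOISY if variance > 1.0 else Environment.CLEAN

print(adapt_transmission(toy_classifier, [0.1, -0.2, 0.05, 0.3]))  # 256-QAM
```

Because the classification step dominates this loop's latency, shaving it to nanoseconds is what makes per-frame adaptation plausible.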

Why Photonics for 6G?

The anticipated rollout of 6G networks around 2030 will require systems capable of managing surging data volumes from cloud computing, remote work, and technologies like augmented reality. Traditional electronic processors face challenges with latency and energy consumption in edge computing, where data is processed locally. Photonics, which uses light instead of electricity, offers faster processing with lower energy demands, since photons travel faster and generate less heat than electrons.

The MAFT-ONN processes signals in the frequency domain before digitization, integrating linear and nonlinear operations on a single chip. This eliminates energy-intensive analog-to-digital conversions, overcoming a key limitation of prior optical neural networks that relied on external electronic components.
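As a rough digital analogy of that flow, the numpy sketch below shows the shape of the computation: a linear transform into the frequency domain, an elementwise multiplicative layer, and a square-law nonlinearity. This is not the paper's implementation, and the weights and signal are toy values; in the MAFT-ONN the analogous steps happen in analog optics before any digitization.

```python
# Minimal digital analogy of frequency-domain neural inference.
# All names and values are illustrative, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

N_SAMPLES = 256   # samples in one received signal frame
N_CLASSES = 4     # e.g., four modulation formats to distinguish

def classify_frame(signal, weights):
    """Classify one frame by multiplying its spectrum with learned
    per-class weight masks, then applying a square-law nonlinearity
    (standing in for an analog photodetection response) and pooling."""
    spectrum = np.fft.rfft(signal)       # linear transform to frequency domain
    scores = []
    for w in weights:                    # one weight mask per class
        mixed = spectrum * w             # multiplicative "layer" (elementwise)
        activated = np.abs(mixed) ** 2   # square-law nonlinearity
        scores.append(activated.sum())   # pooled class score
    return int(np.argmax(scores))

# Toy "learned" weights; in practice these would come from training.
n_bins = N_SAMPLES // 2 + 1
weights = rng.normal(size=(N_CLASSES, n_bins)) \
        + 1j * rng.normal(size=(N_CLASSES, n_bins))

frame = np.sin(2 * np.pi * 0.1 * np.arange(N_SAMPLES))  # toy received waveform
print("predicted class:", classify_frame(frame, weights))
```

In the optical version the same stages execute as light propagates through the chip, which is what enables the roughly 120-nanosecond end-to-end latency reported above.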

Development and Challenges

The research team, which included lead author Ronald Davis III, a recent MIT PhD graduate; Zaijun Chen, now an assistant professor at the University of Southern California; and Ryan Hamerly, a visiting scientist at MIT and senior scientist at NTT Research, adapted machine-learning algorithms to optical hardware. A technique called photoelectric multiplication made the design scalable, supporting up to 10,000 neurons per device without additional overhead. However, optical technologies are less mature than electronic ones, which complicates integration with existing networks. Commercial foundry manufacturing holds promise for scalability but may introduce complexities at larger volumes. The chip’s reported 95% accuracy for signal classification may vary slightly across hardware, and achieving consistent performance across varied applications remains a research focus.
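To see why photoelectric multiplication can scale gracefully, consider balanced detection: when a signal-carrying field and a weight-carrying field interfere on a matched photodetector pair, the difference current isolates a term proportional to their product, so integrating that current accumulates a dot product with no per-neuron electronic cost. The toy numeric model below (illustrative values only, not the authors' code) checks that identity for 10,000 encoded amplitudes.

```python
# Toy model of photoelectric multiplication via balanced detection:
# the difference of the two detector powers isolates the cross term,
# so integrating the photocurrent computes a dot product in analog.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000                 # one encoded amplitude per "neuron"

x = rng.normal(size=n)     # signal amplitudes on one optical field
w = rng.normal(size=n)     # weight amplitudes on a second field

# Balanced detection measures |x + w|^2 and |x - w|^2; their difference
# equals 4*x*w, isolating the product term at each point in time.
photocurrent = (np.abs(x + w) ** 2 - np.abs(x - w) ** 2) / 4.0

# Integrating the photocurrent sums the products, i.e., computes x . w.
analog_dot = photocurrent.sum()
print(np.allclose(analog_dot, x @ w))   # True
```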

Potential Impact and Applications

The chip’s speed and efficiency could enable edge devices to perform real-time AI tasks, such as autonomous vehicles reacting instantly to environmental changes or medical devices continuously monitoring health metrics. In telecommunications, it could enhance 6G network reliability by reducing latency. Its energy efficiency may also lower the environmental impact of data centers, which face growing power demands from AI applications. Beyond wireless communication, potential uses include lidar for navigation, astronomical data processing, and real-time medical diagnostics.

Pros and Cons

The chip offers superior speed, reduced power consumption, and potential cost savings through scalable manufacturing. However, integration with existing infrastructure poses technical challenges, and its accuracy, while high, may not meet the precision needs of all applications. Industry adoption could be slowed by the unfamiliarity of photonic systems among developers and manufacturers.

Future Trends

The MIT team plans to explore multiplexing to increase computational capacity and apply the technology to advanced AI models like transformers. Industry analysts predict photonic computing could grow into a multi-billion-dollar market by the 2030s, driven by 6G and AI demands, with initial products possibly emerging by 2027. Success will depend on overcoming manufacturing hurdles and developing algorithms optimized for optical hardware.

Building on decades of photonic research, this chip addresses limitations in electronic systems as Moore’s Law slows. Supported by the U.S. Army Research Laboratory, the U.S. Air Force, MIT Lincoln Laboratory, Nippon Telegraph and Telephone, and the National Science Foundation, the project reflects collaboration with NTT Research and USC. It aligns with industry trends toward hybrid computing, combining optical and electronic systems for high-performance applications.
