Integrated Neuromorphic Photonic Computing for AI Acceleration
Paper · Wang et al. · Advanced Materials (Wiley) · June 1, 2025
Abstract
Reviews emerging devices, network architectures, and future paradigms for neuromorphic photonic computing as AI accelerators. Projects that in-memory-computing photonic tensor cores can reach a compute density of 880 TOPS/mm² and an energy efficiency of 5.1 TOPS/W.
Key Contributions
- Photonic tensor cores: 880 TOPS/mm² density, 5.1 TOPS/W efficiency (predicted)
- 1-3 orders of magnitude improvement over digital electronic accelerators in both density and efficiency
- MIT photonic chip: performs all key deep neural network computations optically on-chip, with <0.5 ns latency and >92% accuracy
- Monolithic coherent optical neural networks built in a commercial silicon photonics process: 92.5% vowel-classification accuracy, nanosecond-scale latency, femtojoule-per-operation efficiency
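To make the tensor-core idea concrete, here is a minimal sketch of how an analog photonic matrix-vector multiply can be modeled: weights become optical transmission values, inputs become light intensities, and each output is the photodetected sum of its weighted channels. This is an illustrative toy model, not the paper's device; the noise level and clipping behavior are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def photonic_matvec(weights, x, noise_std=0.01):
    """Idealized analog matrix-vector product with Gaussian readout noise."""
    w = np.clip(weights, 0.0, 1.0)       # transmission cannot exceed unity
    y = w @ x                            # summation happens at the photodetectors
    return y + rng.normal(0.0, noise_std, size=y.shape)

W = rng.uniform(0.0, 1.0, size=(4, 8))   # 4 outputs, 8 optical channels
x = rng.uniform(0.0, 1.0, size=8)        # non-negative input intensities

exact = W @ x
analog = photonic_matvec(W, x)
print(np.max(np.abs(analog - exact)))    # small, noise-limited error
```

The point of the sketch is that the multiply-accumulate happens "for free" in the optical domain; the accuracy floor is set by analog noise rather than by digital arithmetic.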
Results
The photonic approach shows dramatic advantages for matrix-multiply-heavy workloads, which form the backbone of neural networks. In-situ training enables networks to be trained directly on the photonic hardware. Key milestone: a fully integrated photonic processor completing a classification task in less than half a nanosecond.
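One way to read the in-situ training claim: exact backpropagation gradients are generally unavailable on analog hardware, so a common option is to estimate gradients from the hardware's own outputs via simultaneous perturbation (SPSA-style). The sketch below simulates the hardware forward pass in software; the learning rate, perturbation size, and toy regression target are all assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

def hardware_forward(W, x):
    # Stand-in for the physical photonic forward pass (a black box to the trainer).
    return W @ x

def spsa_step(W, x, target, lr=0.02, eps=1e-3):
    """One gradient-free update using simultaneous random perturbations."""
    delta = rng.choice([-1.0, 1.0], size=W.shape)   # random +/-1 perturbation
    loss_plus = np.sum((hardware_forward(W + eps * delta, x) - target) ** 2)
    loss_minus = np.sum((hardware_forward(W - eps * delta, x) - target) ** 2)
    grad_est = (loss_plus - loss_minus) / (2 * eps) * delta
    return W - lr * grad_est

W = rng.uniform(size=(2, 4))
x = rng.uniform(size=4)
target = np.array([0.3, 0.7])

initial_loss = np.sum((hardware_forward(W, x) - target) ** 2)
for _ in range(1000):
    W = spsa_step(W, x, target)
final_loss = np.sum((hardware_forward(W, x) - target) ** 2)
print(initial_loss, final_loss)   # final loss far below the initial loss
```

Because each update only needs two forward evaluations of the physical system, this style of training tolerates device variability that an offline-trained model would not.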
Limitations
- Predicted performance numbers not yet achieved at scale
- On-chip nonlinearity remains a challenge for deep architectures
- Manufacturing yield and reproducibility concerns
- Integration with existing electronic infrastructure requires co-design
Source: Neuromorphic Photonic Computing — Advanced Materials
Tags
neuromorphic-computing · photonic-tensor-core · ai-acceleration