Lightmatter
company, photonic-computing, lightmatter, silicon-photonics, ai-accelerator, transformer-inference, analog-computing
Type: Company (Photonic AI accelerator)
Lightmatter built the first photonic processor to run production-grade neural networks without modification (BERT, ResNet classification and segmentation, Atari deep RL) at accuracy approaching 32-bit floating-point digital systems. The architecture is hybrid photonic-electronic: photonic tensor cores perform the matrix multiplications while electronics handle control and memory. The processor comprises six 3D-integrated chips with ~1 million photonic components, delivers 65.5 TOPS at 78 W electrical plus 1.6 W optical power, and is compatible with PyTorch and TensorFlow.
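The hybrid split described above (analog photonic matmul, digital control and memory) can be emulated numerically. The sketch below is a toy model, not Lightmatter's actual design: operands are quantized in blocks that share one scale, as a rough stand-in for the ABFP number format, and additive Gaussian noise stands in for analog read-out error. Block size, bit width, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def abfp_quantize(x, block=64, bits=8):
    """Block quantization: values in each block share one scale
    (a crude stand-in for adaptive block floating-point)."""
    n = x.shape[-1]
    pad = (-n) % block  # pad last axis to a multiple of the block size
    xp = np.pad(x, [(0, 0)] * (x.ndim - 1) + [(0, pad)])
    xb = xp.reshape(xp.shape[:-1] + (-1, block))
    scale = np.abs(xb).max(axis=-1, keepdims=True)
    scale = np.where(scale == 0, 1.0, scale)
    levels = 2 ** (bits - 1) - 1
    deq = np.round(xb / scale * levels) * scale / levels
    return deq.reshape(xp.shape)[..., :n]

def photonic_matmul(a, b, noise=1e-3):
    """Emulated analog matmul: quantize both operands along the
    contraction axis, then inject Gaussian read-out noise."""
    y = abfp_quantize(a) @ abfp_quantize(b.T).T
    return y + noise * np.abs(y).max() * rng.standard_normal(y.shape)

a = rng.standard_normal((4, 256))
b = rng.standard_normal((256, 8))
exact = a @ b
approx = photonic_matmul(a, b)
# relative error stays small (order of a percent) at these settings
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
```

The point of the toy model is the claim in the summary: with per-block scaling, low-precision analog accumulation can track the FP32 result closely enough that an unmodified network's accuracy barely moves.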
Key Contributions
- Transformer inference without porting — unmodified BERT at near-FP32 accuracy. (Lightmatter)
- Production workloads demonstrated — CNN classification, CNN segmentation, Atari deep RL. (Lightmatter)
- 65.5 TOPS throughput in ABFP16 (adaptive block floating-point) at 78 W electrical + 1.6 W optical. (Lightmatter)
- 3D-integrated six-chip design with ~1M photonic components. (Lightmatter)
- PyTorch + TensorFlow compatible — existing ML code runs without porting. (Lightmatter)
Caveats
- Claims are from the Lightmatter blog post, not peer-reviewed; specific accuracy numbers per workload not disclosed.
- Chip is inference-only — no on-chip training.
- ~1M photonic components is an engineering feat; yield and reliability data not disclosed.
Mentioned In
- Photonic Neural Network — commercial anchor
Related Entities
- Nokia Bell Labs — Ashtiani on-chip training is the training counterpart