Optical Computing — Theses

Evolving beliefs with evidence. Confidence changes over time as new research arrives.

Thesis 1: Photonic interconnects will become the standard for AI data centers by 2028, replacing electrical interconnects for GPU-to-GPU communication

The AI training bandwidth wall is the most urgent infrastructure problem in computing. Lightmatter's 1.6 Tbps/fiber record and Ayar Labs' first UCIe-compliant optical chiplet deliver the two things adoption needs: raw performance and industry standardization. With Nvidia making strategic investments in both companies, the adoption question is settled; the constraint is manufacturing readiness, not market demand.

Confidence: 7/10

Supporting evidence:

  • Lightmatter Passage L200: 1.6 Tbps/fiber, 200+ Tbps/package. Evidence: strong (Lightmatter)
  • Ayar Labs: first UCIe-compliant optical chiplet; SuperNova light source at 8 Tbps with 16-wavelength WDM. Evidence: strong (Ayar Labs)
  • TSMC COUPE partnership makes optical I/O accessible to any TSMC customer. Evidence: strong (Ayar Labs)
  • $500M Nvidia-backed raise signals strong market conviction. Evidence: strong (Ayar Labs)
  • Deployment timeline crystallizing: early CPO adopters in 2026, broader 800G/1.6T rollouts in 2027, standard by 2028. Evidence: moderate (Photonics Shift)

Challenging evidence:

  • Cost premium vs. pluggable optics at scale is still significant
  • Integration yield (combining optical and electronic fabrication) is below semiconductor norms
  • InP laser supply chain constraints could bottleneck scaling
  • "Standard" by 2028 means >50% of new AI cluster deployments — ambitious given lead times
  • Standardization across vendors (CW-WDM, CPO interfaces) is incomplete
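The per-fiber and per-package bandwidth figures above follow from simple WDM arithmetic: aggregate bandwidth is wavelengths × per-wavelength rate × fiber count. A minimal sketch; the 100 Gbps per-wavelength rate and the 200 Tbps package target used below are illustrative assumptions, not vendor-published lane specs:

```python
# Back-of-envelope WDM bandwidth arithmetic. Per-wavelength rates here
# are assumed round numbers for illustration, not vendor lane specs.

def aggregate_tbps(wavelengths: int, gbps_per_lambda: float, fibers: int = 1) -> float:
    """Aggregate bandwidth in Tbps for a WDM link."""
    return wavelengths * gbps_per_lambda * fibers / 1000

# 16 wavelengths at an assumed 100 Gbps each -> 1.6 Tbps per fiber,
# consistent with the Passage L200 per-fiber figure.
per_fiber = aggregate_tbps(wavelengths=16, gbps_per_lambda=100)
print(per_fiber)  # 1.6

# Reaching a 200 Tbps package at that per-fiber rate implies on the
# order of a hundred-plus fibers per package.
fibers_needed = 200 / per_fiber
print(fibers_needed)  # 125.0
```

The point of the sketch: per-package totals come from fiber count as much as from per-fiber rate, which is why packaging and fiber-attach yield show up in the challenging evidence.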

Evolution:

  • Apr 5, 2026 — Initial thesis at 7/10. The Nvidia investment in both Lightmatter and Ayar Labs is the strongest market signal. UCIe compliance + TSMC partnership means the ecosystem is forming. Manufacturing readiness, not physics, is the constraint — and manufacturing problems are solvable with money and time.

Depends on: photonic-interconnects, photonic-neural-networks

Would change if: InP supply chain proves more constrained than expected, or if electrical interconnect advances (e.g., next-gen NVLink) close the bandwidth gap enough to delay optical adoption.


Thesis 2: Photonic compute (not just interconnect) will achieve commercial viability for AI inference by 2030

The projected 880 TOPS/mm² compute density and 5.1 TOPS/W energy efficiency represent a 1-3 order-of-magnitude improvement over digital accelerators. Phase-change materials enabling non-volatile photonic weight storage make in-memory computing viable. The gap between lab and production is large, but the theoretical ceiling justifies the bet.
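The efficiency claim is easy to sanity-check: TOPS/W is operations per joule, so 5.1 TOPS/W works out to roughly 196 fJ per operation. A quick conversion; the 0.5 TOPS/W digital baseline below is an assumed round number for comparison, not a measured figure for any specific accelerator:

```python
# Convert TOPS/W to energy per operation; 1 TOPS/W = 1e12 ops/J.

def femtojoules_per_op(tops_per_watt: float) -> float:
    """Energy per operation in femtojoules for a given TOPS/W figure."""
    return 1e15 / (tops_per_watt * 1e12)

photonic = femtojoules_per_op(5.1)  # ~196 fJ/op (projected figure from the thesis)
digital = femtojoules_per_op(0.5)   # 2000 fJ/op (assumed baseline for comparison)
print(round(photonic, 1), round(digital, 1))

# About one order of magnitude against this baseline; where the claim
# lands in the 1-3 range depends on which digital accelerator you compare.
print(round(digital / photonic, 1))
```

This is also why the challenging evidence matters: analog precision limits and O-E-O conversion overhead eat directly into that per-operation energy budget.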

Confidence: 4/10

Supporting evidence: none yet (all results are projections)

Challenging evidence:

  • All projections, no production results — lab-to-fab gap is historically large for photonics
  • Bit precision with analog photonic weights is fundamentally limited
  • PCM write endurance (cycle count before degradation) is unknown at scale
  • O-E-O conversion for nonlinear activation functions remains a bottleneck
  • All-optical nonlinearities using saturable absorbers are immature
  • Fabrication yield at wafer scale is unproven
  • Hybrid optical-electronic may remain optimal, limiting the "photonic compute" narrative

Evolution:

  • Apr 5, 2026 — Initial thesis at 4/10. The theoretical performance is extraordinary but the open problems list is long and fundamental. This is a 2030 thesis because the 2026-2028 window is clearly too early. Commercial viability for inference (not training) is the more achievable target because inference workloads are more predictable.

Depends on: photonic-tensor-cores, photonic-neural-networks

Would change if: A photonic compute chip demonstrates production-grade inference performance (not just projections), or if the nonlinearity bottleneck is solved with a practical all-optical approach.
