Edge computing with optics is published in Science!

“Learning on the edge”

Smart devices such as cell phones and sensors are low-power electronics operating at the edge of the internet. Although they are increasingly powerful, they cannot perform complex machine learning tasks locally. Instead, such devices offload these tasks to the cloud, where they are performed by factory-sized servers in data centers, creating issues related to large power consumption, latency, and data privacy. Sludds et al. introduce an edge-computing architecture called Netcast that makes use of the strengths of both photonics and electronics. In this method, smart transceivers periodically broadcast the weights of commonly used deep neural networks. The architecture allows low-power edge devices with minimal memory and processing to compute at teraflop rates otherwise reserved for high-power cloud computers. —ISO
This work is in collaboration with Prof. Englund’s group at MIT. For more details, please read it here.
“Advanced machine learning models are currently impossible to run on edge devices such as smart sensors and unmanned aerial vehicles owing to constraints on power, processing, and memory. We introduce an approach to machine learning inference based on delocalized analog processing across networks. In this approach, named Netcast, cloud-based “smart transceivers” stream weight data to edge devices, enabling ultraefficient photonic inference. We demonstrate image recognition at ultralow optical energy of 40 attojoules per multiply (<1 photon per multiply) at 98.8% (93%) classification accuracy. We reproduce this performance in a Boston-area field trial over 86 kilometers of deployed optical fiber, wavelength multiplexed over 3 terahertz of optical bandwidth. Netcast allows milliwatt-class edge devices with minimal memory and processing to compute at teraFLOPS rates reserved for high-power (>100 watts) cloud computers.”
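To make the weight-streaming idea concrete, here is a minimal numerical sketch (not the paper's photonic implementation, and all function names are hypothetical): a cloud-side "transceiver" broadcasts one weight row at a time, and the edge device keeps only its input activations and a running output, performing one multiply-accumulate per received row instead of storing the full model.

```python
import numpy as np

def stream_weights(weight_matrix):
    """Toy stand-in for the cloud-side smart transceiver: broadcast one
    weight row at a time, so the edge device never holds the full model."""
    for row in weight_matrix:
        yield row

def edge_infer(x, weight_stream, n_outputs):
    """Toy stand-in for the edge device: keep only the local input vector
    and a running output; each received row drives one multiply-accumulate
    and is then discarded. In the real system this dot product is done
    optically, at attojoule-scale energy per multiply."""
    y = np.zeros(n_outputs)
    for i, row in enumerate(weight_stream):
        y[i] = np.dot(row, x)  # one dot product per broadcast weight row
    return y

# Sanity check: streaming inference matches an ordinary matrix-vector product.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))   # hypothetical layer weights held in the cloud
x = rng.normal(size=8)        # the edge device's local input (e.g., sensor data)
y = edge_infer(x, stream_weights(W), n_outputs=4)
print(np.allclose(y, W @ x))
```

The point of the sketch is the memory asymmetry: the edge side stores only vectors of length 8 and 4 here, while the weight matrix lives entirely on the cloud side, mirroring how Netcast moves model storage and weight delivery off the milliwatt-class device.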