Non-Von Neumann Computing
Application-specific non-Von Neumann computing implemented in conventional electronic hardware can yield orders-of-magnitude improvements in energy efficiency for data-centric problems, which are common in Machine Learning and Artificial Intelligence. This alone justifies studying hardware customization as an approach to the future of computing. Heterogeneous hardware that exploits new degrees of freedom in beyond-CMOS device design, such as multi-level states, analog behavior, nonlinear dynamics, spin-wave logic, and phase-change phenomena, may lead to even higher efficiency and levels of intelligence.
3D Heterogeneous Integration
To enable a paradigm shift in computing hardware technology, future development of hardware foundations is likely to feature two critical approaches: i) 3D integration and ii) heterogeneous integration. First, instead of today's chips, which build all active devices on the wafer surface and have only passive interconnects in the upper layers, future computing chips will extend active-device integration into the third dimension for greater component density per chip area and higher interconnection bandwidth. 3D integration is a natural way to increase device density; more importantly, it is the only way to enhance device connectivity and reduce communication loss, both of which are essential for complex neural-network hardware. Second, heterogeneous integration will enable multi-functional chips that can replace the more cumbersome multi-chipsets used today.
Secure and Private Computing
Secure and guaranteed-private computing is of prime importance to a world dependent on trustworthy, secure, privacy-protecting distributed systems for commerce, defense, social networking, and entertainment. It is critical to adopt a holistic, system-level perspective that integrates human context, application, software, and hardware into a single, collaborative, unified research framework, and that treats privacy, security, and trustworthiness as first principles within the larger computing domain.
Graph Analytics and Machine Learning
Analyzing big data in graph form, with data points as nodes and relationships as edges, has become a promising approach for determining both pairwise relationships between objects and the structural characteristics of the graph as a whole. Graph analytics is also closely related to machine learning; for instance, it directly offers a unique set of unsupervised machine learning methods. Accelerating graph analytics and machine learning at both the algorithm and hardware levels is an important research task for the future of computing.
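The node-and-edge abstraction above can be sketched in a few lines of Python; the edge list and node names here are hypothetical toy data. Connected-component discovery, shown below, is one simple example of the unsupervised structure extraction that graph analytics offers:

```python
from collections import defaultdict, deque

# Hypothetical toy dataset: data points as nodes, relationships as edges.
edges = [("A", "B"), ("B", "C"), ("D", "E")]

# Build an adjacency list capturing the pairwise relationships.
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def connected_components(adj):
    """Unsupervised grouping: find clusters of mutually reachable nodes
    via breadth-first search."""
    seen, components = set(), []
    for start in adj:
        if start in seen:
            continue
        queue, comp = deque([start]), set()
        seen.add(start)
        while queue:
            node = queue.popleft()
            comp.add(node)
            for nbr in adj[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        components.append(comp)
    return components

# Two clusters emerge with no labels supplied: {A, B, C} and {D, E}.
components = connected_components(adj)
```

No labels or training data are needed; the grouping falls out of the graph structure itself, which is what makes such methods unsupervised.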
Approximate Computing
Approximate computing improves performance and efficiency by allowing some errors and inaccuracies in machine learning applications. Unlike the brain, conventional computing does not tolerate errors because of its sequential execution of programs. Statistical machine learning applications, however, do tolerate certain levels of error and inaccuracy, which can be traded for better efficiency in the future of computing.
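One well-known approximate-computing technique is loop perforation: deliberately skipping a fraction of loop iterations to trade a small, bounded error for proportionally less work. A minimal sketch, using hypothetical random data to estimate a mean:

```python
import random

def mean_exact(xs):
    """Exact mean: every element is processed."""
    return sum(xs) / len(xs)

def mean_perforated(xs, skip=2):
    """Loop perforation: process only every `skip`-th element,
    trading accuracy for roughly `skip`-fold less summation work."""
    sample = xs[::skip]
    return sum(sample) / len(sample)

random.seed(0)
data = [random.random() for _ in range(100_000)]

exact = mean_exact(data)
approx = mean_perforated(data, skip=4)  # ~4x fewer additions
error = abs(exact - approx)             # small statistical error
```

For a statistical quantity like a mean, the perforated estimate converges to the exact value, so the accuracy loss is modest while the work saved is large; this is exactly the error-for-efficiency tradeoff the paragraph describes.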