
System Performance Prediction

Writing high-performance code requires significant expertise in the programming language, compiler optimizations, and the underlying hardware. This often hurts productivity and portability, and is a barrier for non-programmer domain specialists such as physicists. More desirable is a high-level language in which the domain specialist simply specifies the workload in terms of high-level operations, and the compiler identifies the best implementation, fully utilizing the heterogeneous platform. To build a compiler that supports productivity, portability, and performance simultaneously, it is crucial to predict, across hardware, the performance of the available implementations (variants) of the dominant operations (kernels) in a workload, in order to decide (a) which variant to choose for each kernel, and (b) on which hardware resource that variant should run. To enable this performance prediction, we propose lightweight augmented neural networks for arbitrary kernel-variant-hardware combinations. A key innovation is using the mathematical complexity of each kernel as a feature to achieve higher accuracy. The models are compact, which reduces training time and allows fast inference at compile time and run time. Using models with fewer than 75 parameters and only 250 training instances, we obtain accurate performance predictions, significantly outperforming traditional feed-forward neural networks on 48 kernel-variant-hardware combinations. We further demonstrate that our variant-selection approach can be applied to Halide implementations, yielding up to 1.7x speedup over the Halide auto-scheduler.
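
To make the augmented-network idea concrete, below is a minimal PyTorch sketch, not the published code: the kernel's asymptotic complexity, evaluated at the runtime input sizes, is concatenated with the raw size features and fed to a very small feed-forward network that predicts runtime. The class name AugmentedPerfNet and all layer sizes are hypothetical illustrations.

# Minimal illustrative sketch (hypothetical, not the authors' code): a tiny
# network predicting a kernel's runtime from its input sizes, augmented with
# the kernel's mathematical complexity evaluated at those sizes.
import torch
import torch.nn as nn

class AugmentedPerfNet(nn.Module):
    """Toy performance predictor: raw size features plus a complexity feature."""
    def __init__(self, n_size_features: int, hidden: int = 8):
        super().__init__()
        # Deliberately small: the models described above use < 75 parameters.
        self.net = nn.Sequential(
            nn.Linear(n_size_features + 1, hidden),  # +1 for the complexity feature
            nn.ReLU(),
            nn.Linear(hidden, 1),                    # predicted runtime
        )

    def forward(self, sizes: torch.Tensor, complexity: torch.Tensor) -> torch.Tensor:
        x = torch.cat([sizes, complexity.unsqueeze(-1)], dim=-1)
        return self.net(x)

# Example: multiplying two n x n matrices costs O(n^3) operations, so n^3
# (suitably scaled) is supplied alongside n itself.
n = torch.tensor([[256.0], [512.0], [1024.0]])
complexity = (n.squeeze(-1) ** 3) / 1e9            # scaled complexity feature
model = AugmentedPerfNet(n_size_features=1)
predicted_runtime = model(n / 1024.0, complexity)  # normalized size feature
print(predicted_runtime.shape)                     # torch.Size([3, 1])

With one size feature and a hidden width of 8, this toy network has 33 trainable weights, within the sub-75-parameter budget described above.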

AREAS OF INTEREST:

Task Mapping, Heterogeneous Architecture, Reinforcement Learning, Lightweight Augmented Neural Network


RECENT PUBLICATIONS:

Disclaimer: The following papers may have copyright restrictions. Downloads must adhere to these restrictions, and the papers may not be reposted without explicit permission from the copyright holder. Any opinions, findings, and conclusions or recommendations expressed in these materials are those of the author(s) and do not necessarily reflect the views of the sponsors, including the National Science Foundation (NSF), the Defense Advanced Research Projects Agency (DARPA), and any other sponsors listed in the publications.

  1. Zhang, Naifeng; Srivastava, Ajitesh; Kannan, Rajgopal; Prasanna, Viktor K., GenMAT: A General-Purpose Machine Learning-driven Auto-Tuner for Heterogeneous Platforms, Workshop on Programming Environments for Heterogeneous Computing (PEHC), 2021 (GitHub repository)
  2. Srivastava, Ajitesh; Zhang, Naifeng; Kannan, Rajgopal; Prasanna, Viktor K., Towards High Performance, Portability, and Productivity: Lightweight Augmented Neural Networks for Performance Prediction, 27th IEEE International Conference on High Performance Computing, Data, and Analytics (HiPC), 2020 (GitHub repository)
  3. Wang, Ta-Yang; Srivastava, Ajitesh; Prasanna, Viktor K., A Framework for Task Mapping onto Heterogeneous Platforms, IEEE High Performance Extreme Computing Conference (HPEC), 2020 (GitHub repository)