
Machine Learning Algorithm & Hardware Co-design

Our current machine learning (ML) projects focus on training energy-efficient neural networks, including CNNs, SNNs, RNNs, and self-attention models, and on co-designing the associated hardware implementations. We study ML applications in a variety of computer vision tasks, including 2D and 3D image classification, object detection, and tracking, as well as private inference. Our group regularly publishes in top-tier computer vision, machine learning, computer architecture, and design automation conferences and journals that focus on the boundary between hardware and algorithms. We are part of the Hardware Accelerated Learning group at USC and collaborate with ISI’s ASIC group on new devices and circuits for machine learning.

List of group members (Ph.D. students and alumni): Souvik Kundu (grad. 2022), Gourav Datta (grad. 2023), Yuke Zhang, Xuan Zhou, Robert Aviles, and Sreetama Sarkar

List of publications:

      1. [ICLR 2024] Z. Liu*, G. Datta*, A. Li, P. A. Beerel. “LMUFormer: Low Complexity Yet Powerful Spiking Model With Legendre Memory Units” (accepted)
      2. [ICLR 2024] G. Datta, Z. Liu, P. A. Beerel. “Bridging the Gap between Binary Neural Networks and Spiking Neural Networks for Efficient Computer Vision” (accepted)
      3. [ICASSP 2024] S. Kundu, R.-J. Zhu, A. Jaiswal, P. A. Beerel. “Recent Advances in Scalable Energy-Efficient and Trustworthy Spiking Neural Networks: From Algorithms to Technology” (special session paper, accepted)
      4. [ICASSP 2024] G. Datta, Z. Liu, P. A. Beerel. “Training Ultra-Low-Latency Spiking Neural Networks from Scratch” (special session paper, accepted)
      5. [ICASSP 2024] C. Li, D. Chen, Y. Zhang, P. A. Beerel. “Mitigate Replication and Copying in Diffusion Models with Generalized Caption and Dual Fusion Enhancement” (accepted)
      6. [GOMACTech 2024] M. Kaiser, G. Datta, P. A. Beerel, A. Jacob, A. Jaiswal. “Neuromorphic-P2M: Processing-in-Pixel-in-Memory Paradigm for Neuromorphic Image Sensors”
      7. [ICRC 2023] Z. Yin, G. Datta, M. Kaiser, P. A. Beerel, A. Jacob, A. Jaiswal. “Design Considerations for 3D Heterogeneous Integration Driven Analog Processing-in-Pixel for Extreme-Edge Intelligence”
      8. [ICCAD 2023] Y. Zhang, D. Chen, S. Kundu, C. Li, P. A. Beerel. “RNA-ViT: Reduced-Dimension Approximate Normalized Attention Vision Transformers for Latency Efficient Private Inference”
      9. [ICCV 2023] Y. Zhang*, D. Chen*, S. Kundu*, C. Li, P. A. Beerel. “SAL-ViT: Towards Latency Efficient Private Inference on ViT using Selective Attention Search with a Learnable Softmax Approximation”
      10. [ISLPED 2023] G. Datta, H. Deng, R. Aviles, Z. Liu, P. A. Beerel. “Bridging the Gap between Spiking Neural Networks & LSTMs for Latency & Energy Efficiency”
      11. [DAC 2023] Y. Zhang, D. Chen, S. Kundu, H. Liu, R. Peng, P. A. Beerel. “C2PI: An Efficient Crypto-Clear Two-Party Neural Network Private Inference”
      12. [CVPR Workshop on Efficient Deep Learning for Computer Vision 2023] S. Kundu, Y. Zhang, D. Chen, P. A. Beerel. “Making Models Shallow Again: Jointly Learning to Reduce Non-Linearity and Depth for Latency-Efficient Private Inference” (oral presentation)
      13. [GLSVLSI 2023] M. Kaiser, G. Datta, S. Sarkar, S. Kundu, Z. Yin, M. Garg, A. P. Jacob, P. A. Beerel, A. R. Jaiswal. “Technology-Circuit-Algorithm Tri-Design for Processing-in-Pixel-in-Memory (P2M)” (invited paper)
      14. [Frontiers in Neuroinformatics 2023] M. Kaiser, G. Datta, Z. Wang, A. Jacob, P. A. Beerel, A. Jaiswal. “Neuromorphic-P2M: Processing-in-Pixel-in-Memory Paradigm for Neuromorphic Image Sensors” (accepted)
      15. [ICLR 2023] S. Kundu, J. Liu, S. Lu, P. A. Beerel. “Learning to Linearize Deep Neural Networks for Secure and Efficient Private Inference”
      16. [WACV 2023] G. Datta, Z. Liu, Z. Lin, A. Jaiswal, and P. A. Beerel. “Enabling ISP-less Low-Power Computer Vision”
      17. [WACV 2023] F. Chen*, G. Datta*, S. Kundu, and P. A. Beerel. “Self-Attentive Pooling for Efficient Deep Learning”
      18. [WACV 2023] S. Kundu, S. Sundaresan, M. Pedram, P. A. Beerel. “FLOAT: Fast Learnable Once-for-All Adversarial Training for Tunable Trade-off between Accuracy and Robustness”
      19. [ICASSP 2023] S. Kundu, S. Sundaresan, S. N. Sridhar, S. Lu, H. Tang, P. A. Beerel. “Sparse Mixture Once-for-All Adversarial Training for Efficient In-Situ Trade-off between Accuracy and Robustness of DNNs”
      20. [ICASSP 2023] H. Wang, C. Imes, S. Kundu, P. A. Beerel, S. P. Crago, J. P. Walters. “QuantPipe: Applying Adaptive Post-Training Quantization for Distributed Transformer Pipelines in Dynamic Edge Environments”
      21. [ICASSP 2023] G. Datta, Z. Liu, M. Kaiser, S. Kundu, J. Mathai, Z. Yin, A. P. Jacob, A. R. Jaiswal, P. A. Beerel. “In-Sensor & Neuromorphic Computing Are All You Need for Energy Efficient Computer Vision”
      22. [VLSI-SoC 2022] G. Datta*, S. Kundu*, Z. Yin*, J. Mathai, Z. Liu, Z. Wang, M. Tian, S. Lu, R. T. Lakkireddy, A. Schmidt, W. Abd-Almageed, A. P. Jacob, A. Jaiswal, P. A Beerel. “P2M-DeTrack: Processing-in-Pixel-in-Memory for Energy-efficient and Real-Time Multi-Object Detection and Tracking” [Nominated for the Best Paper Award]
      23. [ECCV Workshop on Distributed Smart Cameras 2022] G. Datta, Z. Yin, A. P. Jacob, A. Jaiswal, P. A. Beerel. “Towards Energy-Efficient Hyperspectral Image Processing inside Camera Pixels”
      24. [Nature Scientific Reports 2022] G. Datta*, S. Kundu*, Z. Yin*, R. T. Lakkireddy, P. A. Beerel, A. Jacob, A. R. Jaiswal. “P2M: A Processing-in-Pixel-in-Memory Paradigm for Resource-Constrained TinyML Applications”
      25. [Frontiers in Neuroscience 2022] G. Datta, S. Kundu, A. Jaiswal, P. A. Beerel. “ACE-SNN: Algorithm-Hardware Co-Design of Energy-efficient & Low-Latency Deep Spiking Neural Networks for 3D Image Recognition”
      26. [ACM Trans. on Embedded Computing Systems 2022] S. Kundu, Y. Fu, B. Ye, P. A. Beerel, M. Pedram. “Towards Adversary Aware Non-Iterative Model Pruning Through Dynamic Network Rewiring of DNNs”
      27. [DATE 2022] S. Kundu, S. Wang, Q. Sun, P. A. Beerel, M. Pedram, “BMPQ: Bit-Gradient Sensitivity-Driven Mixed-Precision Quantization of DNNs from Scratch”.
      28. [NeurIPS 2021] S. Kundu, Q. Sun, Y. Fu, M. Pedram, P. A. Beerel, “Analyzing the Confidentiality of Undistillable Teachers in Knowledge Distillation” (initial work accepted at CVPR Workshop 2021).
      29. [ICCV 2021] S. Kundu, M. Pedram, P. A. Beerel, “HIRE-SNN: Harnessing the Inherent Robustness of Deep Spiking Neural Networks by Training with Crafted Input Noise”.
      30. [IJCNN 2021] G. Datta, S. Kundu, P. A. Beerel. “Training Energy-Efficient Deep Spiking Neural Networks with Single-Spike Hybrid Input Encoding”
      31. [ICASSP 2021] S. Kundu, S. Sundaresan, “AttentionLite: Towards Efficient Self-Attention Models for Vision”.
      32. [WACV 2021] S. Kundu, G. Datta, M. Pedram, P. A. Beerel. “Spike-Thrift: Towards Energy-Efficient Deep Spiking Neural Networks by Limiting Spiking Activity via Attention-Guided Compression”
      33. [ASP-DAC 2021] S. Kundu, M. Nazemi, P. A. Beerel, M. Pedram, “DNR: A Tunable Robust Pruning Framework Through Dynamic Network Rewiring of DNNs”.
      34. [IEEE Trans. on Computers 2020] S. Kundu, M. Nazemi, M. Pedram, K. M. Chugg, P. A. Beerel, “Pre-defined Sparsity for Low-Complexity Convolutional Neural Networks”.
      35. [Allerton 2019] S. Kundu*, S. Prakash*, H. Akrami, P. A. Beerel, K. M. Chugg, “pSConv: A Pre-defined Sparse Kernel Based Convolution for Deep CNNs”.
      36. [ISVLSI 2019] S. Kundu*, A. Fayyazi*, S. Nazarian, P. A. Beerel, M. Pedram. “CSrram: Area-Efficient Low-Power Ex-Situ Training Framework for Memristive Neuromorphic Circuits Based on Clustered Sparsity”

    arXiv Preprints:

      1. S. Kundu, G. Datta, M. Pedram, P. A. Beerel, “Towards Low-Latency Energy-Efficient Deep SNNs via Attention-Guided Compression”, 2021.
      2. S. Kundu, H. Mostafa, S. Sridhar, S. Sundaresan, “Attention-based Image Upsampling”, 2020.