SmaQ: Smart quantization for DNN training by exploiting value clustering

Published in IEEE Computer Architecture Letters, 2021

Citation: Nima Shoghi, Andrei Bersatti, Moinuddin Qureshi, and Hyesoon Kim, "SmaQ: Smart Quantization for DNN Training by Exploiting Value Clustering," IEEE Computer Architecture Letters, 2021. https://doi.org/10.1109/LCA.2021.3108505

Introduces SmaQ, a smart quantization scheme that exploits the approximately normal distribution of neural network data structures to reduce memory usage during training by up to 6.7x with only minor losses in accuracy.
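
As a rough illustration of the idea (a minimal sketch in the spirit of SmaQ, not the paper's exact algorithm), the snippet below quantizes a tensor relative to its own mean and standard deviation: values are standardized, clipped to a few sigmas (where nearly all the mass of a normal distribution lies), and stored as low-bit integers. The 8-bit width and 3-sigma clipping range here are illustrative assumptions.

```python
import numpy as np

def quantize(x: np.ndarray, bits: int = 8, clip_sigmas: float = 3.0):
    """Quantize a tensor by exploiting its roughly normal value distribution.

    Standardizes x with its mean/std, clips to +/- clip_sigmas standard
    deviations, and maps the result to signed integers of the given width.
    """
    mu, sigma = x.mean(), x.std()
    z = np.clip((x - mu) / (sigma + 1e-12), -clip_sigmas, clip_sigmas)
    scale = (2 ** (bits - 1) - 1) / clip_sigmas
    q = np.round(z * scale).astype(np.int8 if bits <= 8 else np.int16)
    return q, mu, sigma, scale

def dequantize(q, mu, sigma, scale):
    """Reconstruct a floating-point approximation of the original tensor."""
    return q.astype(np.float32) / scale * sigma + mu

# Example: 8-bit quantization of normally distributed activations.
x = np.random.randn(1024).astype(np.float32)
q, mu, sigma, scale = quantize(x, bits=8)
x_hat = dequantize(q, mu, sigma, scale)
print("max abs error:", np.abs(x - x_hat).max())
```

Storing the per-tensor statistics alongside the low-bit values is what lets the reconstruction stay close to the original while cutting the memory footprint of each stored value.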
