Alistarh Group
Deep Algorithms and Systems Lab (DASLab)
Artificial Intelligence has made massive progress over the past decade, with breakthroughs across a wide range of applications and tasks. Yet the sustainability of this pace of progress is in question: the computation required to train and deploy state-of-the-art AI models has been rising exponentially, which threatens to slow innovation and to concentrate expertise and economic benefits in the hands of a few.
The Alistarh group works to remove these barriers to the democratization of AI by creating training and inference algorithms that are significantly more efficient than conventional ones. To this end, we develop new algorithms for learning over compressed (e.g., sparse or quantized) representations, together with efficient systems implementations that realize these compression gains in practice.
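To give a flavor of what a quantized representation looks like, the sketch below shows standard round-to-nearest 8-bit weight quantization in Python with NumPy. It is a generic textbook example for illustration only, not one of the group's algorithms, and the function names are our own.

# Minimal sketch of symmetric round-to-nearest int8 quantization.
# Illustrative only; not the group's actual compression methods.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float weights to int8 codes plus a per-tensor scale."""
    scale = np.abs(weights).max() / 127.0  # largest magnitude maps to +/-127
    codes = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return codes, scale

def dequantize(codes: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the compressed form."""
    return codes.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
codes, scale = quantize_int8(w)
print("max reconstruction error:", np.abs(w - dequantize(codes, scale)).max())

Storing int8 codes instead of float32 weights cuts memory by roughly 4x; the research challenge the group addresses is keeping accuracy high under such compression, at much lower bit-widths, and exploiting it for speed in real systems.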
Team
Current Projects
Efficient training and inference for massive AI models
Large-scale distributed machine learning
Adaptive concurrent data structures
Fundamental limits of distributed computation
Publications
Frantar E. 2024. Compressing large neural networks: Algorithms, systems and scaling laws. Institute of Science and Technology Austria.
Markov I. 2024. Communication-efficient distributed training of deep neural networks: An algorithms and systems perspective. Institute of Science and Technology Austria.
Egiazarian V, Panferov A, Kuznedelev D, Frantar E, Babenko A, Alistarh D-A. 2024. Extreme compression of large language models via additive quantization. Proceedings of the 41st International Conference on Machine Learning. ICML: International Conference on Machine Learning, PMLR, vol. 235, 12284–12303.
Nikdan M, Tabesh S, Crncevic E, Alistarh D-A. 2024. RoSA: Accurate parameter-efficient fine-tuning via robust adaptation. Proceedings of the 41st International Conference on Machine Learning. ICML: International Conference on Machine Learning, PMLR, vol. 235, 38187–38206.
Moakhar AS, Iofinova EB, Frantar E, Alistarh D-A. 2024. SPADE: Sparsity-guided debugging for deep neural networks. Proceedings of the 41st International Conference on Machine Learning. ICML: International Conference on Machine Learning, PMLR, vol. 235, 45955–45987.
Career
Since 2022 Professor, Institute of Science and Technology Austria (ISTA)
2017 – 2022 Assistant Professor, Institute of Science and Technology Austria (ISTA)
2016 – 2017 “Ambizione Fellow”, Computer Science Department, ETH Zurich, Switzerland
2014 – 2016 Researcher, Microsoft Research, Cambridge, UK
2014 – 2016 Morgan Fellow, Downing College, University of Cambridge, UK
2012 – 2013 Postdoc, Massachusetts Institute of Technology, Cambridge, USA
2012 PhD, EPFL, Lausanne, Switzerland
Selected Distinctions
2023 ERC Proof of Concept Grant
2018 ERC Starting Grant
2015 Awarded Swiss National Science Foundation “Ambizione” Fellowship
2014 Elected Morgan Fellow at Downing College, University of Cambridge
2012 Postdoctoral Fellowship of the Swiss National Science Foundation
2011 Best Paper Award at the International Conference on Distributed Computing and Networking