Alistarh Group

Distributed Algorithms and Systems

Distribution has been one of the key trends in computing over the last decade: processor architectures are multi-core, while large-scale systems for machine learning and data processing can be distributed across several machines or even data centers. The Alistarh group works to enable these applications by creating algorithms that scale—that is, they improve their performance when more computational units are available.


This fundamental shift to distributed computing puts forward exciting open questions: How do we design algorithms to extract every last bit of performance from the current generation of architectures? How do we design future architectures to support more scalable algorithms? Are there clean abstractions that make high-performance distribution accessible to programmers? The group’s research focuses on answering these questions. In particular, they are interested in designing efficient, practical algorithms for fundamental problems in distributed computing, in understanding the inherent limitations of distributed systems, and in developing new ways to overcome these limitations. One particular area of focus over the past few years has been distributed machine learning.
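To make the notion of a scalable algorithm concrete, here is a minimal, hedged sketch (illustrative only, not the group's code) of the data-parallel pattern at the heart of distributed machine learning: each worker computes a gradient on its own data shard, and an all-reduce-style average over equal-sized shards recovers exactly the full-data gradient, so adding workers splits the work without changing the result.

```python
# Toy illustration of data-parallel gradient averaging. The loss is the
# mean squared error L(w) = mean_i (w - x_i)^2; worker names and data
# are hypothetical.

def local_gradient(w, shard):
    # Gradient of the mean squared loss over one worker's data shard.
    return sum(2 * (w - x) for x in shard) / len(shard)

def allreduce_average(grads):
    # Stand-in for an all-reduce collective: average the workers' gradients.
    return sum(grads) / len(grads)

data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
w = 0.5
shards = [data[0:4], data[4:8]]  # two workers, equal-sized shards

g_workers = allreduce_average([local_gradient(w, s) for s in shards])
g_full = local_gradient(w, data)
assert abs(g_workers - g_full) < 1e-12  # averaging shard gradients = full gradient
```

In practice the averaging step is a network collective, and much of the group's published work targets exactly that bottleneck, e.g. via communication compression.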




Team


Jiale Chen

PhD Student


Alexander Fedorov

PhD Student


Eugenia Iofinova

PhD Student



Andrej Jovanovic

Scientific Intern


Eldar Kurtic

Research Technician, Machine Learning

+43 2243 9000 2081


Roberto Lopez Castro

Predoctoral Visiting Scientist




Teodor-Alexandru Szente

Predoctoral Visiting Scientist


Current Projects

Efficient Training and Inference for Massive Models | Distributed machine learning | Concurrent data structures and applications | Molecular computation
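One recurring theme across these projects is model compression. As a hedged, self-contained sketch of the basic idea (simple uniform rounding; the group's published methods, such as additive quantization in the ICML 2024 paper below, are far more sophisticated): round each weight to one of 2^b evenly spaced levels, storing b bits per weight instead of 32, with reconstruction error bounded by half a quantization step.

```python
# Toy uniform quantization of a weight vector; all values here are
# hypothetical examples, not from any real model.

def quantize(weights, bits=4):
    lo, hi = min(weights), max(weights)
    levels = (1 << bits) - 1                      # number of steps between levels
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = [round((w - lo) / scale) for w in weights]  # b-bit integer codes
    return codes, lo, scale

def dequantize(codes, lo, scale):
    # Reconstruct approximate weights from the integer codes.
    return [lo + c * scale for c in codes]

w = [0.12, -0.5, 0.33, 0.9, -0.07]
codes, lo, scale = quantize(w, bits=4)
w_hat = dequantize(codes, lo, scale)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
assert max_err <= scale / 2 + 1e-12  # rounding error is at most half a step
```

The research challenge the projects above address is pushing such schemes to extreme bit-widths (2-4 bits) for massive models while preserving accuracy.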


Publications

Frantar E. 2024. Compressing large neural networks : Algorithms, systems and scaling laws. Institute of Science and Technology Austria. View

Markov I. 2024. Communication-efficient distributed training of deep neural networks: An algorithms and systems perspective. Institute of Science and Technology Austria. View

Egiazarian V, Panferov A, Kuznedelev D, Frantar E, Babenko A, Alistarh D-A. 2024. Extreme compression of large language models via additive quantization. Proceedings of the 41st International Conference on Machine Learning. ICML: International Conference on Machine Learning, PMLR, vol. 235, 12284–12303. View

Nikdan M, Tabesh S, Crncevic E, Alistarh D-A. 2024. RoSA: Accurate parameter-efficient fine-tuning via robust adaptation. Proceedings of the 41st International Conference on Machine Learning. ICML: International Conference on Machine Learning, PMLR, vol. 235, 38187–38206. View

Moakhar AS, Iofinova EB, Frantar E, Alistarh D-A. 2024. SPADE: Sparsity-guided debugging for deep neural networks. Proceedings of the 41st International Conference on Machine Learning. ICML: International Conference on Machine Learning, PMLR, vol. 235, 45955–45987. View

View All Publications

ReX-Link: Dan Alistarh


Career

Since 2017 Assistant Professor, Institute of Science and Technology Austria (ISTA)
2016 – 2017 “Ambizione Fellow”, Computer Science Department, ETH Zurich, Switzerland
2014 – 2016 Researcher, Microsoft Research, Cambridge, UK
2014 – 2016 Morgan Fellow, Downing College, University of Cambridge, UK
2012 – 2013 Postdoc, Massachusetts Institute of Technology, Cambridge, USA
2012 PhD, EPFL, Lausanne, Switzerland


Selected Distinctions

2023 ERC Proof of Concept Grant
2018 ERC Starting Grant
2015 Awarded Swiss National Foundation “Ambizione” Fellowship
2014 Elected Morgan Fellow at Downing College, University of Cambridge
2012 Postdoctoral Fellowship of the Swiss National Foundation
2011 Best Paper Award at the International Conference on Distributed Computing and Networking


Additional Information

Dan Alistarh’s website


