
Publications

2022


N. Konstantinov, C. H. Lampert. Fairness-Aware PAC Learning from Corrupted Data.
JMLR 23 (2022) 1-60
Paper


2021

AC/DC: Alternating Compressed/Decompressed Training of Deep Neural Networks
NeurIPS 2021
Peste, Iofinova, Vladu, Alistarh

Distributed Principal Component Analysis with Limited Communication
NeurIPS 2021
Alimisis, Davies, Vandereycken, Alistarh

When Are Solutions Connected in Deep Networks?
NeurIPS 2021
Nguyen, Bréchet, Mondelli

M-FAC: Efficient Matrix-Free Approximations of Second-Order Information
NeurIPS 2021
Frantar, Kurtic, Alistarh

The Inductive Bias of ReLU Networks on Orthogonally Separable Data
ICLR 2021
Phuong, Lampert
Project Paper

Byzantine-Resilient Non-Convex Stochastic Gradient Descent
ICLR 2021
Allen-Zhu, Ebrahimian, Li, Alistarh
Project Paper

Towards Tight Communication Lower Bounds for Distributed Optimization
NeurIPS 2021
Korhonen, Alistarh
Project Paper

Fully-Asynchronous Decentralized SGD with Quantized and Local Updates
NeurIPS 2021
Nadiradze, Sabour, Davies, Li, Alistarh
Project Paper

PCA Initialization for Approximate Message Passing in Rotationally Invariant Models
NeurIPS 2021
Mondelli, Venkataramanan
Paper

Approximate Message Passing with Spectral Initialization for Generalized Linear Models
AISTATS 2021
Mondelli, Venkataramanan
Paper

Tight Bounds on the Smallest Eigenvalue of the Neural Tangent Kernel for Deep ReLU Networks
ICML 2021
Nguyen, Mondelli, Montufar
Paper

One-sided Frank-Wolfe algorithms for saddle problems
ICML 2021
Kolmogorov, Pock
Paper

Communication-Efficient Distributed Optimization with Quantized Preconditioners
ICML 2021
Alimisis, Davies, Alistarh
Project Paper

Genomic architecture and prediction of censored time-to-event phenotypes with a Bayesian genome-wide analysis
Nature Communications 2021
Ojavee, Robinson
Project Paper

Parallelism versus Latency in Simplified Successive-Cancellation Decoding of Polar Codes
ISIT 2021
Hashemi, Mondelli, Fazeli, Vardy, Cioffi, Goldsmith
Paper

Sparse Multi-Decoder Recursive Projection Aggregation for Reed-Muller Codes
ISIT 2021
Fathollahi, Farsad, Hashemi, Mondelli
Paper

New Bounds For Distributed Mean Estimation and Variance Reduction
ICLR 2021
Davies, Gurunanthan, Moshrefi, Ashkboos, Alistarh
Project Paper

Elastic Consistency: A Practical Consistency Model for Distributed Stochastic Gradient Descent
AAAI 2021
Nadiradze, Markov, Chatterjee, Kungurtsev, Alistarh
Project Paper

Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks
JMLR 2021
Hoefler, Alistarh, Ben-Nun, Dryden, Peste
Project Paper

Sublinear Latency for Simplified Successive Cancellation Decoding of Polar Codes
IEEE Transactions on Wireless Communications 2021
Mondelli, Hashemi, Cioffi, Goldsmith
Paper