# Efficient Distributed Learning with Sparsity

### Abstract

We propose a novel, efficient approach for distributed sparse learning with observations randomly partitioned across machines. In each round of the proposed method, worker machines compute the gradient of the loss on local data and the master machine solves a shifted $\ell_1$ regularized loss minimization problem. After a number of communication rounds that scales only logarithmically with the number of machines and is independent of other problem parameters, the proposed approach provably matches the estimation error bound of centralized methods.
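To make the protocol concrete, here is a minimal sketch of one communication round, assuming a squared-error loss and a proximal gradient (ISTA) solver for the master's shifted $\ell_1$ subproblem. The function names, the step size, the solver choice, and the particular shift construction (local gradient minus averaged global gradient) are illustrative assumptions for exposition, not the paper's implementation.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def local_gradient(X, y, beta):
    # Gradient of the least-squares loss (1/2n)||y - X beta||^2 on one machine
    n = X.shape[0]
    return X.T @ (X @ beta - y) / n

def solve_shifted_lasso(X1, y1, shift, lam, beta0, step, n_iter=500):
    # Master step: minimize L_1(beta) - <shift, beta> + lam * ||beta||_1
    # via proximal gradient (ISTA), where L_1 is the master's local loss.
    beta = beta0.copy()
    for _ in range(n_iter):
        g = local_gradient(X1, y1, beta) - shift
        beta = soft_threshold(beta - step * g, step * lam)
    return beta

def distributed_sparse_round(data, beta, lam, step):
    # One communication round: every worker sends its local gradient at the
    # current iterate; the master forms the shift and re-solves locally.
    # (Illustrative sketch; the shift construction is an assumption.)
    X1, y1 = data[0]                                   # master's local data
    grads = [local_gradient(X, y, beta) for X, y in data]
    global_grad = np.mean(grads, axis=0)               # one vector per machine
    shift = grads[0] - global_grad                     # gradient-correction term
    return solve_shifted_lasso(X1, y1, shift, lam, beta, step)
```

Repeating `distributed_sparse_round` a small number of times mimics the logarithmic communication budget: each round transmits only one gradient vector per machine, and the shift steers the master's local objective toward the global one.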

Published in *Proceedings of the 34th International Conference on Machine Learning* (ICML 2017).
##### Jialei Wang
###### PhD (2013-2018)

Jialei received his PhD in Computer Science at the University of Chicago in June 2019. His advisors were Nathan Srebro and Mladen Kolar.