2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Technical Program

Paper Detail

Paper ID MLSP-18.5
Paper Title SPARSE GRAPH BASED SKETCHING FOR FAST NUMERICAL LINEAR ALGEBRA
Authors Dong Hu, Rensselaer Polytechnic Institute, United States; Shashanka Ubaru, IBM, United States; Alex Gittens, Rensselaer Polytechnic Institute, United States; Kenneth Clarkson, Lior Horesh, Vassilis Kalantzis, IBM, United States
Session MLSP-18: Matrix Factorization and Applications
Location Gather.Town
Session Time: Wednesday, 09 June, 14:00 - 14:45
Presentation Time: Wednesday, 09 June, 14:00 - 14:45
Presentation Poster
Topic Machine Learning for Signal Processing: [MLR-MFC] Matrix factorizations/completion
Abstract In recent years, a variety of randomized constructions of sketching matrices have been devised and used in fast algorithms for numerical linear algebra problems such as least squares regression, low-rank approximation, and the approximation of leverage scores. A key property of sketching matrices is that of subspace embedding. In this paper, we study sketching matrices obtained from sparse bipartite graphs, i.e., graphs whose left degree $s$ is small. In particular, we explore two popular classes of sparse graphs, namely expander graphs and magical graphs. For a given subspace $U \subseteq \mathbb{R}^n$ of dimension $k$, we show that the magical graph with left degree $s = 2$ yields a $(1 \pm \epsilon)$ $\ell_2$-subspace embedding for $U$ if the number of right vertices (the sketch size) is $m = O(k^2/\epsilon^2)$. The expander graph with $s = O(\log k/\epsilon)$ yields a subspace embedding for $m = O(k \log k/\epsilon^2)$. We also discuss the construction of sparse sketching matrices with reduced randomness using expanders based on error-correcting codes. Empirical results on various synthetic and real datasets show that these sparse graph sketching matrices work very well in practice.
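
As a rough illustration of the idea described in the abstract (not the paper's exact magical/expander construction), the sketch below builds an m x n sparse sketching matrix from a random bipartite graph in which every column has s nonzero entries of value +/- 1/sqrt(s) placed at uniformly random rows, and applies it to a sketched least-squares problem. The neighbor selection, the +/- 1/sqrt(s) scaling, and all parameter choices (n, k, m, s) are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import csc_matrix
from numpy.linalg import lstsq

def sparse_graph_sketch(n, m, s, rng):
    """Return an m x n sketching matrix from a random sparse bipartite graph:
    each of the n left vertices (columns) connects to s distinct right
    vertices (rows), with entries +/- 1/sqrt(s).  This mimics a degree-s
    graph sketch but with uniformly random neighbors, not an explicit
    magical/expander construction."""
    rows = np.concatenate([rng.choice(m, size=s, replace=False) for _ in range(n)])
    cols = np.repeat(np.arange(n), s)
    vals = rng.choice([-1.0, 1.0], size=n * s) / np.sqrt(s)
    return csc_matrix((vals, (rows, cols)), shape=(m, n))

rng = np.random.default_rng(0)
n, k = 10000, 20                       # tall-and-skinny least-squares problem
A = rng.standard_normal((n, k))
b = A @ rng.standard_normal(k) + 0.01 * rng.standard_normal(n)

s = 2                                  # left degree, as for the magical graph
m = 4 * k**2                           # sketch size on the order of k^2 / eps^2
S = sparse_graph_sketch(n, m, s, rng)

x_exact, *_ = lstsq(A, b, rcond=None)          # solve the full problem
x_sketch, *_ = lstsq(S @ A, S @ b, rcond=None)  # solve the sketched problem
print(np.linalg.norm(x_exact - x_sketch) / np.linalg.norm(x_exact))
```

With left degree s = 2 and a sketch size on the order of k^2, the sketched solution should typically agree closely with the exact least-squares solution, in line with the subspace-embedding guarantee quoted above; because S has only s nonzeros per column, forming S @ A costs roughly s * n * k operations rather than m * n * k for a dense sketch.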