2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information



Paper Detail

Paper ID: SPTM-15.2
Paper Title: LEARNING SPARSE GRAPH LAPLACIAN WITH K EIGENVECTOR PRIOR VIA ITERATIVE GLASSO AND PROJECTION
Authors: Saghar Bagheri, Gene Cheung, York University, Canada; Antonio Ortega, University of Southern California, United States; Fen Wang, Xidian University, China
Session: SPTM-15: Graph Topology Inference and Clustering
Location: Gather.Town
Session Time: Thursday, 10 June, 14:00 - 14:45
Presentation Time: Thursday, 10 June, 14:00 - 14:45
Presentation: Poster
Topic: Signal Processing Theory and Methods: [SIPG] Signal and Information Processing over Graphs
Abstract: Learning a suitable graph is an important precursor to many graph signal processing (GSP) pipelines, such as graph spectral signal compression and denoising. Previous graph learning algorithms either i) make no assumptions on connectivity beyond graph sparsity, or ii) make simple edge assumptions such as positive edges only. In this paper, given an empirical covariance matrix computed from data as input, we consider a structural assumption on the precision matrix directly: the first K eigenvectors of the precision matrix P are pre-selected or optimized based on domain-specific criteria such as computational constraints, and the remaining eigenvectors are then learned from data. We first prove that the subspace of real positive semi-definite matrices with the first K eigenvectors being {u_k}, in a defined Hilbert space, is a convex cone. We then construct an efficient projection operator, inspired by the Gram-Schmidt procedure, to project a given matrix P onto the defined convex cone. Finally, we design an efficient hybrid graphical lasso/projection algorithm to compute the most suitable precision matrix P given an empirical covariance matrix. Experimental results show that, given the first K eigenvectors as a prior, our algorithm outperforms competing graph learning schemes noticeably on a variety of graph comparison metrics.
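The core operation described in the abstract, projecting a symmetric matrix onto the set of PSD matrices whose eigenvectors include K prescribed vectors {u_k}, can be sketched as below. This is a minimal illustrative projection, not the authors' exact Gram-Schmidt-based operator: it keeps the eigen-components along each prescribed u_k (with eigenvalues clipped at zero for positive semi-definiteness), adds the PSD part of P restricted to the orthogonal complement of span{u_k}, and discards cross terms. The function name and the use of an SVD to build the complement basis are implementation choices assumed here.

```python
import numpy as np

def project_to_eigvec_cone(P, U):
    """Illustrative projection of symmetric P (n x n) onto PSD matrices
    whose eigenvectors include the orthonormal columns of U (n x K).

    Assumed construction (not the paper's exact operator):
      - keep components lam_k = u_k^T P u_k along each u_k, clipped at 0;
      - keep the PSD part of P restricted to the complement of span(U);
      - drop cross terms between span(U) and its complement.
    """
    n, K = U.shape
    # Eigenvalues along the prescribed eigenvectors, clipped to stay PSD.
    lam = np.maximum(np.einsum('ik,ij,jk->k', U, P, U), 0.0)
    part1 = (U * lam) @ U.T  # sum_k lam_k u_k u_k^T
    # Orthonormal basis Q for the orthogonal complement of span(U):
    # the last n-K left singular vectors of U span that complement.
    W = np.linalg.svd(U, full_matrices=True)[0]
    Q = W[:, K:]
    # Restrict P to the complement, then clip its eigenvalues at 0.
    B = Q.T @ P @ Q
    w, V = np.linalg.eigh((B + B.T) / 2.0)
    part2 = Q @ (V * np.maximum(w, 0.0)) @ V.T @ Q.T
    return part1 + part2
```

By construction, each u_k satisfies P_proj u_k = lam_k u_k (the complement part annihilates u_k), so the prescribed vectors are exact eigenvectors of the output, and both summands are PSD. In the paper's hybrid scheme, a step like this would alternate with a graphical-lasso update of the precision matrix until convergence.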