2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information


Paper Detail

Paper ID: MLSP-34.2
Paper Title: MULTI-TASK LEARNING VIA SHARING INEXACT LOW-RANK SUBSPACE
Authors: Xiaoqian Wang, Purdue University, United States; Feiping Nie, University of Texas at Arlington, United States
Session: MLSP-34: Subspace Learning and Applications
Location: Gather.Town
Session Time: Thursday, 10 June, 15:30 - 16:15
Presentation Time: Thursday, 10 June, 15:30 - 16:15
Presentation: Poster
Topic: Machine Learning for Signal Processing: [MLR-SBML] Subspace and manifold learning
Abstract: Multi-task learning algorithms enhance learning performance by exploiting the relations among multiple tasks. By pooling data from different yet related tasks, the tasks can benefit from each other through this joint learning mechanism. In this paper, we study the relations among multiple tasks by learning their shared common subspace. Previous works usually constrain the shared subspace to be low-rank, since the tasks are assumed to be intrinsically related. However, this constraint is too strict for real applications where noise exists. Instead, we propose to detect an inexact low-rank subspace, which provides an approximation of the low-rank subspace. This makes the learned multi-task parameter matrix more robust in the presence of noise. We use an alternating optimization algorithm to optimize the new objective, obtaining an algorithm with the same time complexity as single-task learning. We provide extensive empirical results on both synthetic and benchmark datasets to illustrate the superiority of our method over related multi-task learning methods. Our method shows clear robustness under high proportions of noise. Moreover, it offers a major advantage when few training data are available, which is important in practice, especially when acquiring more data involves arduous work.
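The abstract does not spell out the objective, so below is a minimal sketch of one common way to encode an "approximately low-rank" shared subspace with alternating optimization. The function name, the penalty form (pulling the parameter matrix W toward, rather than constraining it to, its best rank-r approximation), and all hyperparameters are illustrative assumptions, not the paper's actual formulation. Each alternating step reduces to per-task ridge regression plus one truncated SVD, consistent with the claimed single-task time complexity.

```python
import numpy as np

def inexact_low_rank_mtl(Xs, ys, rank=2, lam=1.0, n_iter=50, seed=0):
    """Alternating optimization for multi-task least squares whose
    parameter matrix W (d x T, one weight column per task) is pulled
    toward a rank-`rank` matrix rather than constrained to be low-rank.

    Objective (one possible 'inexact low-rank' formulation; the
    paper's exact objective may differ):
        sum_t ||X_t w_t - y_t||^2 + lam * ||W - M||_F^2,
        where M is the best rank-r approximation of W.
    """
    d, T = Xs[0].shape[1], len(Xs)
    rng = np.random.default_rng(seed)
    W = 0.01 * rng.standard_normal((d, T))
    # Precompute per-task ridge solvers: same per-task cost as
    # ordinary single-task regularized regression.
    solvers = [np.linalg.inv(X.T @ X + lam * np.eye(d)) for X in Xs]
    Xty = [X.T @ y for X, y in zip(Xs, ys)]
    for _ in range(n_iter):
        # Step 1: closed-form rank-r projection of W via truncated SVD.
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        M = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Step 2: per-task ridge update pulled toward its column of M.
        for t in range(T):
            W[:, t] = solvers[t] @ (Xty[t] + lam * M[:, t])
    return W

# Synthetic sanity check (illustrative): 8 tasks whose true weight
# vectors lie near a shared 2-dimensional subspace, plus noise.
rng = np.random.default_rng(1)
basis = rng.standard_normal((20, 2))
Xs, ys = [], []
for _ in range(8):
    X = rng.standard_normal((30, 20))
    w = basis @ rng.standard_normal(2) + 0.05 * rng.standard_normal(20)
    Xs.append(X)
    ys.append(X @ w + 0.1 * rng.standard_normal(30))
W_hat = inexact_low_rank_mtl(Xs, ys, rank=2, lam=5.0)
print(W_hat.shape)  # (20, 8): one estimated weight column per task
```

Because the low-rank structure enters only as a soft penalty, the learned W can deviate from the rank-r set when the data demand it, which is the intuition behind robustness to noise that the abstract describes.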