2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Paper Detail

Paper ID: IVMSP-22.6
Paper Title: TENSOR DECOMPOSITION VIA CORE TENSOR NETWORKS
Authors: Jianfu Zhang, Shanghai Jiao Tong University, China; Zerui Tao, Tokyo University of Agriculture and Technology, Japan; Liqing Zhang, Shanghai Jiao Tong University, China; Qibin Zhao, RIKEN AIP, Japan
Session: IVMSP-22: Image & Video Sensing, Modeling and Representation
Location: Gather.Town
Session Time: Thursday, 10 June, 14:00 - 14:45
Presentation Time: Thursday, 10 June, 14:00 - 14:45
Presentation: Poster
Topic: Image, Video, and Multidimensional Signal Processing: [IVSMR] Image & Video Sensing, Modeling, and Representation
IEEE Xplore: Open preview available in IEEE Xplore
Abstract: Tensor decomposition (TD) has shown promising performance in image completion and denoising. Existing methods typically decompose a single tensor into latent factors or core tensors by optimizing a cost function defined by a specific tensor model. Such algorithms iteratively optimize from a random initialization for each individual tensor, resulting in slow convergence and low efficiency. In this paper, we propose an efficient TD algorithm that learns a global mapping from input tensors to latent core tensors, under the assumption that the mappings of multiple tensors may be shared or highly correlated. To this end, we train a deep neural network (DNN) to model the global mapping and then apply it to decompose a newly given tensor with high efficiency. Furthermore, the initial values of the DNN are learned with meta-learning methods. By leveraging the pretrained core tensor DNN, the proposed method performs TD efficiently and accurately. Experimental results demonstrate significant improvements over other TD methods in both speed and accuracy.
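To make the amortized idea in the abstract concrete, below is a minimal PyTorch sketch of a network that maps an input tensor directly to a Tucker core and factor matrices and is trained across many tensors, so that a new tensor can be decomposed in a single forward pass. The Tucker model, MLP encoder, tensor sizes, and ranks are illustrative assumptions, not the authors' architecture, and the meta-learned initialization described in the paper is omitted.

```python
# Hypothetical sketch of amortized tensor decomposition (not the paper's code):
# an encoder maps a tensor X to a Tucker core G and factors U1, U2, U3, and is
# trained on a collection of tensors with a reconstruction loss.
import torch
import torch.nn as nn

I, J, K = 16, 16, 16          # input tensor size (assumed)
R1, R2, R3 = 4, 4, 4          # Tucker ranks (assumed)

class CoreTensorNet(nn.Module):
    """Maps a tensor X (I x J x K) to a Tucker core G and factors U1, U2, U3."""
    def __init__(self, hidden=256):
        super().__init__()
        out_dim = R1 * R2 * R3 + I * R1 + J * R2 + K * R3
        self.encoder = nn.Sequential(
            nn.Linear(I * J * K, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, X):                      # X: (batch, I, J, K)
        z = self.encoder(X.reshape(X.shape[0], -1))
        g, u1, u2, u3 = torch.split(
            z, [R1 * R2 * R3, I * R1, J * R2, K * R3], dim=1)
        return (g.reshape(-1, R1, R2, R3),
                u1.reshape(-1, I, R1),
                u2.reshape(-1, J, R2),
                u3.reshape(-1, K, R3))

def tucker_reconstruct(G, U1, U2, U3):
    """Multiply the core by each factor matrix along the corresponding mode."""
    X = torch.einsum('brst,bir->bist', G, U1)   # mode-1 product
    X = torch.einsum('bist,bjs->bijt', X, U2)   # mode-2 product
    X = torch.einsum('bijt,bkt->bijk', X, U3)   # mode-3 product
    return X

if __name__ == "__main__":
    net = CoreTensorNet()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    # Train on randomly generated low-rank tensors (stand-in training data).
    for step in range(200):
        G0 = torch.randn(32, R1, R2, R3)
        U = [torch.randn(32, d, r) for d, r in [(I, R1), (J, R2), (K, R3)]]
        X = tucker_reconstruct(G0, *U)
        loss = ((tucker_reconstruct(*net(X)) - X) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    # Decomposing a new tensor is now a single forward pass, rather than a
    # per-tensor iterative optimization from random initialization.
    X_new = tucker_reconstruct(torch.randn(1, R1, R2, R3),
                               torch.randn(1, I, R1),
                               torch.randn(1, J, R2),
                               torch.randn(1, K, R3))
    G, U1, U2, U3 = net(X_new)
```

In this sketch the fitting cost is paid once, during training of the shared mapping; per-tensor decomposition then requires no iterative optimization, which is the source of the speedup claimed in the abstract.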