2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information



Paper Detail

Paper ID: MLSP-7.4
Paper Title: KERNEL LEARNING WITH TENSOR NETWORKS
Authors: Kriton Konstantinidis, Shengxi Li, Danilo P. Mandic, Imperial College London, United Kingdom
Session: MLSP-7: Tensor Signal Processing
Location: Gather.Town
Session Time: Tuesday, 08 June, 14:00 - 14:45
Presentation Time: Tuesday, 08 June, 14:00 - 14:45
Presentation: Poster
Topic: Machine Learning for Signal Processing: [MLR-TNSR] Tensor-based signal processing
Abstract: The expressive power of Gaussian Processes (GPs) is largely attributed to their kernel function, which highlights the crucial role of kernel design. Efforts in this direction include modern Neural Network (NN) based kernel design, which, despite its success, suffers from a lack of interpretability and a tendency to overfit. To address this, we introduce a Tensor Network (TN) approach to learning kernel embeddings, with a TN serving to map the input to a low-dimensional manifold, where a suitable base kernel function can be applied. The proposed framework allows for joint learning of the TN and base kernel parameters using stochastic variational inference, while leveraging the low-rank regularization and multi-linear nature of TNs to boost model performance and provide enhanced interpretability. Performance evaluation within the regression paradigm against TNs and Deep Kernels demonstrates the potential of the framework, providing conclusive evidence for promising future extensions to other learning paradigms.
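The core idea in the abstract, a tensor network mapping the input to a low-dimensional embedding on which a base kernel is then applied, can be illustrated with a minimal sketch. The snippet below is NOT the authors' model: it assumes a simple CP-style (rank-R, multi-linear) tensor-network feature map with a hypothetical local feature lift phi(x_d) = [1, x_d], followed by an RBF base kernel; the function names and shapes are illustrative choices only, and the parameters would in practice be trained jointly with the kernel hyperparameters (e.g., via stochastic variational inference).

```python
import numpy as np

def tn_feature_map(x, weights):
    """CP-style tensor-network embedding (illustrative, not the paper's model).

    Each input feature x_d is lifted to a local feature vector
    phi(x_d) = [1, x_d], contracted with a per-feature factor matrix,
    and the per-feature results are multiplied elementwise -- the
    multi-linear contraction that gives the map its low-rank structure.

    x       : (D,) input vector
    weights : list of D factor matrices, each of shape (2, R)
    returns : (R,) low-dimensional embedding
    """
    z = np.ones(weights[0].shape[1])
    for d, W in enumerate(weights):
        phi = np.array([1.0, x[d]])   # hypothetical local feature lift
        z = z * (phi @ W)             # contract, then accumulate multiplicatively
    return z

def rbf_kernel(z1, z2, lengthscale=1.0):
    """Base RBF kernel applied in the embedding space."""
    diff = z1 - z2
    return np.exp(-0.5 * np.dot(diff, diff) / lengthscale**2)

# Toy usage: D = 3 input features, rank R = 4 embedding
rng = np.random.default_rng(0)
D, R = 3, 4
weights = [0.1 * rng.standard_normal((2, R)) for _ in range(D)]
x1, x2 = rng.standard_normal(D), rng.standard_normal(D)
k12 = rbf_kernel(tn_feature_map(x1, weights), tn_feature_map(x2, weights))
```

In a full implementation, the factor matrices and the kernel lengthscale would be free parameters optimized together under a variational GP objective, with the TN rank R acting as the low-rank regularizer the abstract refers to.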