2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information

Paper Detail

Paper ID: IVMSP-26.4
Paper Title: AN ATTENTION BASED WAVELET CONVOLUTIONAL MODEL FOR VISUAL SALIENCY DETECTION
Authors: Reshmi Bhooshan, College of Engineering, Trivandrum, India; Suresh K., Govt. Engineering College, Barton Hill, India
Session: IVMSP-26: Attention for Vision
Location: Gather.Town
Session Time: Thursday, 10 June, 16:30 - 17:15
Presentation Time: Thursday, 10 June, 16:30 - 17:15
Presentation: Poster
Topic: Image, Video, and Multidimensional Signal Processing: [IVTEC] Image & Video Processing Techniques
Abstract: The emergence of deep neural architectures has greatly enhanced the accuracy of salient region detection algorithms, which play a vital role in computer vision applications. However, the accurate extraction of regions with fine boundaries still remains a challenge. In this work, an attention-based Wavelet Convolutional Neural Network (WCNN) is implemented that efficiently extracts the spatial, spectral, and semantic features of the image at multiple resolutions, making it well suited to locating visually salient regions. The fine boundaries of the predicted map are further enhanced by a combinational loss function comprising balanced cross-entropy loss, SSIM loss, and edge loss. The effectiveness of the method is evaluated on three benchmark datasets, and the results show better performance, achieving a minimum Mean Absolute Error (MAE) of 0.032 and a maximum F-measure of 0.938.
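The abstract names a combinational loss of balanced cross-entropy, SSIM, and edge terms. The sketch below is a minimal, generic PyTorch rendering of such a combination under common formulations (inverse-frequency class balancing, uniform-window SSIM, Sobel-gradient edge term, equal weights); the paper does not specify its exact definitions or weights here, so all function names and parameters are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def balanced_bce(pred, target, eps=1e-7):
    # Class-balanced cross-entropy: weight salient/non-salient pixels
    # by the fraction of the opposite class (HED-style balancing).
    pos = target.sum()
    neg = target.numel() - pos
    beta = neg / (pos + neg + eps)
    weights = beta * target + (1.0 - beta) * (1.0 - target)
    return F.binary_cross_entropy(pred, target, weight=weights)

def ssim_loss(pred, target, window=11, c1=0.01 ** 2, c2=0.03 ** 2):
    # 1 - SSIM using a uniform local window (Gaussian windows are also common).
    pad = window // 2
    mu_p = F.avg_pool2d(pred, window, 1, pad)
    mu_t = F.avg_pool2d(target, window, 1, pad)
    var_p = F.avg_pool2d(pred * pred, window, 1, pad) - mu_p ** 2
    var_t = F.avg_pool2d(target * target, window, 1, pad) - mu_t ** 2
    cov = F.avg_pool2d(pred * target, window, 1, pad) - mu_p * mu_t
    ssim = ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) / \
           ((mu_p ** 2 + mu_t ** 2 + c1) * (var_p + var_t + c2))
    return 1.0 - ssim.mean()

def edge_loss(pred, target):
    # Penalize boundary disagreement via Sobel gradient magnitudes of the maps.
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    def grad_mag(x):
        gx = F.conv2d(x, kx.to(x.device), padding=1)
        gy = F.conv2d(x, ky.to(x.device), padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-7)
    return F.l1_loss(grad_mag(pred), grad_mag(target))

def combined_loss(pred, target, w_bce=1.0, w_ssim=1.0, w_edge=1.0):
    # Weighted sum of the three terms; the weights here are placeholders.
    return (w_bce * balanced_bce(pred, target)
            + w_ssim * ssim_loss(pred, target)
            + w_edge * edge_loss(pred, target))

# Usage: pred and target are (N, 1, H, W) saliency maps with values in [0, 1].
pred = torch.rand(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(combined_loss(pred, target).item())
```

The three terms address complementary aspects: the balanced cross-entropy handles the class imbalance between salient and background pixels, the SSIM term encourages structural agreement over local patches, and the gradient-based edge term targets the fine boundaries the abstract emphasizes.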