2021 IEEE International Conference on Acoustics, Speech and Signal Processing

6-11 June 2021 • Toronto, Ontario, Canada

Extracting Knowledge from Information


Paper Detail

Paper ID: MLSP-14.1
Paper Title: HEBBNET: A SIMPLIFIED HEBBIAN LEARNING FRAMEWORK TO DO BIOLOGICALLY PLAUSIBLE LEARNING
Authors: Manas Gupta, Arulmurugan Ambikapathi, Savitha Ramasamy, Institute for Infocomm Research, A*STAR, Singapore
Session: MLSP-14: Learning Algorithms 1
Location: Gather.Town
Session Time: Wednesday, 09 June, 13:00 - 13:45
Presentation Time: Wednesday, 09 June, 13:00 - 13:45
Presentation: Poster
Topic: Machine Learning for Signal Processing: [MLR-LEAR] Learning theory and algorithms
IEEE Xplore Open Preview: available in IEEE Xplore
Abstract: Backpropagation has revolutionized neural network training; however, its biological plausibility remains questionable. Hebbian learning, a completely unsupervised and feedback-free learning technique, is a strong contender for a biologically plausible alternative. So far, however, it has either not matched the accuracy of backpropagation or required a very complex training procedure. In this work, we introduce a new Hebbian-learning-based neural network, called HebbNet. At the heart of HebbNet is a new Hebbian learning rule that we build up from first principles by adding two novel algorithmic updates to the basic Hebbian learning rule. The new rule makes Hebbian learning substantially simpler while also improving performance. Compared to the state of the art, we improve training dynamics by reducing the number of training epochs from 1500 to 200 and turning training from a two-step into a one-step process. We also reduce heuristics, cutting the number of hyper-parameters from 5 to 1 and the number of hyper-parameter search runs from 12,600 to 13. Notwithstanding this, HebbNet still achieves strong test performance on the MNIST and CIFAR-10 datasets versus the state of the art.
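
The abstract names the basic Hebbian rule as the starting point but does not spell out HebbNet's two new algorithmic updates, so the sketch below shows only that classic unsupervised update (plus a standard weight-normalization step commonly used to keep the rule stable). The layer sizes, learning rate, and the helper name hebbian_step are illustrative assumptions, not details from the paper.

import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_hidden = 784, 100   # e.g. flattened 28x28 MNIST images -> hidden layer
lr = 0.01                       # assumed learning rate, not from the paper
W = rng.normal(0.0, 0.1, size=(n_hidden, n_inputs))

def hebbian_step(W, x, lr):
    # One unsupervised Hebbian update: strengthen weights between
    # co-active inputs and outputs; no labels and no feedback path.
    y = W @ x                           # post-synaptic activations
    W += lr * np.outer(y, x)            # basic rule: delta_W = lr * y * x^T
    # Row-wise normalization is a common stabilizer for Hebbian rules,
    # since the raw update otherwise grows weights without bound.
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    return W

# Usage: one pass over a batch of inputs X of shape (num_samples, n_inputs)
X = rng.random((32, n_inputs))
for x in X:
    W = hebbian_step(W, x, lr)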