Research Reading: Replacing Back-Propagation With The Forward-Forward Algorithm

Jonathan Bechtel

Details
For this research reading, we'll discuss the latest optimization method proposed by Geoffrey Hinton: the forward-forward algorithm.
It's a technique that, if successful, could replace back-propagation for neural-network training. The gist of the approach is that instead of following each forward pass with a backward pass that propagates gradients through the whole network, you run two separate forward passes: one on real ("positive") data and one on "negative" data, which could be generated by the network itself. Each layer then adjusts its weights using its own local objective, with no gradients flowing between layers (a minimal sketch follows the list below).
The potential benefits of this approach are twofold:
- The forward pass is much more computationally lightweight than the backward pass
- The negative passes could be done offline and stored, so a model could keep running continuously in real time on the positive pass while ingesting large amounts of training data, decoupling the two halves of training.
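
To make the two-pass idea concrete, here is a minimal sketch of one forward-forward training step, assuming PyTorch. The per-layer objective follows Hinton's paper: each layer's "goodness" is the sum of its squared activations, pushed above a threshold on the positive pass and below it on the negative pass. The layer sizes, learning rate, threshold, and random stand-in data are illustrative assumptions, not details from the event description.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

class FFLayer(nn.Module):
    """One layer with its own local objective: push 'goodness' (the sum of
    squared activations) above a threshold for positive data and below it
    for negative data. Gradients never flow between layers."""

    def __init__(self, d_in, d_out, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.threshold = threshold
        self.opt = torch.optim.SGD(self.parameters(), lr=lr)

    def forward(self, x):
        # Normalize so only the direction of the input is passed on, forcing
        # each layer to learn new features rather than copy goodness forward.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return torch.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)  # goodness, positive pass
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)  # goodness, negative pass
        # Logistic loss: reward high goodness on positive data, low on negative.
        loss = (F.softplus(self.threshold - g_pos).mean()
                + F.softplus(g_neg - self.threshold).mean())
        self.opt.zero_grad()
        loss.backward()  # gradients stay inside this single layer
        self.opt.step()
        # Detach outputs so the next layer trains independently.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()

# Hypothetical dimensions and data; real negative data would be generated
# by the network itself or by corrupting real examples.
layers = [FFLayer(784, 256), FFLayer(256, 256)]
x_pos = torch.rand(32, 784)  # stand-in for real (positive) examples
x_neg = torch.rand(32, 784)  # stand-in for generated negative examples
for layer in layers:
    x_pos, x_neg = layer.train_step(x_pos, x_neg)
```

Note that gradients are still computed here, but only locally within each layer via autograd; nothing is back-propagated from one layer to the one before it, which is the property the talk highlights.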
While this event is FREE, tickets are required & space is limited!
Attend this event