"Whenever these kind of papers come out I skim it looking for where they actually do backprop.
Check the pseudo code of their algorithms.
"Update using gradient based optimizations""
replies(4):
>"We believe this work takes a first step TOWARDS introducing a new family of GRADIENT-FREE learning methods"
I.e., for the time being, the authors can't convince themselves to give up efficient hardware for taking gradients.
(*Checks that Oxford University is not under sanctions*)