
Francois Chollet is leaving Google

(developers.googleblog.com)
377 points by xnx | 1 comment
osm3000 No.42132316
I loved Keras at the beginning of my PhD, in 2017. But it was just the wrong abstraction: too easy to start with, too difficult to customize (e.g., writing a custom loss function).

I really tried to understand TensorFlow; it took me a week to get a for-loop working, and a nested for-loop proved impossible.

PyTorch was just perfect out of the box. I don't think I would have finished my PhD in time if it weren't for PyTorch.

I loved Keras. It was an important milestone, and it made me believe deep learning was feasible. It was just... not the final thing.

fchollet No.42133936
Keras 1.0 in 2016-2017 was much less flexible than Keras 3 is now! Keras is designed around the principle of "progressive disclosure of complexity": there are easy high-level workflows you can get started with, but you're always able to open up any component of the workflow and customize it with your own code.
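To make the "easy high-level workflow" end of that concrete, here is a minimal sketch of the standard compile/fit path (the toy dense model, random data, and hyperparameters are just placeholders):

```python
import numpy as np
import keras

# Placeholder data standing in for a real dataset.
x = np.random.rand(256, 20).astype("float32")
y = np.random.randint(0, 10, size=(256,))

# High-level workflow: define, compile, fit.
model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10),
])
model.compile(
    optimizer="adam",
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(x, y, batch_size=32, epochs=2)
```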

For instance: you have the built-in `fit()` to train a model. But you can customize the training logic (while retaining access to all `fit()` features, like callbacks, step fusion, async logging, async prefetching, and distribution) by writing your own `compute_loss()` method. And further, you can customize gradient handling by writing a custom `train_step()` method (this is low-level enough that you have to do it with backend APIs like `tf.GradientTape` or torch `backward()`). E.g. https://keras.io/guides/custom_train_step_in_torch/
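As a rough sketch of that second level, closely following the pattern in the guide linked above (the class name is arbitrary, and it assumes the torch backend is selected):

```python
import os
os.environ["KERAS_BACKEND"] = "torch"  # must be set before importing keras

import torch
import keras


class CustomModel(keras.Model):
    def train_step(self, data):
        x, y = data

        # Clear gradients left over from the previous step.
        self.zero_grad()

        # Forward pass; compute_loss() is the higher-level hook mentioned above.
        y_pred = self(x, training=True)
        loss = self.compute_loss(y=y, y_pred=y_pred)

        # Backward pass with native torch autograd.
        loss.backward()

        trainable_weights = self.trainable_weights
        gradients = [w.value.grad for w in trainable_weights]

        # Apply gradients with the Keras optimizer.
        with torch.no_grad():
            self.optimizer.apply(gradients, trainable_weights)

        # Update metrics so fit()'s progress reporting keeps working.
        for metric in self.metrics:
            if metric.name == "loss":
                metric.update_state(loss)
            else:
                metric.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}
```

You then instantiate the subclass, `compile()` it, and call `fit()` unchanged; callbacks, logging, and distribution still come for free.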

Then, if you need even more control, you can just write your own training loop from scratch, etc. E.g. https://keras.io/guides/writing_a_custom_training_loop_in_ja...
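For comparison, here is what the fully manual route can look like with the torch backend (the truncated link above is the JAX version of the same guide); the model, data, and hyperparameters below are placeholders:

```python
import os
os.environ["KERAS_BACKEND"] = "torch"  # must be set before importing keras

import torch
import keras

# Placeholder model and data; swap in your own.
model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10),
])
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)

# With the torch backend, a Keras model behaves as a torch.nn.Module,
# so a native torch optimizer can drive its parameters.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

dataset = torch.utils.data.TensorDataset(
    torch.rand(256, 20), torch.randint(0, 10, (256,))
)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

for epoch in range(2):
    for inputs, targets in loader:
        logits = model(inputs)
        loss = loss_fn(targets, logits)
        model.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss = {float(loss):.4f}")
```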