Francois Chollet is leaving Google

(developers.googleblog.com)
377 points by xnx
fchollet
Hi HN, Francois here. Happy to answer any questions!

Here's a start --

"Did you get poached by Anthropic/etc": No, I am starting a new company with a friend. We will announce more about it in due time!

"Who uses Keras in production": Off the top of my head the current list includes Midjourney, YouTube, Waymo, Google across many products (even Ads started moving to Keras recently!), Netflix, Spotify, Snap, GrubHub, Square/Block, X/Twitter, and many non-tech companies like United, JPM, Orange, Walmart, etc. In total Keras has ~2M developers and powers ML at many companies big and small. This isn't all TF -- many of our users have started running Keras on JAX or PyTorch.

"Why did you decide to merge Keras into TensorFlow in 2019": I didn't! The decision was made in 2018 by the TF leads -- I was a L5 IC at the time and that was an L8 decision. The TF team was huge at the time, 50+ people, while Keras was just me and the open-source community. In retrospect I think Keras would have been better off as an independent multi-backend framework -- but that would have required me quitting Google back then. Making Keras multi-backend again in 2023 has been one of my favorite projects to work on, both from the engineering & architecture side of things but also because the product is truly great (also, I love JAX)!

c1b
Hi Francois, I'm a huge fan of your work!

Projecting ARC challenge progress with a naive regression from the latest cycle of improvement (34% to 54%), a plausible estimate is that the 85% target will be reached sometime between late 2025 and mid 2026.
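
For reference, here's the back-of-the-envelope extrapolation behind that estimate (treating one improvement cycle as roughly a year is my own assumption):

    # Naive linear extrapolation of ARC scores. The ~1-year cycle
    # length is an assumption, not an official figure.
    prev_score, curr_score = 34.0, 54.0
    points_per_cycle = curr_score - prev_score       # 20 points per cycle
    target = 85.0
    cycles_left = (target - curr_score) / points_per_cycle
    print(cycles_left)  # ~1.55 cycles, i.e. roughly late 2025 to mid 2026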

Supposing the ARC challenge target is reached in the coming years, does this update your model of 'AI risk'? And would it lead you to consider your article on 'The implausibility of intelligence explosion' outdated?

fchollet
This roughly aligns with my timeline. ARC will be solved within a couple of years.

There is a distinction between solving ARC, creating AGI, and creating an AI that would represent an existential risk. ARC is a stepping stone towards AGI, so the first model that solves ARC should have taught us something fundamental about how to create truly general intelligence that can adapt to never-seen-before problems, but it will likely not itself be AGI (due to being specialized to the ARC format, for instance). Its architecture could likely be adapted into a genuine AGI after a few iterations -- a system capable of solving novel scientific problems in any domain.

Even this would not clearly lead to "intelligence explosion". The points in my old article on intelligence explosion are still valid -- while AGI will lead to some level of recursive self-improvement (as do many other systems!), the available evidence just does not point to this loop triggering an exponential explosion (due to diminishing returns and the fact that "how intelligent one can be" has inherent limitations brought about by things outside of the AI agent itself). And intelligence on its own, without executive autonomy or embodiment, is just a tool in human hands, not a standalone threat. It can certainly present risks, like any other powerful technology, but it isn't a "new species" out to get us.

YeGoblynQueenne
ARC as a stepping-stone for AGI? For me, ARC has lost all credibility. Your white paper that introduced it claimed that core knowledge priors are needed to solve it, yet all the systems that have any non-zero performance on ARC so far have made no attempt to learn or implement core knowledge priors. You have claimed at different times and in different forms that ARC is protected against memorisation-based Big Data approaches, but the systems that currently perform best on ARC do it by generating thousands of new training examples for some LLM, the quintessential memorisation-based Big Data approach.

I, too, believe that ARC will soon be solved: in the same way that the Winograd Schema Challenge was solved. Someone will finally decide to generate a large enough dataset to fine-tune a big, deep, bad LLM and go to town, and I do mean on the private test set. If ARC were really, really a test of intelligence and therefore protected against Big Data approaches, then it wouldn't need to have a super secret hidden test set. Bongard Problems don't, and they still stand undefeated (although the ANN community has sidestepped them in a sense, by generating and solving similar, but not identical, sets of problems, then claiming triumph anyway).

ARC will be solved and we won't learn anything at all from it, except that we still don't know how to test for intelligence, let alone artificial intelligence.

The worst outcome of all this is the collateral damage to the reputation of symbolic program synthesis, which you have often name-dropped when trying to steer the efforts of the community towards it (other times calling it "discrete program search", etc.). Once some big, compensating LLM solves ARC, any mention of program synthesis will elicit nothing but sneers. "Program synthesis? Isn't that what Chollet thought would solve ARC? Well, we don't need that, LLMs can solve ARC just fine." Talk about sucking all the air out of the room, indeed.

c1b
Wow, you're the most passionate hater of ARC that I've seen. Your negativity seems laughably overblown to me.

Are there benchmarks that you prefer?

YeGoblynQueenne
This might be useful to you: if you want to have an interesting conversation, insulting your interlocutor is not the best way to go about it.
CyberDildonics
I don't think they are insulting anyone, I think they're just asking for numbers.
YeGoblynQueenne
What numbers?