248 points rishicomplex | 22 comments
1. nybsjytm ◴[] No.42166977[source]
Why have they still not released a paper aside from a press release? I have to admit I still don't know how auspicious it is that running Google hardware for three days apiece was able to find half-page-long solutions, given that the promise has always been to solve the Riemann hypothesis with the click of a button. But of course I do recognize that it's a big achievement relative to previous work in automatic theorem proving.
replies(2): >>42167197 #>>42167309 #
2. whatshisface ◴[] No.42167197[source]
I don't know why so few people realize this, but by solving any of the problems, their performance is superhuman by most reasonable definitions of human.

Talking about things like solving the Riemann hypothesis in so many years assumes a little too much about the difficulty of problems that we can't even begin to conceive of a solution for. A better question is what can happen when everybody has access to above-average reasoning. Our society is structured around avoiding confronting people with difficult questions, except when they are intended to get the answer wrong.

replies(3): >>42167281 #>>42167469 #>>42167489 #
3. GregarianChild ◴[] No.42167281[source]
We know that any theorem that is provable at all (in the chosen foundation of mathematics) can be found by patiently enumerating all possible proofs. So, in order to evaluate AlphaProof's achievements, we'd need to know how much of a shortcut AlphaProof achieved. A good proxy for that would be the total energy usage for training and running AlphaProof. A moderate proxy for that would be the number of GPUs / TPUs that were run for 3 days. If it's somebody's laptop, it would be super impressive. If it's 1000s of TPUs, then less so.
replies(3): >>42167389 #>>42167421 #>>42169885 #
4. GregarianChild ◴[] No.42167309[source]
Google stopped publishing interesting AI work once OpenAI took away their AI lead, mostly with tech that Google pioneered but never monetised, like transformers.

I imagine they are under pressure not to make this mistake again.

5. Onavo ◴[] No.42167389{3}[source]
> We know that any theorem that is provable at all (in the chosen foundation of mathematics) can be found by patiently enumerating all possible proofs.

Which computer science theorem is this from?

replies(4): >>42167461 #>>42167504 #>>42167773 #>>42168056 #
6. zeroonetwothree ◴[] No.42167461{4}[source]
It’s just an obvious statement. If a proof exists, you will eventually get to it.
replies(1): >>42170467 #
7. zeroonetwothree ◴[] No.42167469[source]
Well, multiplying two large numbers instantly is a superhuman feat that a calculator can do. I would hope we are going for a higher bar, like "useful". Let's see if this can provide proofs of novel results.
replies(3): >>42167522 #>>42167670 #>>42167858 #
8. nybsjytm ◴[] No.42167489[source]
> A better question is what can happen when everybody has access to above average reasoning. Our society is structured around avoiding confronting people with difficult questions, except when they are intended to get the answer wrong.

What does this have to do with a hypothetical automatic theorem prover?

replies(1): >>42167681 #
9. kadoban ◴[] No.42167504{4}[source]
It's a direct consequence of the format of a proof. They're finite-length sequences over a finite alphabet of symbols, so of course they're enumerable (the same algorithm you use to count works to enumerate them).

If you want a computer to be able to tell that it found a correct proof once it enumerates it, you need a bit more than that (really just the existence of automated proof checkers is enough for that).
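A minimal Python sketch of the enumeration described here; both check_proof (a stand-in for a real automated proof checker, e.g. a proof assistant's kernel) and find_proof are hypothetical names used for illustration only:

    import itertools

    def check_proof(candidate: str, theorem: str) -> bool:
        # Hypothetical stand-in for a real automated proof checker.
        raise NotImplementedError("plug in a real proof checker here")

    def find_proof(alphabet: str, theorem: str) -> str:
        # Breadth-first generation: all strings of length 1, then 2, and so on.
        # Every finite string is visited after finitely many steps, so if a
        # proof of `theorem` exists at all, this loop eventually finds it.
        for length in itertools.count(1):
            for symbols in itertools.product(alphabet, repeat=length):
                candidate = "".join(symbols)
                if check_proof(candidate, theorem):
                    return candidate

The search is complete but hopeless in practice; the comments below about entropy and energy are about just how large this space is.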

10. nybsjytm ◴[] No.42167522{3}[source]
It's worth emphasizing that it's been possible for years to use an automatic theorem prover to prove novel results. The whole problem is to get novel interesting results.
11. whatshisface ◴[] No.42167670{3}[source]
The ability to multiply two numbers superhumanly has already transformed society.
12. whatshisface ◴[] No.42167681{3}[source]
Logic is pretty much absent from our culture and daily life, but that could be due to its limited supply.
replies(1): >>42167744 #
13. nybsjytm ◴[] No.42167744{4}[source]
Being logical in social life is pretty much completely different from being logical in a mathematical argument, especially in a formal theorem proving environment. (Just try to write any kind of cultural proposition in a formal language!)
replies(1): >>42168090 #
14. E_Bfx ◴[] No.42167773{4}[source]
I guess it is tautological from the definition of "provable". A theorem is provable, by definition, if there is a finite sequence of well-formed formulas that has the theorem as its consequence (https://en.wikipedia.org/wiki/Theorem, section "Theorem in logic").
replies(1): >>42168075 #
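To make "a proof is a finite object" concrete, here is a toy example in Lean (the proof assistant AlphaProof is reported to target); Nat.add_comm is a standard library lemma, and the theorem name below is invented for illustration:

    -- The proof term `Nat.add_comm a b` is a finite syntactic object that
    -- the kernel checks against the stated proposition.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b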
15. rowanG077 ◴[] No.42167858{3}[source]
That superhuman capability of "multiplying two large numbers instantly" has transformed human society like not even plumbing has. I really can't see how you could not consider this useful.
16. Xcelerate ◴[] No.42168075{5}[source]
Not sure it’s a tautology. It’s not obvious that an enumeration procedure exists for an arbitrary formal system that will eventually reach every theorem derivable from its axioms and transformation rules. For example, if you perform a depth-first traversal, you will not reach all theorems.

Hilbert’s program was a (failed) attempt to determine, loosely speaking, whether there was a process or procedure that could discover all mathematical truths. Any theorem depends on the formal system you start with, but the deeper implicit question is: where do the axioms come from and can we discover all of them (answer: “unknown” and “no”)?

replies(2): >>42172070 #>>42181838 #
17. whatshisface ◴[] No.42168090{5}[source]
That's the way things are now, but this regime came about when proving things took intense concentration and specialized skills that very few people had. Contrast going to look something up in a library with googling something during a conversation.
18. Davidzheng ◴[] No.42169885{3}[source]
The shortcut versus enumeration is definitely enormous, right? Just raise the effective alphabet size (from the average Shannon entropy) to the power of the proof length, for example: searching that many candidates would probably outlast the heat death (or whatever death) of the universe on all of human compute. (I'm assuming; I didn't bother to check, but it seems intuitively true by a wide margin.)
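A back-of-the-envelope version of that estimate, using assumed and purely illustrative numbers (roughly 4 bits of entropy per symbol and a one-page formal proof of about 2,000 symbols):

    import math

    # Assumed, illustrative numbers: ~4 bits of entropy per symbol and a
    # one-page formal proof of ~2000 symbols.
    bits_per_symbol = 4
    proof_length = 2000

    candidates = 2 ** (bits_per_symbol * proof_length)
    print(f"about 10^{math.floor(math.log10(candidates))} candidate strings")
    # Roughly 10^2408 -- for scale, there are ~10^80 atoms in the observable
    # universe, and all computing hardware ever built has performed far
    # fewer than 10^40 elementary operations.

So even generous assumptions leave brute-force enumeration absurdly far out of reach, which is why the interesting question is the size of AlphaProof's shortcut, not whether one exists.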
19. j16sdiz ◴[] No.42170467{5}[source]
Only if we take AC, I guess?
replies(1): >>42171957 #
20. Tainnor ◴[] No.42171957{6}[source]
No, this has nothing to do with choice.
21. Tainnor ◴[] No.42172070{6}[source]
It's "obvious" in the sense that it's a trivial corollary of the completeness theorem (so it wouldn't be true for second order logic, for example).

Hilbert's program failed in no contradiction to what GP wrote because the language of FOL theorems is only recursively enumerable and not decidable. It's obvious that something is true if you've found a proof, but if you haven't found a proof yet, is the theorem wrong or do you simply have to wait a little longer?

22. Turneyboy ◴[] No.42181838{6}[source]
Yeah, but breadth-first search immediately gives you the solution for any finite alphabet. So in that sense it is trivial.
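A quick sketch of that contrast over a toy two-letter alphabet (the names and alphabet below are illustrative): a naive depth-first search keeps extending its first branch forever and never reaches "b", while breadth-first enumeration reaches every finite string after finitely many steps:

    import itertools

    ALPHABET = "ab"  # toy alphabet for illustration

    def depth_first():
        # Keeps extending one branch: yields 'a', 'aa', 'aaa', ... and never 'b'.
        s = ""
        while True:
            s += ALPHABET[0]
            yield s

    def breadth_first():
        # All strings of length 1, then length 2, ...: every string shows up.
        for n in itertools.count(1):
            for tup in itertools.product(ALPHABET, repeat=n):
                yield "".join(tup)

    print(list(itertools.islice(breadth_first(), 6)))
    # ['a', 'b', 'aa', 'ab', 'ba', 'bb']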