248 points rishicomplex | 25 comments
    1. throwaway713 ◴[] No.42167915[source]
    Anyone else feel like mathematics is sort of the endgame? I.e., once ML can do it better than humans, that’s basically it?
    replies(11): >>42168138 #>>42168188 #>>42168552 #>>42169054 #>>42169223 #>>42169808 #>>42170065 #>>42170119 #>>42170460 #>>42171863 #>>42197083 #
    2. margorczynski ◴[] No.42168138[source]
I doubt it. Math has the property that you can verify, with certainty and at little cost, that what you're doing is correct (as is done with Lean). Most problems don't have anything close to that.
    replies(2): >>42168205 #>>42169130 #
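As a small illustration of the point above (this snippet is mine, not from the thread): in Lean 4, once a proof term is written down, the kernel checks it mechanically and cheaply, which is the "100% verification at little cost" property being described.

```lean
-- A tiny Lean 4 proof. The kernel verifies this mechanically;
-- no human judgment is needed to accept it.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```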
    3. raincole ◴[] No.42168188[source]
    If "better than humans" means when you give it a real world problem, it gives you a mathematical model to describe it (and does it better than human experts), then yes, it's the end game.

If it just solves a few formalized problems under formalized theorems, not so much. You can already write a program that solves ALL problems expressible in a formal system, simply by enumerating proofs. It just runs absurdly slowly.

    replies(1): >>42168971 #
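The "enumerate everything" idea above can be sketched in a few lines of Python (a toy of my own, not from the thread): breadth-first search over a tiny string-rewriting system stands in for enumerating all derivations in a formal theory. The rules and strings here are invented for the example; real proof search blows up the same way, just much faster.

```python
from collections import deque

# Invented toy rewriting system standing in for a formal theory.
RULES = [
    ("A", "AB"),   # an 'A' may grow a 'B' after it
    ("BB", "C"),   # two adjacent 'B's collapse to a 'C'
]

def derivable(start: str, target: str, max_steps: int = 10**5) -> bool:
    """Brute-force: is `target` reachable from `start` by rule applications?"""
    seen = {start}
    queue = deque([start])
    steps = 0
    while queue and steps < max_steps:
        s = queue.popleft()
        steps += 1
        if s == target:
            return True
        for lhs, rhs in RULES:
            i = s.find(lhs)
            while i != -1:
                t = s[:i] + rhs + s[i + len(lhs):]
                # Cap string length so the toy search space stays finite.
                if t not in seen and len(t) <= len(target) + 2:
                    seen.add(t)
                    queue.append(t)
                i = s.find(lhs, i + 1)
    return False

print(derivable("A", "AC"))  # "A" -> "AB" -> "ABB" -> "AC", so True
```

The search is complete but blind; the cost grows combinatorially with derivation length, which is the "runs very slowly" caveat.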
    4. exe34 ◴[] No.42168205[source]
    to be fair, humans also have to run experiments to discover whether their models fit nature - AI will do it too.
    replies(1): >>42168242 #
    5. margorczynski ◴[] No.42168242{3}[source]
These kinds of experiments are often orders of magnitude more costly (time, energy, money, safety, etc.) than verifying a mathematical proof with something like Lean. That's why many think math will be one of the first fields AI cracks: there is a relatively cheap and fast feedback loop available.
    6. ◴[] No.42168552[source]
    7. titanomachy ◴[] No.42168971[source]
    I don’t think you can gloss over the importance of computational tractability here. A human could also start enumerating every possible statement in ZFC, but clearly that doesn’t make them a mathematician.
    8. technotony ◴[] No.42169054[source]
Yes, because if AI can do maths then it can use that to improve the efficiency/quality of its algorithms to self improve...
    replies(2): >>42169122 #>>42175830 #
    9. nybsjytm ◴[] No.42169122[source]
    The quality of AI algorithms is not based on formal mathematics at all. (For example, I'm unaware of even one theorem relevant to going from GPT-1 to GPT-4.) Possibly in the future it'll be otherwise though.
    10. AlotOfReading ◴[] No.42169130[source]
Math doesn't have the property that everything you're doing can be verified at little cost. Humans simply tend to prefer theorems and proofs that are simpler.
    replies(1): >>42172952 #
    11. GuB-42 ◴[] No.42169223[source]
    Not the endgame by far. Maybe the endgame for LLMs, and I am not even convinced.

Maths is detached from reality. An AI capable of doing math better than humans may not be able to drive a car, as driving a car requires a good understanding of the world: it has to recognize objects and understand their behavior, for example, understanding that a tree won't move but a person might, though more slowly than another car. It has to understand the physics of the car: inertia, grip, control... It may even have to understand human psychology and make ethical choices.

    Fully autonomous robots would be the endgame.

    replies(1): >>42170506 #
    12. jdietrich ◴[] No.42169808[source]
    Humans are terrible at anything you learn at university and incredibly good at most things you learn at trade school. In absolute terms, mathematics is much easier than laying bricks or cutting hair.

    https://en.wikipedia.org/wiki/Moravec%27s_paradox

    replies(2): >>42170898 #>>42171908 #
    13. abrookewood ◴[] No.42170065[source]
    I mean ... calculators can do better at mathematics than most of us. I don't think they are going to threaten us anytime soon.
    14. AlexCoventry ◴[] No.42170119[source]
    There are important forms of discernment and judgement which aren't captured by mathematics.
    15. awanderingmind ◴[] No.42170460[source]
    The end of an era perhaps, but not 'the end' - another commenter has already mentioned Moravec's paradox: https://en.wikipedia.org/wiki/Moravec%27s_paradox

    It will be interesting if/when these models start proving major open problems, e.g. the Riemann Hypothesis. The sociological impact on the mathematical community would certainly be acute, and likely lead to a seismic shift in the understanding of what research-level mathematics is 'for'. This discussion already appears to be in progress. As an outsider I have no idea what the timeline is for such things (2 years? 10? 100?).

On the plus side, AlphaProof has the benefit over ordinary LLMs in their current form that it does not pollute our common epistemological well, and its output is eminently interrogable (if you know Lean, at least).

    16. ykonstant ◴[] No.42170506[source]
>An AI capable of doing math better than humans may not be able to drive a car,

    Noo, but my excuse for being unable to drive a car is precisely that I am a quirky mathematician focused on research!

    17. youoy ◴[] No.42170898[source]
I would say that "narrow" mathematics (finding a proof of a given statement that we suspect has a proof, using a formal language) is much easier than "generally" laying bricks or cutting hair.

But I cannot see how consistently doing general mathematics (as in finding interesting and useful statements to prove, and then finding the proofs) is easier than consistently cutting hair/driving a car.

    We might get LLM level mathematics, but not Human level mathematics, in the same way that we can get LLM level films (something like Avengers, or the final season of GoT), but we are not going to get Human level films.

    I suspect that there are no general level mathematics without the geometric experience of humans, so for general level mathematics one has to go through perceptions and interactions with reality first. In that case, general mathematics is one level over "laying bricks or cutting hair", so more complex. And the paradox is only a paradox for superficial reasoning.

    replies(1): >>42172137 #
    18. empath75 ◴[] No.42171863[source]
    Computers have been better than us at calculation since about a week after computers were invented.

If a computer proves the Riemann Hypothesis, someone will say "Oh of course, writing a proof doesn't require intelligence, it's merely the dumb application of logical rules, but only a human could have thought of the conjecture to begin with."

    19. looofooo0 ◴[] No.42171908[source]
Sure, but checking that everything is correctly wired, plugged in, cut, etc., and that everything needed has been thought of? There are plenty of things an AI could do to help a tradesman.
    20. staunton ◴[] No.42172137{3}[source]
> But I cannot see how consistently doing general mathematics (as in finding interesting and useful statements to prove, and then finding the proofs) is easier than consistently cutting hair/driving a car.

    The main "absolute" difficulty there is in understanding and shaping what the mathematical audience thinks is "interesting". So it's really a marketing problem. Given how these tools are being used for marketing, I would have high hopes, at least for this particular aspect...

    replies(1): >>42173177 #
    21. thrance ◴[] No.42172952{3}[source]
You can, in principle, formalize any correct mathematical proof and verify its validity procedurally with a "simple" algorithm that actually exists (see Coq, Lean...). Coming up with the proof is much harder, and deciding what to attempt to prove even harder, though.
    replies(1): >>42173253 #
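The asymmetry described above can be seen in a small Lean 4 snippet (an illustrative sketch of mine, not from the thread): checking a finished proof is mechanical, while producing it is where all the search happens.

```lean
-- Verification is mechanical: the kernel just checks the proof term.
-- Finding the term is the hard part; here the `decide` tactic
-- brute-forces it, since the proposition is decidable.
example : 101 * 103 = 10403 := by decide
```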
    22. youoy ◴[] No.42173177{4}[source]
    Is it really marketing in general? I can agree with some of it, but for me the existence of the term "low hanging fruit" to describe some statements says otherwise...
    23. AlotOfReading ◴[] No.42173253{4}[source]
You can verify it with a simple algorithm, but that verification won't always be cheap. If it were, Curry-Howard would be incredibly uninteresting. It only seems cheap because we usually have little interest in exploring the expensive theorems. Sometimes we do, though, and get things like the four color theorem, whose proof verification amounts to combinatorial search.
    24. SkiFire13 ◴[] No.42175830[source]
... or it might prove that it's impossible to self-improve given the current constraints.
    25. roncesvalles ◴[] No.42197083[source]
    I still think humans understand less than half of all the physics there is to understand.