251 points slyall | 7 comments
vdvsvwvwvwvwv ◴[] No.42057939[source]
Lesson: ignore detractors. Especially if their argument is "don't be a tall poppy".
replies(3): >>42057987 #>>42057988 #>>42058029 #
1. xanderlewis ◴[] No.42058029[source]
Unfortunately, they’re usually right. We just don’t hear about all the time wasted.
replies(2): >>42058260 #>>42059958 #
2. blitzar ◴[] No.42058260[source]
On several occasions I have heard "they said it couldn't be done", only to discover that yes, it is technically correct; however, "they" was one random person who had no clue, and anyone with any domain knowledge said it was reasonable.
replies(1): >>42058375 #
3. friendzis ◴[] No.42058375[source]
Usually when I hear "they said it couldn't be done", it is used as a triumphant downplaying of legitimate critique. If you dig deeper, that "couldn't be done" usually relates to some constraints or performance characteristics which the "done" thing still does not meet, but the goalposts have already been moved.
replies(2): >>42061989 #>>42070205 #
4. vdvsvwvwvwvwv ◴[] No.42059958[source]
What if the time wasted is part of the search? The hive wins, but a bee may not. (Capitalism means some bees win too.)
replies(1): >>42061610 #
5. xanderlewis ◴[] No.42061610[source]
It is. But most people are not interested in simply being ‘part of the search’ — they want a career, and that relies on individual success.
6. Ukv ◴[] No.42061989{3}[source]
> that "couldn't be done" usually is in relation to some constraints or performance characteristics, which the "done" thing still does not meet

I'd say theoretical proofs of impossibility tend to make valid logical deductions within the formal model they set up, but the issue is that the model often turns out to be a deficient representation of reality.

For instance, Minsky and Papert's Perceptrons book, credited in part with prompting the 1980s AI winter, gives a valid mathematical proof of the inability of networks within their framework to represent the XOR function. This function is easily solved by multilayer neural networks, but Minsky/Papert considered those to be a "sterile" extension and believed neural networks trained by gradient descent would fail to scale up.
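
As a minimal sketch of why the multilayer extension was anything but sterile (hand-picked weights for illustration, not Minsky/Papert's construction or a trained solution): XOR isn't linearly separable, so no single threshold unit can compute it, but one hidden layer of two units suffices.

    import numpy as np

    def step(z):
        # Threshold (perceptron-style) activation.
        return (z > 0).astype(int)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

    # Hidden layer: unit 1 computes OR(x1, x2), unit 2 computes AND(x1, x2).
    H = step(X @ np.array([[1, 1], [1, 1]]) + np.array([-0.5, -1.5]))

    # Output unit: fires when OR is on but AND is off, i.e. XOR.
    out = step(H @ np.array([1, -1]) - 0.5)
    print(out)  # [0 1 1 0]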

Or, for a more contemporary example, Gary Marcus has been outspoken since 2012 in claiming that deep learning is hitting a wall - giving the example that a dense network trained on just `1000 -> 1000`, `0100 -> 0100`, `0010 -> 0010` can't then reliably predict `0001 -> 0001`, because the fourth output neuron was never activated in training. Similarly, this function is easily solved by transformers representing input/output as a sequence of tokens, which do not need to light up an untrained neuron to give the answer (nor do humans when writing/speaking the answer).
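
A toy reconstruction of that failure mode (my own sketch, not Marcus's code): a single dense layer with sigmoid outputs, trained by gradient descent on only the first three one-hot mappings, never learns to switch on the fourth output unit.

    import numpy as np

    rng = np.random.default_rng(0)

    # Identity mapping on three of the four one-hot inputs.
    X = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 1, 0]], dtype=float)
    Y = X.copy()

    W = rng.normal(scale=0.1, size=(4, 4))
    b = np.zeros(4)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Plain gradient descent on mean squared error.
    for _ in range(5000):
        out = sigmoid(X @ W + b)
        grad = (out - Y) * out * (1 - out)
        W -= 0.5 * X.T @ grad
        b -= 0.5 * grad.sum(axis=0)

    print(np.round(sigmoid(X @ W + b), 2))                        # trained cases
    print(np.round(sigmoid(np.array([0., 0, 0, 1]) @ W + b), 2))  # held-out 0001
    # The fourth output stays near 0: during training it was only ever
    # pushed towards 0, so the identity rule doesn't generalize to it.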

If I claimed that it was topologically impossible to drink a Capri-Sun, and then someone came along and punctured it with a straw (an unaccounted-for advancement from the blind spot of my model), I could maybe cling on and argue that my challenge remains technically true and unsolved because the straw violates one of the constraints I set out - but at the very least the relevance of my proof to reality has diminished, and it may no longer support the viewpoints/conclusions I intended it to ("don't buy Capri-Sun"). That's not to say theoretical results can't still be interesting in their own right - like the halting problem, which does not apply to real (finite-state) computers.

7. marcosdumay ◴[] No.42070205{3}[source]
It's extremely common for legitimate critique to be used to illegitimately attack people who are doing things differently enough that the relative importance of several factors changes.

This is really, really common, and it's done both by mistake and in bad faith. In fact, it's guaranteed that once anybody tries anything different enough, they'll be constantly attacked this way.