190 points baruchel | 6 comments
zerof1l ◴[] No.44421424[source]
Here's the gist:

For nearly 50 years, theorists believed that if solving a problem takes t steps, it should also need roughly t bits of memory: 100 steps, roughly 100 bits. More precisely, the best known bound was t/log(t).

Ryan Williams found that any problem solvable in time t needs only about sqrt(t) bits of memory: a 100-step computation could be compressed and solved with something on the order of 10 bits.
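
Written as complexity-class containments (my paraphrase; as I understand it the new bound is really on the order of sqrt(t·log t)):

    \mathsf{TIME}[t] \subseteq \mathsf{SPACE}[\, t/\log t \,]             % previous best, from the 1970s
    \mathsf{TIME}[t] \subseteq \mathsf{SPACE}[\, O(\sqrt{t \log t}) \,]   % Williams' new result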

replies(7): >>44422352 #>>44422406 #>>44422458 #>>44422855 #>>44423750 #>>44424342 #>>44425220 #
taneq ◴[] No.44422458[source]
Huh? What did any of those theorists think about the phrase "time/space tradeoff"?
replies(2): >>44422643 #>>44422948 #
mhuffman ◴[] No.44422643[source]
They were likely the ones that invented the phrase.
replies(2): >>44423188 #>>44423268 #
1. mort96 ◴[] No.44423268[source]
Really? But the whole idea behind the space/time trade-off is that in a bunch of cases, if you want to compute something in less memory, you need to solve it in more steps; or if you want to solve something in fewer steps, you need to use more memory.

This seems to wildly contradict the idea that the amount of memory needed is roughly proportional to the number of steps.
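
For example (a toy Python sketch of my own, nothing to do with the paper's construction), the usual trade-off looks like this:

    # Repeatedly asking: "does f take the value v somewhere on 0..n-1?"
    def f(i: int) -> int:
        return (31 * i * i + 7) % 1_000_003   # arbitrary deterministic function

    n = 100_000

    # More memory, fewer steps: precompute every value once,
    # then each query is a single hash lookup.
    values = {f(i) for i in range(n)}             # O(n) extra memory

    def hits_fast(v: int) -> bool:
        return v in values                        # O(1) time per query

    # Less memory, more steps: store nothing and
    # re-evaluate f over the whole range on every query.
    def hits_slow(v: int) -> bool:
        return any(f(i) == v for i in range(n))   # O(1) memory, O(n) time per query

    assert hits_fast(f(12_345)) and hits_slow(f(12_345))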

replies(2): >>44424665 #>>44425736 #
2. eru ◴[] No.44424665[source]
The number of steps is an (over)estimate of the amount of memory needed.

You can write some trivial programs that need this much memory. But most of them can be optimised to use less space (and still compute the same answer).

The big question was: can you always optimise the space usage down from that ridiculous maximum, and if so, to what extent?
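
A trivial example of the kind of program I mean (toy sketch):

    # Naive version: keeps one value per step, so memory grows with running time,
    # even though almost none of it is ever looked at again.
    def total_wasteful(xs):
        history = []
        total = 0
        for x in xs:
            total += x
            history.append(total)   # stored, but never read again
        return history[-1] if history else 0

    # Optimised version: same steps, same answer, O(1) memory.
    def total_frugal(xs):
        total = 0
        for x in xs:
            total += x
        return total

    assert total_wasteful(range(1_000)) == total_frugal(range(1_000))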

replies(1): >>44424702 #
3. mort96 ◴[] No.44424702[source]
But plenty of programs can only be optimized to take less space by making them run slower (i.e. take more steps)...
replies(2): >>44424731 #>>44425039 #
4. eru ◴[] No.44424731{3}[source]
Yes. But even with infinite time there's a limit to how much you can compress the space usage. And that's an interesting property to study.

In addition you can look at how much extra time you actually need: infinite time is a vast overestimate. The new proof gives much more reasonable time bounds.

5. Ar-Curunir ◴[] No.44425039{3}[source]
That is also what is happening here. When you reduce the space to about sqrt(t), you increase the time complexity. The interesting part of the result is that you can make this tradeoff for _any_ problem.
6. svat ◴[] No.44425736[source]
> in a bunch of cases

That's precisely the point: of course, for many problems there are obvious ways to trade off space and time. Smart people who have spent their lives thinking about this are aware of that. :-) The question here is: can it always be done? For an arbitrary computation that takes time t, can it always be simulated in much less space (using more time, of course), no matter what the computation is? Lots of people have tried, and the best result until now, dating from the 1970s, was t/log t space.
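
To get a sense of the gap (back-of-the-envelope, using the sqrt(t log t) form of the new bound, so take the constants with a grain of salt):

    t = 10^{12}:\quad t/\log_2 t \approx 2.5\times10^{10} \quad\text{vs.}\quad \sqrt{t\log_2 t} \approx 6.3\times10^{6}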

Edit: A comment on a previous discussion of this result by the author of the Quanta article, which substantiates and links to researchers' opinion that it's the universality (works for every problem) that's the surprising part: https://news.ycombinator.com/item?id=44075191