
218 points lapnect | 1 comment
romesmoke ◴[] No.42174962[source]
An interesting variation is Dynamic Storage Allocation, in which the rectangles can slide along only one axis.

AKA "static memory allocation", since the heights of the rectangles can be interpreted as buffer sizes, and their widths as lifetimes.

Most of my PhD's effort has been devoted to beating the SOTA on this. The problem's importance nowadays is owed to deep learning's memory wall.
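A toy illustration of the mapping described above (my own notation, not from the thread): each buffer is a rectangle whose height is its size and whose width is its lifetime, and two buffers with disjoint lifetimes may share the same address range, which is exactly what makes the packing nontrivial.

```python
# Hypothetical buffers: size in bytes, lifetime as half-open [start, end) timesteps.
buffers = {
    "a": {"size": 64, "live": (0, 3)},
    "b": {"size": 64, "live": (3, 6)},  # disjoint from "a"
    "c": {"size": 32, "live": (1, 5)},  # overlaps both
}

# "a" and "b" never coexist, so both can sit at offset 0; "c" must go above them.
offsets = {"a": 0, "b": 0, "c": 64}

def peak(buffers, offsets):
    # Peak footprint = max over time of the highest occupied address.
    events = {t for v in buffers.values() for t in v["live"]}
    return max(
        max((offsets[k] + v["size"]
             for k, v in buffers.items()
             if v["live"][0] <= t < v["live"][1]), default=0)
        for t in sorted(events)
    )
```

Here the peak is 96 bytes, versus 160 if every buffer got its own address range.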

replies(2): >>42175495 #>>42178981 #
auvi ◴[] No.42175495[source]
nice! do you have your thesis or any related paper available online?
replies(3): >>42177050 #>>42179023 #>>42180210 #
almostgotcaught ◴[] No.42179023[source]
there are a bunch of papers on DSA from the OR people

http://adambuchsbaum.com/papers/dsa-stoc03.pdf

https://link.springer.com/chapter/10.1007/978-3-540-27798-9_...

https://users.cs.northwestern.edu/~pdinda/ics-s05/doc/dsa.pd...

there are also some from ML people trying to allocate memory optimally for DNNs:

https://arxiv.org/abs/1907.01989

https://arxiv.org/abs/1804.10001

https://arxiv.org/abs/2001.03288

they all boil down to a couple of greedy heuristics. the most recent "cool" paper was from a group at google

https://dl.acm.org/doi/10.1145/3567955.3567961

basic idea is to use both ILP and heuristics. i asked them to open-source it but no dice :(

replies(1): >>42179064 #
p0ckets ◴[] No.42179064[source]
An even more recent one from Google: https://github.com/google/minimalloc
replies(1): >>42179116 #
almostgotcaught ◴[] No.42179116[source]
i gave up on the problem about 18 months ago so i didn't keep up with the research area. is this yours? the runtimes are of course very good but i don't see a comparison of how good the approximation is vs telamalloc (or just ILP). i'll say this though: it's miraculous that the impl is so small.
replies(1): >>42186127 #
p0ckets ◴[] No.42186127[source]
Not mine. I just happened to hear about it from a colleague recently.