
105 points | mathgenius | 1 comment
antics | No.43624766
This is only tangentially related, but one thing that I am still kind of hopeful about is that analytic combinatorics will free us from the tyranny of asymptotic analysis.

I suppose this is kind of a hot take, and in some ways less relevant than it was (say) 10 years ago, but I believe the theoretical CS community has spent a lot of time analyzing the performance of various algorithms over the years while pointedly not inventing algorithms that accomplish some result at all, even when the result is only known to be computable in a pathologically inefficient way. This means a lot of breakthroughs that could have been TCS breakthroughs have instead been made, usually worse, by other communities.

I am not the only one who thinks so, here[1] is an obscure talk where Michael Mitzenmacher (a well-known CS theory prof at Harvard) makes more or less this specific complaint.

One of the nice things about Flajolet/Sedgewick[2] (mentioned in this post) is that it gives a nice set of combinators for (at least it seems to me, in theory) precisely enumerating how many "steps" an algorithm will take on a given input. I optimistically imagine a world where we replace the relatively non-specific asymptotic bounds of an algorithm with a very specific quantification of the work the algorithm will do. And that this in turn would free TCS to focus on things that are more germane to the field as a whole.
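As a small illustration of the kind of exact counting this enables (my own sketch, not from the thread): the number of swaps insertion sort performs equals the number of inversions in the input, and the average over all permutations of n distinct keys is exactly n(n-1)/4, a closed form rather than just O(n^2). A brute-force check:

```python
from fractions import Fraction
from itertools import permutations

def insertion_sort_swaps(a):
    """Sort a copy of `a` by insertion sort; return the exact swap count."""
    a, swaps = list(a), 0
    for i in range(1, len(a)):
        j = i
        while j > 0 and a[j - 1] > a[j]:
            a[j - 1], a[j] = a[j], a[j - 1]  # each swap removes one inversion
            swaps += 1
            j -= 1
    return swaps

n = 6
perms = list(permutations(range(n)))
avg = Fraction(sum(insertion_sort_swaps(p) for p in perms), len(perms))
assert avg == Fraction(n * (n - 1), 4)  # exactly n(n-1)/4 = 15/2 for n = 6
```

Flajolet/Sedgewick get the same answer symbolically, from the generating function for permutations by inversion count, instead of by enumeration.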

With all that said, I'm open to the argument that TCS has fundamentally changed and this is no longer such a big issue. But perusing the various TCS conferences, I sense that it hasn't.

[1]: https://www.youtube.com/watch?v=p3anjSAnKBo

[2]: https://algo.inria.fr/flajolet/Publications/book.pdf

svat | No.43637318
In fact, in these slides from a 2011/2012 talk, Sedgewick talks about exactly this, even going so far as to draw a contrast between "TCS" and "analysis of algorithms" (the "scientific method"): https://sedgewick.io/talks/#algorithms-for-the-masses = https://sedgewick.io/wp-content/uploads/2022/03/2011-12AlgsM... onwards.

(Also touched upon briefly in https://sedgewick.io/talks/#lasting-legacy-of-flajolet = https://sedgewick.io/wp-content/uploads/2022/03/2013-14Flajo...)

His point is exactly that: researchers often care only about worst-case analysis and asymptotics, which is not necessarily a useful scientific analysis of how long an algorithm is expected to take on a machine as a function of the input size (which was the original motivation for Knuth et al. to develop the field).
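Quicksort is the canonical example of the gap (this sketch is mine, not from the talk): it is Θ(n²) in the worst case, yet analysis of algorithms gives its exact expected comparison count on a random permutation via the recurrence C_n = (n-1) + (2/n) Σ_{k<n} C_k, which brute-force enumeration confirms:

```python
from fractions import Fraction
from itertools import permutations

def qsort_comparisons(a):
    """First-element-pivot quicksort; return the exact number of key comparisons."""
    if len(a) <= 1:
        return 0
    pivot = a[0]
    comps = len(a) - 1  # every other element is compared to the pivot once
    left = [x for x in a[1:] if x < pivot]
    right = [x for x in a[1:] if x > pivot]
    return comps + qsort_comparisons(left) + qsort_comparisons(right)

def expected_comparisons(n):
    """Exact expected comparisons from the standard quicksort recurrence."""
    C = [Fraction(0)] * (n + 1)
    for m in range(2, n + 1):
        C[m] = (m - 1) + Fraction(2, m) * sum(C[:m])
    return C[n]

n = 5
perms = list(permutations(range(n)))
avg = Fraction(sum(qsort_comparisons(p) for p in perms), len(perms))
assert avg == expected_comparisons(n)  # 37/5 for n = 5; the worst case is 10
```

The exact expectation (≈ 2n ln n asymptotically) is the kind of testable, machine-checkable prediction Sedgewick means by the "scientific method", as opposed to the worst-case bound alone.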

The same authors (but ordered as Sedgewick and Flajolet!) also have a book called An Introduction to the Analysis of Algorithms; there's a course website at https://aofa.cs.princeton.edu/home/ and videos on Coursera (https://www.coursera.org/learn/analysis-of-algorithms) — it is advanced mathematics but less advanced than the Analytic Combinatorics book.