141 points vblanco | 3 comments
npalli ◴[] No.44437059[source]
Thanks to the author for doing some solid work in providing data points for modules. For those like me looking for the headline metric, here it is, from the conclusion:

  While the evidence shown above is pretty clear that building a software package as a module provides the claimed benefits in terms of compile time (a reduction by around 10%, see Section 5.1.1) and perhaps better code structure (Section 5.1.4), the data shown in Section 5.1.2 also make clear that the effect on compile time of downstream projects is at best unclear. 
So, alas, underwhelming in this iteration, and perhaps this speaks to the 'module-fication' of existing source code (deal.II dates from the '90s, I believe), rather than doing it from scratch. More work might be needed in structuring the source code into modules, as I have seen good speedups (more than 10%) with just PCH, forward decls, etc. Good data point and rich analysis, nevertheless.
replies(1): >>44437528 #
Someone ◴[] No.44437528[source]
It wouldn’t surprise me if they could do better if they gave up on doing most of the work programmatically.

One part of me agrees with the following (both from the paper):

> For example, putting a specific piece of code into the right place in each file (or adding necessary header files, as mentioned in Section 5.2) might take 20-30 seconds per file – but doing this for all 1051 files of deal.II then will take approximately a full day of (extremely boring) work. Similarly, individually annotating every class or function we want to export from a module is not feasible for a project of this size, even if from a conceptual perspective it would perhaps be the right thing to do.

and

> Given the size and scope of the library, it is clear that a whole-sale rewrite – or even just substantial modifications to each of its 652 header and 399 implementation files – is not feasible

but another part knows that spending a few days doing such ‘boring’ copy-paste work like that often has unexpected benefits; you get to know the code better and may discover better ways to organize the code.

Maybe this project is too large for it, since checking that you didn’t mess things up by building the code and running the test suite simply takes too long. But even if so, isn’t that a good reason to try to get compile times down, so that working on the project becomes more enjoyable?

replies(1): >>44439068 #
jjmarr ◴[] No.44439068[source]
This is a great task for LLMs, honestly.
replies(2): >>44439602 #>>44440388 #
CJefferson ◴[] No.44439602[source]
I’ve tried doing things like this with LLMs (DeepSeek in my case). What killed the whole thing is that they can’t be trusted to cut and paste code: a clang warning informed me that, when a 200-line function had been moved and slightly adjusted, a == had been turned into a = deep inside an if statement. I only noticed because that is a fairly standard warning compilers give.

I wouldn’t mind a system where an LLM wrote instructions for a second system, which was a reliable code-rearranging tool.

replies(2): >>44440247 #>>44443345 #
sysmax ◴[] No.44440247[source]
You can't trust LLMs to copy-paste code, but you can explicitly pick what should be editable, and also review the edits in a more streamlined way.

I am actually working on a GUI for just that [0]. The first problem is solved by explicit links above functions and classes that control whether to include them in the context window (with an option to strip function bodies, keeping just the declarations). The second is solved by a special review mode that auto-collapses unchanged functions/classes, plus an outline window showing how many blocks were changed in each function/class/etc.

The tool is still very early in development with tons more functionality coming (like proper deep understanding of C/C++ code structure), but the code slicing and outline-based reviewing already work just fine. It also works with DeepSeek, or any other model that can, well, complete conversations.

[0] https://codevroom.com/

replies(2): >>44440297 #>>44440394 #
zombot ◴[] No.44440394[source]
> review the edits

Or just do it yourself to begin with.

replies(1): >>44440478 #
sysmax ◴[] No.44440478[source]
It's just faster and less distracting. What is a total game-changer for me is small refactorings. Let's say you have a method that takes a boolean argument. At some point you realize you need a third value. You could replace it with an enum, but updating a handful of call sites is boring and terribly distracting.

With LLMs I can literally type "unsavedOnly => enum Scope{Unsaved, Saved, RecentlySaved (ignore for now)}" and that's it. It will replace the "bool unsavedOnly" argument with "Scope scope", update the check inside the method, and update the callers. If I had to do it by hand each time, I would have lazied out and added another bool argument, or some other kind of sloppy fix, snowballing the technical debt. But if LLMs can do all the legwork, you don't need sloppy fixes anymore. Keeping the code nice and clean no longer means a huge distraction that kicks you out of the zone.

replies(1): >>44440564 #
hxbxbsbsn ◴[] No.44440564[source]
This is a standard use case which is better served by a deterministic refactoring tool.
replies(2): >>44442733 #>>44444365 #
rerdavies ◴[] No.44442733[source]
This is a standard use case which, as far as I know, is not served by a deterministic refactoring tool.
sysmax ◴[] No.44444365[source]
I looked into it a lot. There are deterministic refactoring tools for things like converting a for loop into a foreach, or creating a constructor from a list of fields, but they still don't cover a lot of use cases.

I tried using a refactoring tool for reordering function arguments. The problem is, clicking through various GUI dialogs to get your point across is again too distracting. And there are still too many details: you can't say something like "the new argument should be zero for callers that ignore the return value". The task isn't deterministic, and each case is slightly different from the others. But LLMs handle this surprisingly well, and the mistakes they make are easy to spot.

What I'm really hoping to do some day is a "formal mode" where the LLM would write a mini-program to mutate the abstract syntax tree based on a textual refactoring request, thus guaranteeing determinism. But that's a whole new dimension of work, and there are numerous easier use cases to tackle before that.