
128 points | RGBCube | 3 comments
loa_in_ ◴[] No.44497772[source]
Automatically deriving Clone is a convenience. You can, and should, write your own implementation of Clone whenever you need it and the automatically derived implementation is insufficient.
replies(3): >>44498124 #>>44499705 #>>44500307 #
josephg ◴[] No.44498124[source]
But this issue makes it confusing and surprising when an automatically derived Clone is sufficient and when it's not. It's a silly extra rule that you have to memorise.

By the way, this issue also affects all of the other derivable traits in std - including PartialEq, Debug and others. Manually implementing all this stuff - especially Debug - is needless pain. Especially as your structs change and you need to (or forget to) maintain all this stuff.

Elegant software is measured in the number of lines of code you didn't need to write.

replies(4): >>44498182 #>>44498258 #>>44498373 #>>44498385 #
j-pb ◴[] No.44498373[source]
I disagree; elegant software is explicit. Tbh I wouldn't mind if we got rid of derives tomorrow. Given the ability of LLMs to generate and maintain all that boilerplate for you, I don't see a reason for having "barely enough smarts" heuristic solutions to this.

I'd rather have a simple and explicit language with a bit more typing than a Perl that tries to include 10,000 convenience hacks.

(Something like Uiua is ok too, but their tacitness comes from simplicity, not convenience.)

Debug is a great example of this. Is derived Debug convenient? Sure. Does it produce good error messages? No. How could it? Only you know which fields are important and how they should be presented. (Maybe convert the binary fields to hex, or display the bitset as a bit matrix.)

We're leaving so much elegance and beauty in software engineering on the table, just because we're lazy.

replies(2): >>44498536 #>>44500053 #
zwnow ◴[] No.44498536[source]
I'm sorry, but Uiua and LLM-generated code? This has to be a shitpost.
replies(2): >>44498734 #>>44500555 #
1. Intermernet ◴[] No.44498734[source]
Welcome to the new normal. Love it or hate it, there are now a bunch of devs who use LLMs for basically everything. Some are producing good stuff, but I worry that many don't understand the subtleties of the code they're shipping.
replies(1): >>44500668 #
2. j-pb ◴[] No.44500668[source]
The thing that convinced me was being able to write far more documentation, specification, tests and formal verification material than I could before, so that the LLM basically has no choice but to build the right thing.

OpenAI's Codex model is also ridiculously capable compared to everything else, which helps a lot.

replies(1): >>44515856 #
3. josephg ◴[] No.44515856[source]
I’ve never tried to use it for formal verification. Does it work well for that? Is it smart enough to fix formal verification errors?

The place this approach falls down for me is refactoring. Sure, you can get ChatGPT to help you write a big program. But when I work like that, I don't have the kind of insight needed to simplify the program and make it more elegant. And if I missed some crucial feature that requires some of the code to be rewritten, ChatGPT isn't anywhere near as helpful, especially since I don't understand the code as well as I would have if I'd authored it from scratch myself.