465 points by 0x63_Problems | 249 comments
1. vander_elst ◴[] No.42138032[source]
"Companies with relatively young, high-quality codebases"

I thought that at the beginning the code might be a bit messy because of the need to iterate fast, and that quality comes with time. What's the experience of the crowd on this?

replies(9): >>42138075 #>>42138094 #>>42138186 #>>42138274 #>>42138314 #>>42138387 #>>42138735 #>>42139575 #>>42144797 #
2. eesmith ◴[] No.42138045[source]
> human experts should do the work of refactoring legacy code until genAI can operate on it smoothly

How does one determine if that's even possible, much less estimate the work involved to get there?

After all, 'subtle control flow, long-range dependencies, and unexpected patterns' do not always indicate tech-debt.

replies(1): >>42138805 #
3. yuliyp ◴[] No.42138072[source]
This is just taking the advice to make code sane so that humans can understand and modify it, and then justifying it as "AI should be able to understand and modify it". I mean, the same developer efficiency improvements apply to both humans and AI. The only difference is that currently humans working in a space eventually learn the gotchas, while current AIs don't really have that ability to learn the nuances of a particular space over time.
4. dkdbejwi383 ◴[] No.42138075[source]
I don't think there's such a thing as a single metric for quality - the code should do what is required at the time and scale. At the early stages, you can get away with inefficient things that are faster to develop and iterate on, then when you get to the scale where you have thousands of customers and find that your problem is data throughput or whatever, and not speed of iteration, you can break that apart and make a more complex beast of it.

You gotta make the right trade-off at the right time.

replies(1): >>42138224 #
5. perrygeo ◴[] No.42138092[source]
> Companies with relatively young, high-quality codebases benefit the most from generative AI tools, while companies with gnarly, legacy codebases will struggle to adopt them. In other words, the penalty for having a ‘high-debt’ codebase is now larger than ever.

This mirrors my experience using LLMs on personal projects. They can provide good advice only to the extent that your project stays within the bounds of well-known patterns. As soon as your codebase gets a little bit "weird" (ie trying to do anything novel and interesting), the model chokes, starts hallucinating, and makes your job considerably harder.

Put another way, LLMs make the easy stuff easier, but royally screw up the hard stuff. The gap does appear to be widening, not shrinking. They work best where we need them the least.

replies(24): >>42138267 #>>42138350 #>>42138403 #>>42138537 #>>42138558 #>>42138582 #>>42138674 #>>42138683 #>>42138690 #>>42138884 #>>42139109 #>>42139189 #>>42140096 #>>42140476 #>>42140626 #>>42140809 #>>42140878 #>>42141658 #>>42141716 #>>42142239 #>>42142373 #>>42143688 #>>42143791 #>>42151146 #
6. skydhash ◴[] No.42138094[source]
Some frameworks like Laravel can bring you far in terms of features. You're mostly gluing stuff together on top of a high-quality codebase. It gets ugly when you need to add all the edge cases that every real-world use case entails. And suddenly you have hundreds of lines of if statements in one method.
7. dkdbejwi383 ◴[] No.42138113[source]
> However, in ‘high-debt’ environments with subtle control flow, long-range dependencies, and unexpected patterns, they struggle to generate a useful response

I'd argue that a lot of this is not "tech debt" but just signs of maturity in a codebase. Real world business requirements don't often map cleanly onto any given pattern. Over time codebases develop these "scars", little patches of weirdness. It's often tempting for the younger, less experienced engineer to declare this as tech debt or cruft or whatever, and that a full re-write is needed. Only to re-learn the lessons those scars taught in the first place.

replies(8): >>42138467 #>>42138490 #>>42138644 #>>42138759 #>>42139133 #>>42141484 #>>42142736 #>>42143702 #
8. ◴[] No.42138149[source]
9. nyrikki ◴[] No.42138186[source]
It purely depends on whether or not a culture develops that values leaving options open for the future.

Young companies tend to have systems that are small enough, or with enough institutional knowledge, to pivot when needed, and tend to have small teams with good lines of communication that allow for a shared purpose and values.

Architectural erosion is typically a long-tailed problem.

Large legacy companies that can avoid architectural erosion do better than some startups that don't actively target maintainability, but it tends to require stronger commitment from leadership than most orgs can maintain.

In my experience most large companies confuse the need to maintain adaptability with a need to impose silly policies that are applied irrespective of the long term impacts.

Integration and disintegration drivers are too fluid, context sensitive, and long term for prescription at a central layer.

The possibly mythical Amazon API edict is an example where focusing on separation and product focus could work, with high costs if you never get to the scale where it pays off.

The runways and guardrails concept seems to be a good thing in the clients I have worked for.

10. amelius ◴[] No.42138212[source]
AI has a different "tech debt" issue.

Because with AI you can turn any problem into a black box. You build a model, and call it "solved". But then reality hits ...

replies(1): >>42138665 #
11. leptons ◴[] No.42138221[source]
I asked the AI to write me some code to get a list of all the objects in an S3 bucket. It returned some code that worked, it would no doubt be approved by most developers. But on further inspection I noticed that it would cause a bug if the bucket had more than 1000 objects because S3 only delivers 1000 max objects per request, and the API is paged, and the AI had no ability to understand this. So the AI's code would be buggy should the bucket contain more than 1000 objects, which is really, really easy to do with an S3 bucket.
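For reference, a correct version has to follow S3's continuation tokens rather than trusting a single response. A minimal sketch using boto3's built-in paginator (illustrative only; the bucket name comes from the caller):

    import boto3

    def list_all_keys(bucket: str) -> list[str]:
        """Return every object key in the bucket, not just the first 1000."""
        s3 = boto3.client("s3")
        keys = []
        # list_objects_v2 caps each response at 1000 objects; the paginator
        # follows the continuation tokens so no page is silently dropped.
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket):
            for obj in page.get("Contents", []):
                keys.append(obj["Key"])
        return keys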
replies(4): >>42138486 #>>42139105 #>>42139285 #>>42139303 #
12. nyrikki ◴[] No.42138224{3}[source]
This!

Active tradeoff analysis and a structure that allows for honest reflection on current needs is the holy grail.

Choices are rarely about what is best and are rather about finding the least worst option.

13. luckydata ◴[] No.42138234[source]
Bah, this article is a bunch of nonsense. You're saying that a technology that has been around for a grand total of 2 years is not yet mature? Color me shocked.

I'm sure nothing will change in the future either.

replies(1): >>42138646 #
14. grahamj ◴[] No.42138238[source]
I agree with a lot of the assertions made in TFA but not so much the conclusion. AI increasing the velocity of simpler code doesn't make tech debt more expensive, it just means high-debt code won't benefit as much / be made cheaper.

OTOH if devs are getting the simpler stuff done faster maybe they have more time to work on debt.

15. RangerScience ◴[] No.42138267[source]
Eh, it’s been kinda nice to just hit tab-to-complete on things like formulaic (but comprehensive) test suites, etc.

I never wanted the LLM to take over the (fun) part - thinking through the hard/unusual parts of the problem - but you’re also not wrong that they’re needed the least for the boilerplate. It’s still nice :)

replies(2): >>42138489 #>>42138729 #
16. AnotherGoodName ◴[] No.42138274[source]
I find messiness often comes from capturing every possible edge case, something a young codebase probably doesn't do yet, tbh.

A user deleted their account and there's now a request to register that account with that username? We didn't think of that (there are UX concerns around imposters and abuse to handle). Better code in a catch and handle this. Do this 100x and your code has 100x custom branching logic that potentially interacts in n^2 ways, since each exceptional event can occur in conjunction with other exceptional events.

It’s why I caution strongly against rewrites. It’s easy to look at code and say it’s too complex for what it does, but is the complexity actually needless? Can you think of a way to refactor the complexity out? If so, do that refactor; if not, a rewrite won't solve it.
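As a concrete sketch of that accumulation (all names and rules here are invented for illustration):

    # Hypothetical registration handler that has accumulated guards,
    # one per hard-won edge case.
    active, deleted, reserved = {"alice"}, {"bob"}, {"admin"}

    def register(username: str) -> str:
        if username in active:
            return "error: username taken"
        if username in deleted:
            # UX concern: re-registering a deleted name enables impersonation.
            return "error: username retired"
        if username in reserved:
            # Staff-like and trademarked names are blocked.
            return "error: username reserved"
        # Each new incident tends to add another branch above, and the
        # branches can interact: a name can be deleted *and* reserved.
        active.add(username)
        return "ok"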

replies(1): >>42141613 #
17. rsynnott ◴[] No.42138280[source]
> Instead of trying to force genAI tools to tackle thorny issues in legacy codebases, human experts should do the work of refactoring legacy code until genAI can operate on it smoothly. When direct refactoring is still too risky, teams can adjust their development strategy with approaches like strangler fig to build greenfield modules which can benefit immediately from genAI tooling.

Or, y'know, just not bother with any of this bullshit. "We must rewrite everything so that CoPilot will sometimes give correct answers!" I mean, is this worth the effort? Why? This seems bonkers, on the face of it.

replies(1): >>42138485 #
18. happytoexplain ◴[] No.42138314[source]
A startup with talent theoretically follows that pattern. If you're not a startup, you don't need to go fast in the beginning. If you don't have talent in both your dev team and your management, the codebase will get worse over time. Every company can differ on those two variables, and their codebases will reflect that. Probably most companies are large and talent-starved, so they go slow, start out with good code, then get bad over time.
19. NitpickLawyer ◴[] No.42138330[source]
... until it won't. A mature code-base also has (or should have) strong test coverage, both in unit testing and comprehensive integration testing. With proper CI/CD pipelines, you can have a small team update and upgrade stuff at a fraction of the usual cost (see Amazon going from old Java to newer versions) and "pay off" some of that debt.

The tooling for this will only improve.

replies(1): >>42138684 #
20. dcchambers ◴[] No.42138350[source]
Like most of us it appears LLMs really only want to work on greenfield projects.
replies(2): >>42138525 #>>42138978 #
21. RangerScience ◴[] No.42138387[source]
IME, “young” correlates with health b/c less time has been spent making it a mess… but, what’s really going on is the company’s culture and how it relates to quality work, aka, whether engineers are given the time to perform deep maintenance as the iteration concludes.

Maybe… to put it another way, it’s that time spent on quality isn’t time spent on discovery, but it’s only time spent on quality that gets you quality. So while a company is heavily focused on discovery - iteration, p/m fit, engineers figuring it out, etc - it’s not making a good codebase, and if they never carve out time to focus on quality, that won’t change.

That’s not entirely true - IMO, there’s a synergistic, not exclusionary relationship between the two - but it gets the idea across, I think.

22. dcchambers ◴[] No.42138389[source]
LLM code gen tools are really freaking good...at making the exact same react boilerplate app that everyone else has.

The moment you need to do something novel or complicated they choke up.

This is why I'm not very confident that tools like Vercel's v0 (https://v0.dev/) are useful for more than just playing around. It seems very impressive at first glance - but it's a mile wide and only an inch deep.

replies(2): >>42138440 #>>42138881 #
23. ◴[] No.42138403[source]
24. sheerun ◴[] No.42138406[source]
Good for us I guess?
25. holoduke ◴[] No.42138440[source]
If you can create boilerplate code, logging, documentation, and common algorithms with AI, it saves you a lot of time which you can use on your specialized stuff. I am convinced that you can make yourself 2x as productive by using an AI. Just use it in the proper way.
replies(6): >>42138509 #>>42138590 #>>42139129 #>>42139261 #>>42140602 #>>42144895 #
26. 42lux ◴[] No.42138446[source]
Microservices are back on the menu, boys.
27. bob1029 ◴[] No.42138457[source]
> Not only does a complex codebase make it harder for the model to generate a coherent response, it also makes it harder for the developer to formulate a coherent request.

> This experience has led most developers to “watch and wait” for the tools to improve until they can handle ‘production-level’ complexity in software.

You will be waiting until the heat death of the universe.

If you are unable to articulate the exact nature of your problem, it won't ever matter how powerful the model is. Even a nuclear weapon will fail to have effect on target if you can't approximate its location.

Ideas like dumpstering all of the codebase into a gigantic context window seem insufficient, since the reason you are involved in the first place is because that heap is not doing what the customer wants it to do. It is currently a representation of where you don't want to be.

replies(1): >>42139585 #
28. Clubber ◴[] No.42138467[source]
I call them warts, but yes, agreed, especially in an industry that does a lot of changing, for example a heavily regulated one.
29. Clubber ◴[] No.42138485[source]
>I mean, is this worth the effort? Why?

It doesn't matter, it's the new hotness. Look at scrum, how shit it is for software and for devs, yet it's absolutely everywhere.

Remember "move fast and break things?" Everyone started taking that as gospel and writing garbage code. It seems the industry is run by toddlers.

/rant

30. asabla ◴[] No.42138486[source]
To some extent I do agree with the point you're trying to make.

But unless you specify that pagination needs to be handled as well, the LLM will naively implement just the bare minimum.

Context matters. And supplying enough context is what makes all the difference when interacting with these kinds of solutions.

replies(1): >>42139424 #
31. bunderbunder ◴[] No.42138490[source]
I recently watched a team speedrun this phenomenon in rather dramatic fashion. They released a ground-up rewrite of an existing service to much fanfare, talking about how much simpler it was than the old version. Only to spend the next year systematically restoring most of those pieces of complexity as whoever was on pager duty that week got to experience a high-pressure object lesson in why some design quirk of the original existed in the first place.

Fast forward to now and we're basically back to where we started. Only now they're working on code that was written in a different language, which I suppose is (to misappropriate a Royce quote) "worth something, but not much."

That said, this is also a great example of why I get so irritated with colleagues who believe it's possible for code to be "self-documenting" on anything larger than a micro-scale. That's what the original code tried to do, and it meant that its current maintainers were left without any frickin' clue why all those epicycles were in there. Sure, documentation can go stale, but even a slightly inaccurate accounting for the reason would have, at the very least, served as a clear reminder that a reason did indeed exist. Without that, there wasn't much to prevent them from falling into the perennially popular assumption that one's esteemed predecessors were idiots who had no clue what they were doing.
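As an invented illustration of the kind of "why" comment being argued for, even a terse note proves a reason existed:

    # Invented example: the "epicycle" is preserved, and the comment
    # records why, even if imperfectly.
    def chunk_invoices(invoices: list) -> list:
        # 97, not 100: the upstream billing API times out on larger batches
        # when invoices carry attachments (hypothetical incident, purely for
        # illustration). Even a stale note like this signals a reason existed.
        BATCH = 97
        return [invoices[i:i + BATCH] for i in range(0, len(invoices), BATCH)]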

replies(4): >>42138653 #>>42138799 #>>42139332 #>>42143768 #
32. hyccupi ◴[] No.42138489{3}[source]
> It’s still nice :)

This is the thing about the kind of free advertising so many on this site provide for these llm corpos.

I’ve seen so many comparisons between “ai” and “stack overflow” that mirror this sentiment of “it’s still nice :)”.

Who’s laying off and replacing thousands of working staff for “still nice :)” or because of “stack overflow”?

Who’s hiring former alphabet agency heads to their board for “still nice :)”?

Who’s forcing these services into everything for “still nice :)”?

Who’s raising billions for “still nice :)”?

So while developers argue tooth and nail for these tools that they seemingly think everyone only sees through their personal lens of a “still nice :)” developer tool, the companies are leveraging that effort to oversell their product beyond the scope of “still nice :)”.

33. endemic ◴[] No.42138509{3}[source]
I feel like we should get rid of the boilerplate, rather than have an LLM barf it out.
replies(4): >>42138708 #>>42138721 #>>42138872 #>>42139816 #
34. hyccupi ◴[] No.42138525{3}[source]
Good joke, but the reality is they falter even more on truly greenfield projects.

See: https://news.ycombinator.com/item?id=42134602

replies(2): >>42138662 #>>42138664 #
35. TOGoS ◴[] No.42138537[source]
> They work best where we need them the least.

Just like most of the web frameworks and ORMs I've been forced to use over the years.

36. browningstreet ◴[] No.42138539[source]
I keep waiting for the pairing of coding LLMs with a programming language created specifically to be coupled with a coding LLM.
replies(2): >>42138649 #>>42143006 #
37. comboy ◴[] No.42138558[source]
Same experience, but I think it's going to change. As models get better, their context window keeps growing while mine stays the same.

To be clear, our context window can be really huge if you are living the project. But not if you are new to it or even getting back to it after a few years.

replies(1): >>42138736 #
38. graycat ◴[] No.42138582[source]
I suspected some of that, and your explanation looks more general and good.

Or, for a joke, LLMs plagiarize!

39. dcchambers ◴[] No.42138590{3}[source]
I guess that's a good way to think of it. Despite not being very useful (currently, anyway) for certain types of complicated or novel work - they still are very useful for other types of work and can help reduce development toil.
40. nimish ◴[] No.42138612[source]
Evergreen: https://static.googleusercontent.com/media/research.google.c...

Machine learning is the high interest credit card of technical debt.

replies(2): >>42138920 #>>42141483 #
41. phillipcarter ◴[] No.42138635[source]
Speaking personally, I've found this tech much more helpful in existing codebases than new ones.

Missing test? Great, I'll get help identifying what the code should be doing, then use AI to write a boatload of tests in service towards those goals. Then I'll use it to help refactor some of the code.

But unlike the article, this requires actively engaging with the tool rather than, as they say, a "watch and wait" (i.e., lazy) approach to developing.

42. latortuga ◴[] No.42138644[source]
Louder for the people in the back. I've had this notion for quite a long time that "tech debt" is just another way to say "this code does things in ways I don't like". This is so well said, thank you!
replies(1): >>42139324 #
43. elforce002 ◴[] No.42138646[source]
According to Ilya Sutskever: "results from scaling up pre-training have plateaued".

https://www.reuters.com/technology/artificial-intelligence/o...

They're trying other techniques to improve what we already have atm.

replies(1): >>42138925 #
44. verdverm ◴[] No.42138649[source]
The problem is less the language and more what is written with any given language

The world is complex and we have to write a lot of code to capture that complexity. LLMs are good at the first 20% but balk at the 80% effort to match reality

45. mandevil ◴[] No.42138653{3}[source]
Hahaha, Joel Spolsky predicted exactly that IN THE YEAR 2000:

https://www.joelonsoftware.com/2000/04/06/things-you-should-...

replies(2): >>42139178 #>>42142700 #
46. MrMcCall ◴[] No.42138662{4}[source]
That is because, by definition, their models are based upon the past. And woe unto thee if that training data was not pristine. Error propagation is a feature; it's a part of the design, unless one is suuuuper careful. As some have said, "Fools rush in."
replies(1): >>42140455 #
47. anthonyskipper ◴[] No.42138664{4}[source]
I agree with this. But the reason is that AI does better the more constrained it is, and existing codebases come with constraints.

That said, if you are using gen AI without an advanced RAG system feeding it lots of constraints and patterns/templates, I wish you luck.

48. verdverm ◴[] No.42138665[source]
This was what I thought the post would talk about before clicking through. AI adds tech debt because none of the people maintaining or operating the code actually wrote it, so no one is familiar with the implementation.
replies(1): >>42140792 #
49. p0nce ◴[] No.42138671[source]
Code is not really lossy zipped text.
50. irrational ◴[] No.42138674[source]
I was recently assigned to work on a huge legacy ColdFusion backend service. I was very surprised at how useful AI was with the code. It was even better, in my experience, than I've seen with Python, Java, or TypeScript. The only explanation I can come up with is that there is so much legacy ColdFusion code out there that was used to train Copilot and whatever AI JetBrains uses for code completion that this is one of the languages they are most suited to assist with.
replies(4): >>42139225 #>>42139249 #>>42139393 #>>42139543 #
51. slt2021 ◴[] No.42138683[source]
Maybe it's a signal that your software should be restructured into modules that fit well-established patterns.

It's like building a website that's not using MVC and complaining that the LLM's advice is garbage...

replies(1): >>42139787 #
52. elforce002 ◴[] No.42138684[source]
According to Ilya Sutskever: "results from scaling up pre-training have plateaued". https://www.reuters.com/technology/artificial-intelligence/o...

They're trying other techniques to improve what we already have atm, but we're almost at the limit of its capabilities.

53. anthonyskipper ◴[] No.42138690[source]
This is only partly true. AI works really well on very legacy codebases like cobol and mainframe, and it's very good at converting that to modern languages and architectures. It's all the stuff from like 2001-2015 that it gets weird on.
replies(1): >>42138720 #
54. JohnFen ◴[] No.42138708{4}[source]
Honestly, this bit about genAI being good at generating boilerplate is correct, but it always makes me wonder... is this really a thing that would save a ton of time? How much boilerplate are people writing? Only a small fraction of code that I write involves boilerplate.
replies(1): >>42143667 #
55. dartos ◴[] No.42138720{3}[source]
> AI works really well on very legacy codebases like cobol and mainframe

Any sources? Seems unlikely that LLMs would be good at something with so little training data in the widely available internet.

replies(1): >>42140055 #
56. lowbloodsugar ◴[] No.42138721{4}[source]
When I try to read code on GitHub that has the var or val keyword, I have no fucking idea what the types of the variables are. Sure, the compiler can infer, since it’s just ingested your entire code base, but I have a single page of text to look at.

Some boilerplate is good.

57. perrygeo ◴[] No.42138729{3}[source]
True, if you're using LLMs as a completion engine or to generate scaffolding it's still very useful! But we have to acknowledge that's by far the easiest part of programming. IDEs and deterministic dev tools have done that (very well) for decades.

The LLM gains are in efficiency for rote tasks, not solving the other hard problems that make up 98% of the day. The idea that LLMs are going to advance software in any substantial way seems implausible to me - It's an efficiency tool in the same category as other IDE features, an autocomplete search engine on steroids, not even remotely approaching AGI (yet).

replies(1): >>42143100 #
58. JohnFen ◴[] No.42138735[source]
> what's the experience of the crowd on this?

It's very hard to retrofit quality into existing code. It really should be there from the very start.

59. MrMcCall ◴[] No.42138736{3}[source]
Here's the secret to grokking a software project: a given codebase is not understandable without understanding how and why it was built; i.e. if you didn't build it, you're not going to understand why it is the way it is.

In theory, the codebase should be, as it is, understandable (and it is, with a great deal of rigorous study). In reality, that's simply not the case, not for any non-trivial software system.

replies(3): >>42139319 #>>42139451 #>>42142633 #
60. dartos ◴[] No.42138759[source]
Imo real tech debt is when the separation between business logic and implementation details get blurry.

Rewrites tend to focus all in on implementation.

61. lcnPylGDnU4H9OF ◴[] No.42138799{3}[source]
> Only to spend the next year systematically restoring most of those pieces of complexity as whoever was on pager duty that week got to experience a high-pressure object lesson in why some design quirk of the original existed in the first place.

Just to emphasize the point: even if it's not obvious why there is a line of code, it should at least be obvious that the line of code does something. It's important to find out what that something is and remember it for a refactor. At the very least, the knowledge could help you figure out a bug a day or two sooner than you would by poring over every line in the diff.

replies(1): >>42139010 #
62. svaha1728 ◴[] No.42138805[source]
As long as you can constrain your solution to the logic contained inside a Todo app, all is golden /s
63. benatkin ◴[] No.42138833[source]
The author starts with a straw man argument, of someone who thinks that AI is great at dealing with technical debt. He makes little attempt to steel man their argument. Then the author argues the opposite without much supporting evidence. I think the author is right that some people were quick to assume that AI is much better for brownfield projects, but I think the author was also quick to assume the opposite.
64. swatcoder ◴[] No.42138866[source]
> There is an emerging belief that AI will make tech debt less relevant.

Wow. It's hard to believe that people are earnestly supposing this. From everything we have evidence of so far, AI generated code is destined to be a prolific font of tech debt. It's irregular, inconsistent, highly sensitive to specific prompting and context inputs, and generally produces "make do" code at best. It can be extremely "cheap" vs traditional contributions, but gets to where it's going by the shortest path rather than the most forward-looking or comprehensive.

And so it does indeed work best with young projects where the prevailing tech debt load remains low enough that the project can absorb large additions of new debt and incoherence, but that's not to the advantage of young projects. It's setting those projects up to be young and debt-swamped much sooner than they would otherwise be.

If mature projects can't use generative AI as extensively, that's going to be to their advantage, not their detriment -- at least in terms of tech debt. They'll be forced to continue plodding along at their lumbering pace while competitors bloom and burst in cycles of rapid initial development followed by premature seizure/collapse.

And to be clear: AI generated code can have real value, but the framing of this article is bonkers.

replies(2): >>42142187 #>>42147507 #
65. Terr_ ◴[] No.42138872{4}[source]
Yeah, I often like to point out that our entire industry is already built on taking repeatable stuff and then abstracting it away.

Boilerplate code exists when the next step is often to start customizing it in a unique and unpredictable way.

66. shmoogy ◴[] No.42138881[source]
Most people don't do novel things, and those that do still have like 90% the same business logic somebody else has done a million times over.
67. jamil7 ◴[] No.42138884[source]
> This mirrors my experience using LLMs on personal projects. They can provide good advice only to the extent that your project stays within the bounds of well-known patterns.

I agree, but I find it's still a great productivity boost for certain tasks. Cutting through the hype, figuring out which tasks are well suited to these tools, and prompting optimally has taken me a long time.

replies(1): >>42139011 #
68. morkalork ◴[] No.42138920[source]
This is funny in the context of seeing GCP try to deprecate a text embedding API and then push out the deadline by 6 months.
69. luckydata ◴[] No.42138925{3}[source]
And we plow through plateaus every 6 months, regularly, by inventing something new. I thought we were engineers, not some kind of Amish cult.
70. ◴[] No.42138939[source]
71. benatkin ◴[] No.42138978{3}[source]
The site also suggests LLMs care a great deal one way or another.

"Unlock a codebase that your engineers and AI love."

https://www.gauge.sh/

I think they do often act opinionated and show some decision-making ability, so AI alignment really is important.

replies(1): >>42140471 #
72. r_hanz ◴[] No.42138998[source]
The title of this article made me think the opposite: paying down traditional tech debt, from bugs or whatever, is straightforward, whereas software with tech debt that also incorporates AI isn't a straightforward rewrite; it takes ML skills to pay down.
73. mandevil ◴[] No.42139010{4}[source]
In my refactoring I always refer to that as Chesterton's Fence. Never remove something until you know why it was put in in the first place. Plenty of times it's there because you were trying to support Python 3.8 or something else obsolete. A whole lot of the time it's there because you thought the next project was going to be X, so you tried to make that easy, but X never got done and you have code to nowhere. Then feel free to refactor it. But a lot of the time it's there for good reasons that are NOT obsolete or overtaken by events, and when refactoring you need to be able to tell the difference.

https://www.chesterton.org/taking-a-fence-down/ has the full cite on the names.

replies(3): >>42140023 #>>42140212 #>>42141327 #
74. pydry ◴[] No.42139011{3}[source]
I hear people say this a lot, but invariably the tasks end up being "things you shouldn't be doing".

E.g. pointing the AI at your code and getting it to write unit tests, or writing more boilerplate, faster.

75. yawnxyz ◴[] No.42139105[source]
yeah AI isn't good at uncovering all the foot guns and corner cases, but I think this reflects most of StackOverflow, which (not coincidentally) also misses all of these
76. cheald ◴[] No.42139109[source]
The niche I've found for LLMs is for implementing individual functions and unit tests. I'll define an interface and a return (or a test name and expectation) and say "this is what I want this to do", and let the LLM take the first crack at it. Limiting the bounds of the problem to be solved does a pretty good job of at least scaffolding something out that I can then take to completion. I almost never end up taking the LLM's autocompletion at face value, but having it written out to review and tweak does save substantial amounts of time.

The other use case is targeted code review/improvement. "Suggest how I could improve this" fills a niche which is currently filled by linters, but can be more flexible and robust. It has its place.

The fundamental problem with LLMs is that they follow patterns, rather than doing any actual reasoning. This is essentially the observation made by the article; AI coding tools do a great job of following examples, but their usefulness is limited to the degree to which the problem to be solved maps to a followable example.
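That workflow looks roughly like this (a hypothetical, illustrative example): the human writes the interface and the test, and the LLM drafts the body for review rather than being taken at face value.

    from dataclasses import dataclass

    # The human pins down the interface and the expectation...
    @dataclass
    class LineItem:
        sku: str
        quantity: int
        unit_price_cents: int

    def order_total_cents(items: list[LineItem]) -> int:
        """Sum of quantity * unit price across all line items."""
        # ...and the LLM takes the first crack at the body, which is then
        # reviewed and tweaked rather than accepted as-is.
        return sum(item.quantity * item.unit_price_cents for item in items)

    def test_order_total_cents():
        items = [LineItem("A1", 2, 500), LineItem("B2", 1, 250)]
        assert order_total_cents(items) == 1250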

replies(3): >>42140322 #>>42143531 #>>42143847 #
77. kazinator ◴[] No.42139114[source]
> Companies with relatively young, high-quality codebases benefit the most from generative AI tools, while companies with gnarly, legacy codebases will struggle to adopt them.

So you say, but {citation needed}. Stuff like this is simply not known yet.

AI can easily be applied in legacy codebases, like to help with time-consuming refactoring.

78. alberth ◴[] No.42139126[source]
> AI makes tech debt more expensive

This isn't AI's doing.

It's the result of adding any new feature to a product with existing tech debt.

And since AI for most companies is a feature, like any feature, it only makes the tech debt worse.

79. guluarte ◴[] No.42139129{3}[source]
Or you can just start with a well-maintained boilerplate.
80. kazinator ◴[] No.42139133[source]
Code that has thorough unit and integration tests, no matter how old and crusty, can be refactored with a good deal of confidence, and AI can help with that.
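For instance, characterization tests pin down current behavior, warts and all, before anyone (human or AI) refactors. A minimal self-contained sketch, with an invented stand-in for the legacy function:

    import unittest

    def legacy_quote(units: int, region: str) -> int:
        # Stand-in for the old, crusty implementation about to be refactored.
        price = units * 1099
        if units >= 50:
            price = int(price * 0.95)  # volume discount bolted on years ago
        return price

    class TestQuoteCharacterization(unittest.TestCase):
        # Characterization tests record what the code does today, so any
        # refactor can be checked against the same outputs.
        def test_small_order(self):
            self.assertEqual(legacy_quote(1, "EU"), 1099)

        def test_volume_discount_kicks_in(self):
            self.assertEqual(legacy_quote(50, "EU"), 52202)

    if __name__ == "__main__":
        unittest.main()
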
81. tux1968 ◴[] No.42139156[source]
This type of analysis is a mirror of the early days of chess "AI". All kinds of commentary explaining the weaknesses of the engines, and extolling the impossible-to-reproduce capabilities of human players. But while they may have been correct in the moment, they didn't really appreciate the march toward utter dominance and supremacy of the machines over human players.

While there is no guarantee that the same trajectory is true for programming, we need to heed how emotionally attached we can be to denying the possibility.

82. stego-tech ◴[] No.42139165[source]
While this primarily focuses on the software development side of things, I’d like to chime in that this applies to the IT side of the equation as well.

LLMs can’t understand why your firewall rules have strange forwards for ancient enterprise systems, nor can they “automate” Operations on legacy systems or custom implementations. The only way to fix those issues is to throw money and political will behind addressing technical debt in a permanent sense, which no organization seemingly wants to do.

These things aren’t silver bullets, and throwing more technology at an inherently political problem (tech debt) won’t ever solve it.

83. kazinator ◴[] No.42139178{4}[source]
Times have changed. Code now does acquire bugs just by sitting there. Assholes you depend on are changing language definitions, compiler behavior, and libraries in a massive effort concentrated on breaking your code. :)
replies(4): >>42140087 #>>42140153 #>>42140245 #>>42144174 #
84. cloverich ◴[] No.42139189[source]
For me, same experience but the opposite conclusion. LLMs save me time by being excellent at yak shaving, letting me focus on the things that truly need my attention.

It would be great if they were good at the hard stuff too, but if I had to pick, the basics are where I want them the most. My brain just really dislikes that stuff, and I find it challenging to stay focused and motivated on those things.

replies(2): >>42140617 #>>42141639 #
85. randomdata ◴[] No.42139225{3}[source]
Perhaps it is the reverse: That ColdFusion training sources are limited, so it is more likely to converge on a homogenization?

Casually, we usually think of a programming language as being one thing, but in reality a programming language generally only specifies a syntax. All of the other features of a language emerge from the people using them. And because of that, two different people can end up speaking two completely different languages even when sharing the same syntax.

This is especially apparent when you witness someone who is familiar with programming in language X, who then starts learning language Y. You'll notice, at least at first, they will still try to write their programs in language X using Y syntax, instead of embracing language Y in all its glory. Now, multiply that by the millions of developers who will touch code in a popular language like Python, Java, or Typescript and things end up all over the place.

So while you might have a lot more code to train on overall, you need a lot more code for the LLM to be able to discern the different dialects that emerge out of the additional variety. Quantity doesn't imply quality.

replies(1): >>42139415 #
86. mdtancsa ◴[] No.42139249{3}[source]
Similar experience with Perl scripts being rewritten into Golang. Crazy good experience with Claude.
87. croes ◴[] No.42139261{3}[source]
Tested code libraries save time. AI-generated code saves time at writing, but the review takes more time because it’s foreign code.
88. justincormack ◴[] No.42139285[source]
Claude did the simple version by default but I asked it to support more than 1000 and it did it fine
89. awkward ◴[] No.42139303[source]
Most AI code is kind of like that. It's sourced from demo quality examples and piecemeal paid work. The resulting code is focused on succinctly solving the problem in the prompt. Factoring and concerns external to making the demo work disappear first. Then any edge cases that might complicate the result get tossed.
90. mkleczek ◴[] No.42139318[source]
It is a self-reinforcing pattern: the easier it is to generate code, the more code is generated. The more code is generated, the bigger the cost of maintenance is (and the relationship is super-linear).

So every time we generate the same boilerplate we really do copy/paste adding to maintenance costs.

We are amazed looking at the code generation capabilities of LLMs forgetting the goal is to have less code - not more.

replies(1): >>42139365 #
91. thfuran ◴[] No.42139319{4}[source]
So your secret to understanding code is: Abandon hope all ye who enter here?
replies(2): >>42139724 #>>42142422 #
92. genidoi ◴[] No.42139324{3}[source]
There is a difference between "this code does things in ways I don't like" and "this code does things in ways nobody likes"
93. mdgrech23 ◴[] No.42139332{3}[source]
100% hear this, and I know as a developer at a big company I have no say over the business side of things, but there's probably something to be said for pushing for clear, logical business processes that make sense. Take something like a complicated offering of subscriptions: it's bad for the customer, it's bad for sales people, it's bad for customer support, and honestly it's probably even bad for marketing. Keep things simple. But I suppose those complexities ultimately allow for greater revenue through greater extraction of dollars per customer, e.g. people who meet certain criteria are willing to pay more, so we'll have this niche plan. But like I outlined above, at what cost? Are you even coming out ahead in the long run?
94. madeofpalk ◴[] No.42139365[source]
My experience is the opposite - I find large blobs of generated code to be daunting, so I tend to pretty quickly reject them and either write something smaller by hand, or reprompt (in one way or another) for less, easier-to-review code.
replies(2): >>42139414 #>>42140053 #
95. cpeterso ◴[] No.42139393{3}[source]
But where did these companies get the ColdFusion code for their training data? Since ColdFusion is an old language and used for backend services, how much ColdFusion code is open source and crawlable?
replies(2): >>42140959 #>>42141919 #
96. mkleczek ◴[] No.42139414{3}[source]
And what do you do with the generated code?

Do you package it in a reusable library so that you don't have to do the same prompting again?

Or rather - just because it is so easy to do - you don't bother?

If it's the latter, that's exactly the pattern I am talking about.

97. cpeterso ◴[] No.42139415{4}[source]
I wonder what a language designed as a target for LLM-generated code would look like? What semantics and syntax would help the LLM generate code that is more likely to be correct and maintainable by humans?
replies(1): >>42143160 #
98. dijksterhuis ◴[] No.42139424{3}[source]
not parent, but

> I asked the AI to write me some code to get a list of all the objects in an S3 bucket

they didn’t ask for all the objects in the first returned page of the query

they asked for all the objects.

the necessary context is there.

LLMs are just on par with devs who don’t read tickets properly / don’t pay attention to the API they’re calling (i’ve had this exact case happen with someone in a previous team and it was a combination of both).

replies(1): >>42139792 #
99. gwervc ◴[] No.42139451{4}[source]
Too bad most projects don't document any of those decisions.
100. eqvinox ◴[] No.42139543{3}[source]
That's great, but it's a sample size of 1, and AI utility is also self-confirmation-biasing. If the AI stops providing useful output, you stop using it. It's like "what you're searching for is always in the last place you look". After you recognize AI's limits, most people won't keep asking it to do things they've learned it can't do. But still, there's an area of things it does, and a (ok, fuzzy) boundary of its capabilities.

Basically, for any statement about AI helpfulness, you need to quantify how far it can help you. Depending on your personality, anything else is likely either always a success (if you have a positive outlook) or a failure (if you focus on the negative).

101. randomdata ◴[] No.42139575[source]
In my experience you need a high quality codebase to be able to iterate at maximum speed. Any time someone, myself included, thought they could cut corners to speed up iteration, it ended up slowing things down dramatically in the end.

Coding haphazardly can be a lot more thrilling, though! I certainly don't enjoy the process of maintaining high quality code. It is lovely in hindsight, but an awful slog in the moment. I suspect that is why startups often need to sacrifice quality: The aforementioned thrill is the motivation to build something that has a high probability of being a complete waste of time. It doesn't matter how fast you can theoretically iterate if you can't compel yourself to work on it.

replies(1): >>42140206 #
102. mkleczek ◴[] No.42139585[source]
Well, increasing temperature (ie. adding some more randomness) for sure is going to magically generate a solution the customer wants. Right? /s
103. marcosdumay ◴[] No.42139724{5}[source]
Oh, you will understand why things were built. It's inevitable.

And all of that understanding will come from people complaining about you fixing a bug.

104. btbuildem ◴[] No.42139742[source]
I recently started playing with OpenSCAD and CadQuery -- tried a variety of the commercial LLMs, they all fall on their face so hard, teeth go flying.

This is for tiny code snippets, hello-world size, stringing together some primitives to render relatively simple objects.

Turns out, if the codebase / framework is a bit obscure and poorly documented, even the genie can't help.

105. marcosdumay ◴[] No.42139787{3}[source]
No, you shouldn't restructure your software into highly-repetitive noise so that a dumb computer can guess what comes next.
replies(1): >>42141132 #
106. danielbln ◴[] No.42139792{4}[source]
LLMs differ, though. The newest Claude just gave me a paginated solution without further prodding.

In other, more obscure cases I just add the documentation to its context and let it work based on that.

107. danenania ◴[] No.42139816{4}[source]
There's an inherent tradeoff here though. Beyond a certain complexity threshold, code that leans toward more boilerplate is generally much easier to understand and maintain than code that tries to DRY everything with layers of abstraction, indirection, and magic.
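A contrived contrast (all names invented): the config-driven version saves lines but hides each rule behind indirection, while the boilerplate version repeats itself and stays obvious at the call site.

    # "DRY" version: compact, but the rule for any one field hides
    # behind a registry lookup and a lambda.
    VALIDATORS = {"email": lambda v: "@" in v, "age": lambda v: v >= 0}

    def validate(field: str, value) -> bool:
        return VALIDATORS[field](value)

    # Boilerplate version: repetitive, but each rule is greppable and
    # independently changeable without touching shared machinery.
    def validate_email(value: str) -> bool:
        return "@" in value

    def validate_age(value: int) -> bool:
        return value >= 0
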
108. bunderbunder ◴[] No.42140023{5}[source]
Incidentally the person who really convinced me to stop trying to future-proof made a point along those lines. Not in the same language, but he basically pointed out that, in practice, future-proofing is usually just an extremely efficient way to litter your code with Chesterton's Fences.
109. munk-a ◴[] No.42140053{3}[source]
You are an excellent user of AI code generation - but your habit is absolutely not the norm and other developers will throw in paragraphs of AI slop mindlessly.
replies(1): >>42141501 #
110. true_religion ◴[] No.42140055{4}[source]
LLMs are good at taking the underlying structure of one medium and repeating it using another medium.
replies(1): >>42146828 #
111. 0xfeba ◴[] No.42140087{5}[source]
It acquires bugs, security flaws, and obsolescence from the operating system itself.
112. munk-a ◴[] No.42140096[source]
> Put another way, LLMs make the easy stuff easier, but royally screws up the hard stuff.

This is my experience with generation as well - but I still don't trust it for the easy stuff, and thus the model ends up being a hindrance in all scenarios. It is much easier for me to comprehend something I'm actively writing, so making sure a generative AI isn't hallucinating costs more than just writing it myself in the first place.

113. acheong08 ◴[] No.42140153{5}[source]
Golang really is the best when it comes to backwards compatibility. I'm able to import dependencies from 14 years ago and have them work with 0 changes
114. Halan ◴[] No.42140195[source]
It's not just the code produced with code generation tools, but also business logic built with gen AI.

For example, a RAG pipeline. People are rushing things to market that are not built to last. The likes of LangChain etc. offer little software engineering polish. I wish there were a more mature enterprise framework. Spring AI is still in the making and Go is lagging behind.
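For context, a RAG (retrieval-augmented generation) pipeline retrieves the most relevant documents and prepends them to the prompt before generation. A deliberately toy, dependency-free sketch of that shape, where a bag-of-words similarity stands in for a real embedding model:

    import math
    from collections import Counter

    # Toy "embedding": bag-of-words term counts. A real pipeline would call
    # an embedding model; this stand-in keeps the sketch self-contained.
    def embed(text: str) -> Counter:
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        norm = math.sqrt(sum(v * v for v in a.values()))
        norm *= math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    documents = [
        "Refunds are processed within 5 business days.",
        "Our warehouse ships orders Monday through Friday.",
    ]

    def build_prompt(question: str) -> str:
        # Retrieve: rank the corpus by similarity to the question.
        best = max(documents, key=lambda d: cosine(embed(question), embed(d)))
        # Augment: prepend the retrieved context; generation happens downstream.
        return f"Context: {best}\n\nQuestion: {question}\nAnswer:"

    print(build_prompt("How long do refunds take?"))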

115. RangerScience ◴[] No.42140206{3}[source]
> thought they could cut corners to speed up iteration

Anecdotally, I find you can get about 3 days of speed from cutting corners - after that, as you say, you get slowed down more than you got sped up. First day, you get massive speed from going haphazard; second day, you're running out of corners to cut, and on the third day you start running into problems you created for yourself on the first day.

replies(1): >>42140494 #
116. suzzer99 ◴[] No.42140212{5}[source]
I got really 'lucky' in that the first major project I ever worked on was future-proofed to high heaven, and I became the one to maintain that thing for a few years as none of the expected needs for multiple layers of future-proofing abstraction came to pass. Oh but if we ever wanted to switch from Oracle to Sybase, it would have been 30% easier with our database connection factory!

I never let that happen again.

replies(2): >>42144183 #>>42147304 #
117. lpapez ◴[] No.42140245{5}[source]
> Assholes you depend on are changing language definitions, compiler behavior, and libraries in a massive effort concentrated on breaking your code. :)

Big Open Source is plotting against the working class developer.

replies(1): >>42145889 #
118. MarcelOlsz ◴[] No.42140322{3}[source]
Can't tell you how much I love it for testing, it's basically the only thing I use it for. I now have a test suite that can rebuild my entire app from the ground up locally, and works in the cloud as well. It's a huge motivator actually to write a piece of code with the reward being the ability to send it to the LLM to create some tests and then seeing a nice stream of green checkmarks.
replies(3): >>42140464 #>>42140879 #>>42143641 #
119. yawnxyz ◴[] No.42140411[source]
I find AI most helpful with very specific, narrow commands (add a new variable to the logger, which means typescript and a bunch of other things need to be updated) and it can go off and do that. While it's doing that I'll be thinking about the next thing to be fixed already.

Asking it for higher level planning / architecture is just asking for pain

replies(1): >>42140738 #
120. Terr_ ◴[] No.42140455{5}[source]
Or, in comic form: https://www.smbc-comics.com/comic/rise-of-the-machines
121. beeflet ◴[] No.42140471{4}[source]
Remember to tip your LLM

https://minimaxir.com/2024/02/chatgpt-tips-analysis/

122. zer8k ◴[] No.42140476[source]
> the model chokes, starts hallucinating, and makes your job considerably harder.

Coincidentally this also happens with developers in unfamiliar territory.

replies(1): >>42142868 #
123. stahorn ◴[] No.42140494{4}[source]
A piece of advice I heard many years ago was to not be afraid to throw away code. I've actually used that advice from time to time. It's not really a waste of time to do a `git reset --hard master` if you wrote shit code but, while writing it, figured out how you should have written it.
replies(1): >>42143040 #
124. LittleTimothy ◴[] No.42140543[source]
>Instead of trying to force genAI tools to tackle thorny issues in legacy codebases, human experts should do the work of refactoring legacy code until genAI can operate on it smoothly

Instead of genAI doing the rubbish, boring, low status part of the job, you should do the bits of the job no one will reward you for, and then watch as your boss waxes lyrical about how genAI is amazing once you've done all the hard work for it?

It just feels like if you're re-directing your efforts to help the AI, because the AI isn't very good at actual complex coding tasks then... what's the benefit of AI in the first place? It's nice that it helps you with the easy bit, but the easy bit shouldn't be that much of your actual work and at the end of the day... it's easy?

This gives very similar vibes to: "I wanted machines to do all the soul crushing monotonous jobs so we would be free to go and paint and write books and fulfill our creative passions but instead we've created a machine to trivially create any art work but can't work a till"

125. kibwen ◴[] No.42140602{3}[source]
> I am convinced that you can make yourself x2 by using an AI.

This means you're getting paid 2x more, right?

...Right?

126. davidsainez ◴[] No.42140617{3}[source]
Yep, I'm building a dev tool that is based on this principle. Let me focus on the hard stuff, and offload the details to an AI in a principled manner. The current crop of AI dev tools seem to fall outside of this sweet spot: either they try to do everything, or act as a smarter code completion. Ideally I will spend more time domain modeling and less time "coding".
127. glouwbug ◴[] No.42140626[source]
Ironically enough I’ve always found LLMs work best when I don’t know what I’m doing
replies(2): >>42141007 #>>42141805 #
128. mrbombastic ◴[] No.42140632[source]
Is this based on a study or something? I just see a graph with no references. What am I missing here?
129. jvanderbot ◴[] No.42140637[source]
I cannot wait for the inevitable top-down backlash banning any use of AI tools.
130. tired_and_awake ◴[] No.42140706[source]
I love the way our SWE jobs are evolving. AI is eating the simple stuff, generating more code but with harder-to-detect bugs... I'm serious, it feels like we can move faster with these tools, but perhaps we have to operate differently.

We are a long ways from automating our jobs away, instead our expertise evolves.

I suspect doctors go through a similar evolution as surgical methods are updated.

I would love to read or participate in the discussion of how to be strategic in this new world. Specifically, how to best utilize code generating tools as a SWE. I suppose I can wait a couple of years for new school SWEs to teach me, unless anyone is aware of content on this?

131. davidsainez ◴[] No.42140738[source]
Current gen AI is bad at high level planning. But I've found it useful in iterating on my ideas, sort of a rubberduck++. It helps to have a system prompt that is not overly agreeable
replies(1): >>42141564 #
132. mouse_ ◴[] No.42140779[source]
Don't make me tap the sign.

"GARBAGE IN -- GARBAGE OUT!!"

133. JasserInicide ◴[] No.42140792{3}[source]
Yeah the article is just another borderline-useless self-promotion piece.
134. yieldcrv ◴[] No.42140809[source]
As the context windows get larger and the UX for analyzing multiple files gets better, I’ve found them to be pretty good.

But they still fail at devops because so many config scripts are at newer versions than the training set.

135. ◴[] No.42140878[source]
136. highfrequency ◴[] No.42140879{4}[source]
> I now have a test suite that can rebuild my entire app from the ground up

What does this mean?

replies(1): >>42141059 #
137. singingfish ◴[] No.42140908[source]
Today's job is finishing up and testing some rather gnarly haproxy configuration. There's already a fairly high chance I'm going to stuff something up with it. There is no chance that I'm giving some other entity that chance as well.
138. PaulHoule ◴[] No.42140932{5}[source]
I had Codeium add something to a function that added a new data value to an object. Unbidden it wrote three new tests, good tests. I wrote my own test by cutting and pasting a test it wrote with a modification, it pointed out that I didn’t edit the comment so I told it to do so.

It also screwed up the imports of my tests pretty bad, some imports that worked before got changed for no good reason. It replaced the JetBrains NotNull annotation with a totally different annotation.

It was able to figure out how to update a DAO object when I added a new field. It got the type of the field wrong when updating the object corresponding to a row in that database column, even though it wrote the Liquibase migration and should have known the type; we had chatted plenty about that migration.

It got many things right but I had to fix a lot of mistakes. It is not clear that it really saves time.

replies(2): >>42141053 #>>42141069 #
139. irrational ◴[] No.42140959{4}[source]
That's a good question. I presume there is some way to check GitHub for how much code in each language is available on it.
140. mindok ◴[] No.42141007{3}[source]
Is that because you can’t judge the quality of the output?
replies(1): >>42150194 #
141. imp0cat ◴[] No.42141053{6}[source]
Let's be clear here, Codeium kinda sucks. Yeah, it's free and it works, somewhat. But I wouldn't trust it much.
142. MarcelOlsz ◴[] No.42141059{5}[source]
Sorry, should have been more clear. Firebase is (or was) a PITA when I started the app I'm working on a few years ago. I have a lot of records in my db that I need to validate after normalizing the data. I used to have an admin page that spit out a bunch of json data with some basic filtering and self-rolled testing that I could verify at a glance.

After a few years off from this project, I refactored it all, and part of that refactoring was building a test suite that I can run. When run, it will rebuild, normalize, and verify all the data in my app (scraped data).

When I deploy, it will also run these tests and then email if something breaks, but skip the seeding portion.

I had plans to do this before but the firebase emulator still had a lot of issues a few years ago, and refactoring this project gave me the freedom to finally build a proper testing environment and make my entire app make full use of my local firebase emulator without issue.

I like giving it my test cases in plain english. It still gets them wrong sometimes but 90% of the time they are good to go.

143. MarcelOlsz ◴[] No.42141069{6}[source]
Try using Cursor with the latest claude-3-5-sonnet-20241022.
replies(2): >>42141154 #>>42148341 #
144. MarcelOlsz ◴[] No.42141096{5}[source]
It's containerized and I have a script that takes care of everything from the ground up :) I've tested this on multiple OS' and friends computers. I'm thankful to old me for writing a readme for current me lol.

>Please tell me this is satire.

No. I started doing TDD. It's fun to think about a piece of functionality, write out some tests, and then slowly make them pass. Removes a lot of cognitive load for me and serves as a micro todo. It's also nice that when you're working and think of something to add, you can just write out a quick test for it and add it to kanban later.

I can't tell you how many times I've worked on projects that are gigantic in complexity and don't do any testing, or use typescript, or both. You're always like 25% paranoid about everything you do and it's the worst.

replies(1): >>42145900 #
145. slt2021 ◴[] No.42141132{4}[source]
I am a proponent of clean and simple architecture that follows standard patterns, because it is easier to maintain.

There should be no clever tricks; all software architecture should be boring and simple, with as few tricks as possible, unless absolutely warranted.

replies(2): >>42142588 #>>42144194 #
146. PaulHoule ◴[] No.42141154{7}[source]
Unfortunately I “think different” and use Windows. I use Microsoft Copilot and would say it is qualitatively similar to Codeium in quality; a real quantitative eval would be a lot of work.
replies(1): >>42141165 #
147. MarcelOlsz ◴[] No.42141165{8}[source]
Cursor (cursor.com) is just a vscode wrapper, should work fine with Windows. If you're already in the AI coding space I seriously urge you to at least give it a go.
replies(1): >>42141385 #
148. mywittyname ◴[] No.42141327{5}[source]
This can be a huge road block. Even if the developer who wrote the code is still around, there's no telling if they will even remember writing that line, or why they did so. But in most projects, that original developer is going to be long gone.

I leave myself notes when I do bug fixes for this exact reason.

149. PaulHoule ◴[] No.42141385{9}[source]
I'll look into it.

I'll add that my experience with the Codeium plugin for IntelliJ is night and day different from the Windsurf editor from Codeium.

The first one "just doesn't work" and struggles to see files that are in my project, the second basically works.

replies(1): >>42141405 #
150. MarcelOlsz ◴[] No.42141405{10}[source]
You can also look into https://www.greptile.com/ to ask codebase questions. There are so many AI coding tools out there now. I've heard good things about https://codebuddy.ca/ as well (for IntelliJ) and https://www.continue.dev/ (also for IntelliJ).

>The first one "just doesn't work"

Haha. You're on a roll.

151. nicce ◴[] No.42141484[source]
> Over time codebases develop these "scars", little patches of weirdness. It's often tempting for the younger, less experienced engineer to declare this as tech debt or cruft or whatever, and that a full re-write is needed. Only to re-learn the lessons those scars taught in the first place.

Do you have an opinion on when this maturity becomes too much?

Let's say you need to add a major feature that would drastically change the existing code base. On top of that, by changing the language, this major feature would be effortless to add.

When is it worth fighting with the scars, and when is it better to just rewrite?

152. c_moscardi ◴[] No.42141483[source]
Came to post this — it’s the same underlying technology, just a lot more compute now.
153. ◴[] No.42141501{4}[source]
154. yawnxyz ◴[] No.42141564{3}[source]
yes! It's definitely talked me out of making some really dumb decisions
155. unregistereddev ◴[] No.42141613{3}[source]
I agree. New codebases are clean because they don't have all the warts of accumulated edge cases.

If the new codebase is messy because the team is moving fast, as the parent describes, that means the dev team is doing sloppy work in order to move fast. That type of speed is very short-lived, because it's a lot harder to add 100 bugfixes to an already-messy codebase.

156. imiric ◴[] No.42141639{3}[source]
> LLM saves me time by being excellent at yak shaving, letting me focus on the things that truly need my attention.

But these tools often don't generate working, let alone bug-free, code. Even for simple things, you still need to review and fix it, or waste time re-prompting them. All this takes time and effort, so I wonder how much time you're actually saving in the long run.

157. archy_ ◴[] No.42141658[source]
I've noticed the same, and I wonder if this is the natural result of public codebases on average being simpler, since small projects will always outnumber bigger ones (at least if you ignore forks with zero new commits).

If high-quality, closed-off codebases were used in training, would we see an improvement in LLM quality for more complex use cases?

158. paulsutter ◴[] No.42141671[source]
True if you’re using AI the wrong way. AI means dramatically less code, most of which is generated.

Creating React pages is the new COBOL.

159. yodsanklai ◴[] No.42141716[source]
I use ChatGPT the most when I need to make a small change in a language I'm not fluent in, but I have a clear understanding of the project and what I'm trying to do. Example: "write a function that does this and this in JavaScript". It's essentially a replacement for Stack Overflow.

I never use it for something that really requires knowledge of the code base, so the quality of the code base doesn't really matter. Also, I don't think it has ever provided me with something I wouldn't have been able to do myself pretty quickly.

160. squillion ◴[] No.42141743[source]
It's funny that his recommendations - organize code in modules, etc. - are not AI-specific; they're what you'd do if you had to hand over your project to an external team, or simply make it maintainable in the long term. So the best strategy for collaborating with AI turns out to be the same as for collaborating with humans.

I completely agree. That's why my stance is to wait and see, and in the meantime get our shit together, as in make our code maintainable by any intelligent being, human or not.

161. hambandit ◴[] No.42141805{3}[source]
I find this perspective both scary and exciting. I'm curious: how do you validate the LLM's output? If you have a way to do this and it's working, then that's amazing. If you don't, how are you gauging what "works best"?
replies(1): >>42150241 #
162. sitzkrieg ◴[] No.42141868[source]
are LLMs even auditable?
163. ssalka ◴[] No.42141892[source]
Yeah, this is a total click-bait article. The claim put forth by the title is not at all supported by the article contents, which basically states "old codebases riddled with tech-debt do not benefit very much from GenAI, while newer cleaner codebases will see more benefit." That is so completely far off from "AI will make your tech debt worse."
164. PeterisP ◴[] No.42141919{4}[source]
I'm definitely assuming that they don't limit their training data to what is open source and crawlable.
165. inSenCite ◴[] No.42142037[source]
On one hand I agree with this conceptually, but on the other hand I've also been able to use AI to rapidly clean up and better structure a bunch of my existing code.

The blind copy-paste has generally been a bad idea though. You still need to read the code it spits out, ask for explanations, and do some iterating.

replies(3): >>42142460 #>>42142747 #>>42143257 #
166. pphysch ◴[] No.42142187[source]
The mainstream layman/MBA view is that "AI/nocode will replace the programmers". Most actual programmers know better, of course.
167. fny ◴[] No.42142239[source]
> They work best where we need them the least.

Au contraire. I hate writing boilerplate. I hate digging through APIs. I hate typing the same damn thing over and over again.

The easy stuff is mind numbing. The hard stuff is fun.

replies(1): >>42142519 #
168. kemiller ◴[] No.42142373[source]
This is true, but I look at it differently. It makes it easier to automate the boring or annoying. Gotta throw up an admin interface? Need to write more unit tests? Need a one-off but complicated SQL query? They tend to excel at these things, and it makes me more likely to do them, while keeping my best attention for the things that really need me.
169. ImaCake ◴[] No.42142408[source]
> In essence, the goal should be to unblock your AI tools as much as possible. One reliable way to do this is to spend time breaking your system down into cohesive and coherent modules, each interacting through an explicit interface.

I find this works because it's much easier to debug a subtle GPT bug at a well-validated interface than the same bug buried in a nested for loop somewhere.
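
A minimal sketch of what I mean by a validated interface (all names invented): bad data coming out of GPT-written caller code fails loudly at the module boundary instead of three loops deep.

  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Order:
      order_id: str
      quantity: int

      def __post_init__(self):
          # Fail at the boundary, with a message that points at the bug.
          if not self.order_id:
              raise ValueError("order_id must be non-empty")
          if self.quantity <= 0:
              raise ValueError(f"quantity must be positive, got {self.quantity}")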

170. MrMcCall ◴[] No.42142422{5}[source]
Hard work is no secret, it's just avoided by slackers at all costs :-)

What I'm really saying is that our software development software is missing a very important dimension.

171. ImaCake ◴[] No.42142460[source]
Yeah, LLMs are pretty good at doing things like moving a lambda function to the right spot or refactoring two overlapping classes into a base class. Often it only saves five minutes, but that adds up over time.
172. skydhash ◴[] No.42142519{3}[source]
You write these once (or zero times) by using a scaffolding template, a generator, or snippets.
replies(1): >>42143658 #
173. honestAbe22 ◴[] No.42142579[source]
This isn't tech debt. This is ignorance debt and laziness debt, from hiring incompetence.
174. skydhash ◴[] No.42142588{5}[source]
Simplicity is hard. And difficulty is what almost everyone using LLMs is trying to avoid. More code breeds complexity.

I read somewhere that 1/6 of the time should be allocated to refactoring (every 6th cycle). I wonder how that should be done with LLMs.

replies(1): >>42143084 #
175. xnx ◴[] No.42142633{4}[source]
LLMs might be a good argument for documenting more of the "why" in code comments.
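
For instance (a made-up example), the difference between a "what" comment and a "why" comment that an LLM - or a new hire - can actually use:

  retries = 3  # set retries to 3   <- "what": restates the code, helps no one

  # "why": the vendor API drops ~1% of requests under load; 3 attempts keeps
  # checkout reliable without tripping their rate limit.
  retries = 3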
176. ◴[] No.42142700{4}[source]
177. hn_throwaway_99 ◴[] No.42142736[source]
There is a pretty well-known essay by Joel Spolsky (which is now 24 years old!) titled "Things You Should Never Do", where he talks about the error of doing a rewrite: https://www.joelonsoftware.com/2000/04/06/things-you-should-... . While I don't necessarily agree with all of his positions here, and given the way most software is architected and deployed these days some of this advice is just obsolete (e.g. relatively little software these days is complete, client-side binaries, where his advice is most relevant), I think he makes some fantastic points. This part is particularly aligned with what you are saying:

> Back to that two page function. Yes, I know, it’s just a simple function to display a window, but it has grown little hairs and stuff on it and nobody knows why. Well, I’ll tell you why: those are bug fixes. One of them fixes that bug that Nancy had when she tried to install the thing on a computer that didn’t have Internet Explorer. Another one fixes that bug that occurs in low memory conditions. Another one fixes that bug that occurred when the file is on a floppy disk and the user yanks out the disk in the middle. That LoadLibrary call is ugly but it makes the code work on old versions of Windows 95.

> Each of these bugs took weeks of real-world usage before they were found. The programmer might have spent a couple of days reproducing the bug in the lab and fixing it. If it’s like a lot of bugs, the fix might be one line of code, or it might even be a couple of characters, but a lot of work and time went into those two characters.

> When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work.

replies(1): >>42145747 #
178. whazor ◴[] No.42142747[source]
Imagine a single file full of complicated logic, where messing with one if statement might cause serious bugs. Here an AI will likely struggle, whereas a human could spend a couple of hours trying to work out the connections.

But if you have a code base with predictable software architectural patterns, the AI will likely recognise and help with all the boilerplate.

Of course there is a lot of middle ground between bad and good.

179. sfpotter ◴[] No.42142808[source]
Haven't read the article, don't need to read the article: this is so, SO, so painfully obvious! If someone needs this spelled out for them they shouldn't be making technical decisions of any kind. Sad that this needs to be said.
180. perrygeo ◴[] No.42142868{3}[source]
I often think of LLMs as a really smart junior developer - full of answers, half correct, with zero wisdom but 100% confidence

I'd like to think most developers know how to say "I don't know, let's do some research", but in reality many probably just take a similar approach to the LLM - feign competence and hack out whatever is needed for today's goal, without worrying about tomorrow.

replies(1): >>42151916 #
181. vitiral ◴[] No.42143006[source]
Ever heard of LISP?

http://jmc.stanford.edu/articles/lisp.html

> This paper concentrates on the development of the basic ideas of LISP... when the programming language was implemented and applied to problems of artificial intelligence.

182. Groxx ◴[] No.42143040{5}[source]
Very much yes.

There's little reason to try to go straight for the final product when you don't know exactly how to get there, and that's frequently the case. Build toys to learn what you need efficiently, toss them, and then build the real thing. Trying to shoot for the final product while also changing direction multiple times along the way tends to create code with multiple conflicting goals subtly encoded in it, and it'll just confuse you and others later.

replies(1): >>42150295 #
183. valenterry ◴[] No.42143084{6}[source]
Exactly that. LLMs generate a lot of simple and dumb code fast. Then you need to refactor it and you can't because LLMs are still very bad at that. They can only refactor locally with a very limited scope, not globally.

Good luck to anyone having to maintain legacy LLM-generated codebases in the future, I won't.

replies(1): >>42146835 #
184. j45 ◴[] No.42143091[source]
Coding with AI could easily be a new form of early tech debt for software and developers. Taking leaps that are too big, or too small, can have unexpected consequences.
185. valenterry ◴[] No.42143100{4}[source]
> The idea that LLMs are going to advance software in any substantial way seems implausible to me

I disagree. They won't do that for existing developers. But they will make it so that tech-savvy people can do much more. And they might even make it so that one-off customization per person becomes feasible.

Imagine you want to sort Hacker News comments by number of characters, inline in your browser. Tell the AI to add this feature and maybe it will work (just for you). That's one way I can see substantial changes happening in the future.

186. eru ◴[] No.42143160{5}[source]
Perhaps something like Cobol? (Shudder.)
187. physicles ◴[] No.42143257[source]
Do you mind getting into specifics about how you've been using AI to restructure your code? What tools are you using, and how large is the code base you're working with?
replies(1): >>42153961 #
188. senectus1 ◴[] No.42143304[source]
Not sure it's tech debt as such; it's the hidden cost of having to maintain AI tech. It's not a static state, and it has an ongoing maintenance cost.
189. acrooks ◴[] No.42143531{3}[source]
Yes this is the same for me. I’ve shifted my programming style so now I just write function signatures and let the AI do the rest for me. It has been a dream and works consistently well.

I’ll also often add hints at the top of the file in the form of comments or sample data to help keep it on the right track.

replies(1): >>42145308 #
190. anon-3988 ◴[] No.42143589[source]
AI code is just more-available Stack Overflow code. You don't use the code handed to you; you learn from it.
191. rr808 ◴[] No.42143641{4}[source]
I struggle to get GitHub Copilot to create any unit tests that provide value. How do you get it to create really useful tests?
replies(2): >>42144840 #>>42144903 #
192. fendy3002 ◴[] No.42143658{4}[source]
And now LLMs write these for me; it's relaxing.
193. tisdadd ◴[] No.42143667{5}[source]
I just tend to use an extension such as https://marketplace.visualstudio.com/items?itemName=Huuums.v... for my boilerplate, as I can customize it along the way for the project and not think hard. I have seen a lot of younger devs not using such a thing (or an already existing CLI), instead copy-pasting and renaming, or writing from scratch every time with slight differences... It is weird to me how many don't look for ways to automate boilerplate, as that has always been my default.
194. LargeWu ◴[] No.42143688[source]
One description of the class of problems LLMs are a good fit for is anything at which you could throw an army of interns. And this seems consistent with that.
195. kerkeslager ◴[] No.42143702[source]
Put another way, sometimes code is complex because it has to be.
196. Izkata ◴[] No.42143768{3}[source]
> Sure, documentation can go stale, but even a slightly inaccurate accounting for the reason would have, at the very least, served as a clear reminder that a reason did indeed exist.

Which is borderline the reason for version control: do a git/svn blame on that line, find the commit it was added in, and see what the commit message was. Bonus points if it links to a case on a system you still use. Sure, the commit message can be useless, but it's at least something you're forced to enter when committing code, rather than external documentation that can be missed and become misleading. Version control can even show you the codebase at the time that change was made, so you can see it in context (which has saved me a few times, showing what something was added for so I could confirm a suspicion).

197. antonvs ◴[] No.42143791[source]
> They work best where we need them the least.

I disagree, but it’s largely a matter of expectations. I don’t expect them to solve hard problems for me. That’s currently still my job. But when I’m writing new code, even for a legacy system, they can save a lot of time in getting the initial coding done, helping write comments, unit tests, and so on.

It’s not doing difficult work, but it saves a lot of toil.

198. wordofx ◴[] No.42143828[source]
I enjoy reading these articles and reading comments from people who clearly have no idea how to use AI or its abilities.
199. nox101 ◴[] No.42143847{3}[source]
Can you give some examples? What LLM? What code? What tests?

As a test I just asked "ChatGPT 4o with canvas" to "Can you write a set of tests to test glBufferData and all of its edge cases?"

glBufferData is a 32-year-old API, so there are clearly plenty of examples for it to have looked at. There are even multiple public tests for it, including the official tests, which are open source and easily scannable. It failed.

It wrote 8 tests; 7 of those tests were wrong, in that they did something wrong intentionally and then asserted no error was raised. It wasn't close to comprehensive. It didn't test that the function actually puts data in the buffer, for example, nor did it check the set of valid enums to see that they work. Nor did it check that the target parameter actually works and affects the correct buffer bound to that target.

This is my experience with LLMs for code so far. I do sometimes get answers quicker from LLMs for tech questions vs searching via Google and reading Stack Overflow. But only sometimes. As a recent example, I was trying to add TypeScript types to some JavaScript and it failed. I went round and round telling it it had failed, but it got stuck in a loop and just kept saying "Oh, sorry. How about this -- repeat of previous code".

replies(2): >>42144893 #>>42145945 #
200. kerkeslager ◴[] No.42144174{5}[source]
In general, when people say this sort of thing, if I dig into what exactly they're doing I discover they're importing half of npm/pypi/etc.

My code doesn't acquire bugs by sitting there in 2024 any more than it did in 2004. On most projects these days I'm using Django + Preact + HTM. Preact and HTM get loaded from static files by my root Django template. My PyPi dependencies are pinned to specific versions, and usually I have <10 (usually it's just Django and Django REST framework, sometimes it's even just Django).

201. dasil003 ◴[] No.42144183{6}[source]
> ...if we ever wanted to switch from Oracle to Sybase...

Yeah like Oracle would ever let that happen

202. lmm ◴[] No.42144194{5}[source]
A pattern is a structured way of working around a language deficiency. Good code does not need patterns or architecture, it expresses the essence of the actual business problem and no more. Such software is also significantly easier to maintain if you measure maintainability against how much functionality the software implements rather than how many lines of code it is. Unfortunately the latter is very common, and there is probably a bright future in using LLMs to edit masses of LLM-copy-pasted code as a poor man's substitute for doing it right.
203. TechDebtDevin ◴[] No.42144205[source]
And well duh
204. heisenbit ◴[] No.42144585[source]
A hard choice: Tune your code to unique customer requirements or keep it generic to please your AI.
205. byyoung3 ◴[] No.42144607[source]
"Companies with relatively young, high-quality codebases benefit the most from generative AI tools" - this is not true

The codebases that use the MOST COMMONLY USED LIBRARIES benefit the most from generative AI tools

replies(1): >>42145337 #
206. torginus ◴[] No.42144797[source]
My experience is that once success comes, the business decides to quickly scale up the company - tons of people are hired, most of them without any experience with the product (or, indeed, without giving a hoot). Rigid management structures are created, inhabited by social climbers. A lot of the original devs leave, etc.

That's the point when a ton of disinterested, inexperienced, less hand-picked people start pushing code in - driven not by the need to build good software, but to close Jira tickets.

This invariably results in stagnating productivity at best, with upper management wondering why the team is often not delivering at the pre-expansion level, let alone the level that would be expected of 3x the headcount.

207. BillyTheKing ◴[] No.42144840{5}[source]
Would recommend trying out Anthropic's Sonnet 3.5 for this one - it usually generates decent unit tests for reasonably sized functions.
208. wruza ◴[] No.42144893{4}[source]
Wait, wait. You ought to write tests for javascript react html form validation boilerplate. Not that.

/s aside, it's what we all experience too. There's a great divide between programming pre-around-2015 and thereafter. LLMs can only do recent programming, which is a product of tons of money getting loaded into the industry and creating jobs that made no sense ten years ago. Basically, the more repetitive boilerplate patterns, configuration options, import blocks, row-obj-dto-obj conversion, and typecheck bullshit you write per day, the more LLMs help. I mean, one could abstract all that away using regular programming, but how would they sell their work for $10^6, and an AI for $10^9, then?

Just yesterday, after reading yet another "oh you must try again" comment, I asked 4o how to stop Puppeteer from dumping errors into the console and exit gracefully when I close the headful browser (all logs and code provided). Right away it slid into nonsense. I always finish my chats with what I think of them, uncut, just in case someone uses these for further learning.

209. xarope ◴[] No.42144895{3}[source]
I hate to say this, but you probably can only achieve x2 using AI on the "easy" parts of your work. Going by the 80/20 rule:

  - 80% of your work is easy, and is accomplished in 20% of the time
  - 20% of your work is hard, and takes 80% of the time
If you believe AI can x2 that easy 80% of your work, you have only reduced that 20% of your time to 10%. Total time drops from 100% to 90%, an overall improvement of about x1.11 - nowhere near x2.
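
Spelled out (this is just Amdahl's law applied to the split above):

  easy_share, hard_share = 0.20, 0.80  # share of *time*, per the 80/20 rule
  ai_speedup = 2.0                     # assume AI doubles speed on the easy part

  new_total = easy_share / ai_speedup + hard_share
  print(f"overall improvement: x{1 / new_total:.2f}")  # -> x1.11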
210. MarcelOlsz ◴[] No.42144903{5}[source]
I use claude-3-5-sonnet-20241022 with a very explicit .cursorrules file with the cursor editor.
replies(1): >>42145845 #
211. teapot7 ◴[] No.42144946[source]
"A product should be owned by a lean team of experts, focused primarily on the architecture of their code rather than the implementation details."

Sheesh! The Lizard People walk among us.

212. Sparkyte ◴[] No.42145085[source]
AI is a tool and nothing more. Give it too much and it will fumble. Humans fumble too, but we can self-correct, whereas AI hallucinates. Crazy nightmare AI dreams.
213. eesmith ◴[] No.42145308{4}[source]
Here's one I wrote the other day which took a long time to get right. I'm curious on how well your AI can do, since I can't imagine it does a good job at it.

  # Given a data set of size `size' >= 0, and a `text` string describing
  # the subset size, return a 2-element tuple containing a text string
  # describing the complement size and the actual size as an integer. The
  # text string can be in one of four forms (after stripping leading and
  # trailing whitespace):
  #
  #  1) the empty string, in which case return ("", 0)
  #  2) a stringified integer, like "123", where 0 <= n <= size, in
  #   which case return (str(size-int(n)), size-int(n))
  #  3) a stringified decimal value like "0.25" where 0 <= x <= 1.0, in
  #   which case compute the complement string as str(1 - x) and
  #   the complement size as size - (int(x * size)). Exponential
  #   notation is not supported, only numbers like "3.0", ".4", and "3.14"
  #  4) a stringified fraction value like "1/3", where 0 <= x <= 1,
  #   in which case compute the complement string and value as #3
  #   but using a fraction instead of a decimal. Note that "1/2" of
  #   51 must return ("1/2", 26), not ("1/2", 25).
  #
  # Otherwise, return ("error", -1)

  def get_complement(text: str, size: int) -> tuple[str, int]:
    ...

For example:

  get_complement("1/2", 100) == ("1/2", 50)
  get_complement("0.6", 100) == ("0.4", 40)
  get_complement("100", 100) == ("0", 0)
  get_complement("0/1", 100) == ("1/1", 100)
Some of the harder test cases I came up with were:

get_complement("0.8158557553804697", 448_525_430): this tests the underlying system uses decimal.Decimal rather than a float, because float64 ends up on a 0.5 boundary and applies round-half-even resulting in a different value than the true decimal calculation, which does not end up with a 0.5. (The value is "365932053.4999999857944710")

get_complement("nan", 100): this is a valid decimal.Decimal but not allowed by the spec.

get_complement("1/0", 100): handle division-by-zero in fractions.Fraction

get_complement("0.", 100): this tests that the string complement is "1." or "1.0" and not "1"

get_complement("0.999999999999999", 100): this tests the complement is "0.000000000000001" and not "1E-15".

get_complement("0.5E0", 100): test that decimal parsing isn't simply done by decimal.Decimal(size) wrapped in an exception handler.

Also, this isn't the full spec. The real code reports parse errors (like recognizing the "1/" is an incomplete fraction) and if the value is out of range it uses the range boundary (so "-0.4" for input is treated as "0.0" and the complement is "1.0"), along with an error flag so the GUI can display the error message appropriately.
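
Here is that decimal.Decimal case from above, demonstrated (the float64 product lands on the .5 boundary, as described):

  from decimal import Decimal

  x, size = "0.8158557553804697", 448_525_430
  print(float(x) * size)    # 365932053.5 -- float64 hits the .5 boundary
  print(Decimal(x) * size)  # 365932053.4999999857944710 -- the true product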

replies(1): >>42145682 #
214. 0xpgm ◴[] No.42145337[source]
True. Also, the LLM will give you the most widely deployed versions encountered in the wild (during training).

That means one might find themselves using deprecated but still supported features.

If LLMs came out during the Python 2/3 schism for example, they'd be generating an ever increasing pile of Python 2 code.

215. acrooks ◴[] No.42145682{5}[source]
I suspect certain domains have higher performance than others. My normal use cases involve API calls, database calls, and data transformation, and AI fairly consistently does what I want. But in that space there are very repeatable patterns.

Also, with your example above, I probably would break the function down into smaller parts, for two reasons: 1) you can more easily unit test the components; 2) generally I find AI performs better on more focused problems.

So I would probably first write a signature like this:

  # input examples = "1/2" "100" "0.6" "0.99999" "0.5E0" "nan"
  def string_ratio_to_decimal(text: str) -> number
Pasting that into Claude, without any other context, produces this result: https://claude.site/artifacts/58f1af0e-fe5b-4e72-89ba-aeebad...
replies(1): >>42146546 #
216. nosianu ◴[] No.42145747{3}[source]
> *"Things You Should Never Do" where he talks about the error of doing a rewrite

As a funny aside, I actually noticed this in a completely different field, serial stories on the web (mostly on RoyalRoad)!

Occasionally an author will attempt a rewrite of a story either because the feedback was very critical, or they did not like where their own story had ended up.

I have yet to see a single example of a truly successful rewrite, where the rewrite was really significantly (or at all) better than the original. Usually the rewrite will not get any better ratings or more readers than the first draft - and for good reasons.

There will be improvements, but they will be on the edges. At the core it still remains the same story with the same problems, and some style changes or some improved dialogue don't change that.

----

By the way, there is an old 2016 HN thread with 106 comments "When to Rewrite from Scratch – Autopsy of Failed Software" -- https://news.ycombinator.com/item?id=11553813

----

A rewrite story I heard a long time ago - one that I think shows rewrites work best when the issues are severe - was from a company that lost all their code (I don't remember the context; it was not data loss). They had spent many years getting to where they were when they lost everything. They thought it would take almost as many years to get there again, but they started anyway. It turned out they were done in only half a year this time, and the result was much better!

I think having to work with and around your old code (or story, in the RoyalRoad example) is a severe limit on how much you can improve. Your thoughts are not free; most of your mental effort will go into reusing the old code.

That is my own experience too: writing the software is not my bottleneck. It's finding out what to write in the first place, and the many, many small agonizing decisions along the way. I now see that meta-knowledge is far more important. For very large projects it may be more difficult, though.

I did this myself once, in the early Internet growth days. The company had its own equivalent of PHP (which was still pretty new at the time) and a business software based on it. I was tasked with refactoring the 1.0 version. I threw the code away after a brief look and rewrote from scratch. I did it because I believed having to consider the existing code would be much slower than writing new.

I have no complaints about the 1.0 version; the first version is always limited by the still-low comprehension of the problem at that point. I think version 2.0 releases might benefit the most from just throwing the 1.x code away and starting fresh, if the understanding of the problem evolved substantially during - and through - the development.

217. ponector ◴[] No.42145845{6}[source]
Can you share your .cursorrules? For me Cursor is not much better than autocomplete, but I'm writing mostly e2e tests.
replies(1): >>42146031 #
218. bobnamob ◴[] No.42145889{6}[source]
Or phrased positively, Big Open Source keeping the working class developer employed
219. sarchertech ◴[] No.42145900{6}[source]
>It's a huge motivator actually to write a piece of code with the reward being the ability to send it to the LLM to create some tests and then seeing a nice stream of green checkmarks.

Yeah that’s not TDD.

replies(1): >>42146001 #
220. Aeolun ◴[] No.42145945{4}[source]
If you asked me to write tests from such a vague definition, I'd also have issues writing them. It'll work a lot better if you tell it what you want it to validate, I think.
221. ◴[] No.42145956[source]
222. MarcelOlsz ◴[] No.42146001{7}[source]
Don't you have a book to get to writing instead of leaving useless comments? Haha.
replies(1): >>42146183 #
223. MarcelOlsz ◴[] No.42146031{7}[source]
You can find a bunch on https://cursor.directory/.
224. baydonFlyer ◴[] No.42146172[source]
Clickbait headline. It is an opinion piece; it may be true (or not), but there are no references or clear justifications.
225. sarchertech ◴[] No.42146183{8}[source]
More importantly I have a French cleat wall to finish, a Christmas present to make for my wife, and a toddler and infant to keep from killing themselves.

But I also have a day job and I can’t even begin to imagine how much extra work someone doing “TDD” by writing a function and then fixing it in place with a whole suite of generated tests would cause me.

I’m fine with TDD. I do it myself fairly often. I also go back in and delete the tests that I used to build it that aren’t actually going to be useful a year from now.

replies(1): >>42146498 #
226. MarcelOlsz ◴[] No.42146498{9}[source]
Like I said above, I like the ability to scaffold tests using english and tweaking from there. I'm still not sure what point you're trying to make.
replies(1): >>42148139 #
227. eesmith ◴[] No.42146546{6}[source]
> I probably would break the function down into smaller parts

Sure. Internally I have multiple functions. Though I don't like unit testing below the public API as it inhibits refactoring and gives false coverage feedback, so all my tests go through the main API.

> Pasting that into Claude, without any other context

The context is the important part. Like the context which says "0.5E0" and "nan" are specifically not supported, and how the calculations need to use decimal arithmetic, not IEEE 754 float64.

Also, the hard part is generating the complement with correct formatting, not parsing float-or-fraction, which is a first-year CS assignment.

> # Handle special values

Python and C accept "Infinity" as an alternative to "Inf". The correct way is to defer to the underlying system, then check whether the returned value is infinite or a NaN. Which is what will happen here anyway, because when those string checks fail, and the check for "/" fails, it will correctly process through float().

Yes, this section isn't needed.
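
That is, roughly (a sketch of the defer-then-check approach; the spec treats inf/nan as errors):

  import math

  def parse_plain_number(text: str) -> float | None:
      # Let float() handle "inf", "Infinity", "nan", etc., then reject them.
      try:
          value = float(text)
      except ValueError:
          return None
      return None if math.isinf(value) or math.isnan(value) else value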

> # Handle empty string

My spec says the empty string is not an error.

> numerator, denominator = text.split("/"); num = float(numerator); den = float(denominator)

This allows "1.2/3.4" and "inf/nan", which were not in the input examples and therefore support for them should be interpreted as accidental scope creep.

They were also not part of the test suite, which means the tests cannot distinguish between these two clearly different implementations:

  num = float(numerator)
  den = float(denominator)
and:

  num = int(numerator)
  den = int(denominator)
Here's a version which follows the same style as the linked-to code, but is easier to understand:

  # (Wrapped in a def, borrowing the signature from upthread, so it runs as-is.)
  def string_ratio_to_decimal(text: str) -> float | None:
    if not isinstance(text, str):
        return None
    
    # Remove whitespace
    text = text.strip()
    
    # Handle empty string
    if not text:
        return None

    # Handle ratio format (e.g., "1/2")
    if "/" in text:
        try:
            numerator, denominator = text.split("/")
            num = int(numerator)
            den = int(denominator)
            if den == 0:
                return float("inf") if num > 0 else float("-inf") if num < 0 else float("nan")
            return num / den
        except ValueError:
            return None

    # Handle regular numbers (inf, nan, scientific notation, etc.)
    try:
        return float(text)
    except ValueError:
        return None
It still doesn't come anywhere near handling the actual problem spec I gave.
228. dartos ◴[] No.42146828{5}[source]
Assuming both mediums are reasonably well represented in the dataset, which brings me back to my comment
229. dartos ◴[] No.42146835{7}[source]
I’ve noticed LLMs quickly turn to pulling in dependencies and making complicated code
replies(1): >>42151443 #
230. bunderbunder ◴[] No.42147304{6}[source]
IME that kind of thing is more likely to make it 300% harder.

This idea of easy, worry-free database replatforming strikes me as kind of a shibboleth for identifying people who've never done it before. In reality, every DBMS has subtle differences in semantics and query-optimization behavior, which means every touch point needs close attention to make sure you understand how the behavior in that part of the system changes (assume it will change) and whether that change is acceptable. Thinking abstraction layers can eliminate the need for close attention during a DBMS port is the software engineering equivalent of thinking adaptive cruise control means you can play Slay the Spire while driving to the office.

replies(1): >>42153982 #
231. Ntrails ◴[] No.42147507[source]
Guy I know n days ago:

> I let AI write the parsing and hoooo boy do I regret it.

He's kindly fixed the server 500's now though xD

232. sarchertech ◴[] No.42148139{10}[source]
Your original point was that it was great to “write some code then send it to the LLM to create tests.”

That’s not test driven development.

replies(1): >>42151036 #
233. lubujackson ◴[] No.42148341{7}[source]
Seconding Cursor. I have a friend who used Copilot 6 months ago and found it vaguely helpful... but I turned him on to Cursor and it's a whole new ballgame.

It's a cross between an actually useful autocomplete, a personalized Stack Overflow, and error diagnosis (just paste an error message into the chat). I know I am just scratching the surface of its usefulness, and I pretty much never make changes across multiple files, but I definitely see firm net positives at this point.

234. glouwbug ◴[] No.42150194{4}[source]
Exactly
235. glouwbug ◴[] No.42150241{4}[source]
I can only gauge what works best if I can already do what I am asking it to do, and that ability comes from years of studying and trial-and-error experience without LLMs. I have no way of verifying what's a hallucination unless I am an expert.
236. RangerScience ◴[] No.42150295{6}[source]
Came across the idea of a "probe" (a while ago) as a name for this.
237. MarcelOlsz ◴[] No.42151036{11}[source]
Sure if you want to take the absolute least charitable interpretation of what I said lol.
replies(1): >>42161327 #
238. hunterbrooks ◴[] No.42151146[source]
LLMs get relatively better at read-heavy operations (ex: code review) than write-heavy operations (ex: code generation) as codebases become less idiomatic.

I'm a cofounder at www.ellipsis.dev - we tried to build code generation for a LONG time before we realized that AI code review is way more doable with SOTA models.

239. skydhash ◴[] No.42151443{8}[source]
I'm sure they do great for scripts and other stuff. But the few times I tried, they always go for the most complicated solutions. I prefer my scripts to grow organically. Why automate something if I don't even know how it's done in the first place? (Unless someone else is maintaining the solution)
240. namaria ◴[] No.42151916{4}[source]
Nah LLMs are nothing like really smart junior developers.

Really smart junior developers actually have a shot at learning better and moving on from this stage.

241. inSenCite ◴[] No.42153961{3}[source]
Using a combination of both chatgpt and claude/sonnet. Codebase is not very complex or cutting edge (e.g., data pipeline to maintain a local db, and an analytics system). These are not enterprise or even public facing applications.

For additional context I have not been a software engineer professionally for over a decade but still am in the engineering field.

Usually I will feed in a few functions (or just one), sometimes a whole module if it's small enough, and prompt it for general performance and maintainability improvements. I just kind of iterate from there. I also restart chats often.

242. suzzer99 ◴[] No.42153982{7}[source]
It was even more ludicrous in this case because a ton of the app logic was inside stored procedures.
243. sarchertech ◴[] No.42161327{12}[source]
“write a piece of code with the reward being the ability to send it to the LLM to create some tests and then seeing a nice stream of green checkmarks”

You write code, then you send the code to the LLM to create tests for you.

How can this possibly be interpreted to mean the reverse?

That you write tests first by asking the LLM in English to help you without “sending the code” you wrote because you haven’t written it yet. Then you use those tests to help you write the code.

Now if you misspoke then my comment isn’t relevant to your situation, but don’t pretend that I somehow interpreted what you said uncharitably. There’s no other way to interpret it.

replies(1): >>42165498 #
244. MarcelOlsz ◴[] No.42165498{13}[source]
Ok you win.
replies(1): >>42165933 #
245. sarchertech ◴[] No.42165933{14}[source]
Thanks
replies(2): >>42166456 #>>42166489 #
246. ◴[] No.42166456{15}[source]
247. ◴[] No.42166489{15}[source]
248. dang ◴[] No.42167974{5}[source]
"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

https://news.ycombinator.com/newsguidelines.html