432 points tosh | 39 comments
1. vander_elst ◴[] No.39998806[source]
With all these AI tools requiring a prompt, do they really simplify/speed things up? From the example: I have to write "add a name param to the 'greeting' function, add all types", then wait for the result to be generated, read it carefully to be sure that it does what I want, and probably iterate if the result does not match my expectation. This seems more time-consuming to me than doing the work myself. Does anyone have examples where prompting and double checking is faster than doing it on your own? Is it faster when exploring new solutions and "unknown territory", and in that case, are the answers accurate (from what I tried so far they were far off)? In that case, how do you compare it with "regular search" via Google/Bing/...? Sorry for the silly question, but I'm genuinely trying to understand.
replies(13): >>39998888 #>>39998953 #>>39998965 #>>39999501 #>>39999580 #>>39999752 #>>40000023 #>>40000260 #>>40000635 #>>40001009 #>>40001669 #>>40001763 #>>40002076 #
2. djleni ◴[] No.39998888[source]
Can’t speak for everyone else but I almost exclusively use it for what you mentioned:

> when exploring new solutions and "unknown territory"

If it’s something I have no idea how to do I might describe the problem and just look at the code it spits out; not even copy pasting but just reading for a basic idea.

> how do you compare it with "regular search" via Google/Bing

Much worse if there’s a blog post or example in documentation that’s exactly what I’m looking for, but, if it’s something novel, much better.

An example:

Recently asked how I could convert pressure and temperature data to “skew T” coordinates for a meteorological plot. Not something easy to Google, and the answers the AI gave were slightly wrong, but it gave me a foot in the door.

replies(1): >>39998943 #
3. jamil7 ◴[] No.39998943[source]
This is also where I've kind of ended up with it. When I was at one point using it every day, I noticed I was opening it less and less, maybe a few times a week, and I recently cancelled my subscription. It's still pretty useful for exploratory stuff, boilerplate, and sometimes it can give you a hint on debugging. Everything else I can write faster and more correctly myself.
4. rolisz ◴[] No.39998953[source]
Well, regular search means switching to a different application, with an implied context switch. For many things it definitely takes longer than just using GitHub Copilot.
5. pocketarc ◴[] No.39998965[source]
Personally the use for me has been in writing boilerplate. As an example, one of my ongoing goals has been to port all the view code of a project to another framework, following its idioms. Using an LLM, I can process a file in a couple of seconds, and checking that everything is right takes just a few seconds as well. It’d take me hours to go through every file manually, and it’d be prone to human error. It’s not technically challenging stuff, just tedious and mind-numbing, which is perfect for an LLM.

I do agree though, these basic examples do seem quite pointless, if you already know what you’re doing. It’s just as pointless as telling another developer to “add a name param to ‘greeting’ function, add all types”, which you’d then have to review.

I think it comes down to your level of experience though. If you have years and years of experience and have honed your search skills and are perfectly comfortable, then I suspect there isn’t a lot that an LLM is going to do when it comes to writing chunks of code. That’s how I’ve felt about all these “write a chunk of code” tools.

In my case, apart from automating the kind of repetitive, mindless work I mentioned, it’s just been a glorified autocomplete. It works -really- well for that, especially with comments. Oftentimes I find myself adding a little comment that explains what I’m about to do, and then boop, I’ve got the next few lines autocompleted with no surprises.

I had to work without an internet connection a few days ago and it really, really hit me how much I’ve come to use that autocomplete - I barely ever type anything to completion anymore, it was jarring, having to type everything by hand. I didn’t realise how lazy my typing had become.

replies(8): >>39999242 #>>39999317 #>>39999370 #>>39999411 #>>39999436 #>>40000278 #>>40000389 #>>40001531 #
6. nox101 ◴[] No.39999242[source]
> Using an LLM, I can process a file in a couple of seconds, and checking that everything is right takes just a few seconds as well. It’d take me hours to go through every file manually

Can you explain more how "checking that everything is right takes just a few seconds as well"? A code review can't happen in "just a few seconds", so maybe I don't understand what the process you're describing really is.

replies(1): >>39999382 #
7. saurik ◴[] No.39999317[source]
> Personally the use for me has been in writing boilerplate.

We live in a world with everything from macro systems and code generation to higher-order functions and types... if you find yourself writing the same "boilerplate" enough times that you find it annoying, just automate it, the same way you can automate anything else we do using software. I have found myself writing very little "boilerplate" in my decades of software development, as I'd rather at the extreme (and it almost never comes to this) throw together a custom compiler than litter my code with a bunch of hopefully-the-same-every-time difficult-to-adjust-later "boilerplate".
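The higher-order-function approach the commenter alludes to can be sketched in a few lines of Python. This is an illustrative example (the retry scenario and all names are hypothetical, not from the thread): the boilerplate is written once in a decorator, rather than repeated at every call site.

```python
from functools import wraps

def with_retries(attempts):
    """Higher-order function: wraps any callable in retry boilerplate,
    so the try/except loop is written once instead of at every call site."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            last_err = None
            for _ in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception as err:
                    last_err = err
            raise last_err
        return wrapper
    return decorator

@with_retries(attempts=3)
def flaky_fetch(url):
    ...  # any operation that might fail transiently
```

The same idea scales up through snippets, templates, and code generation; the point is that the repetition lives in one place and stays adjustable later.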

replies(1): >>39999541 #
8. vander_elst ◴[] No.39999370[source]
Thanks for the reply! I'll try it for commenting.
9. pocketarc ◴[] No.39999382{3}[source]
In the example I gave, it was just porting the same view code from one framework's way of writing view code to another. It's a one-off task involving hundreds of different views.

There's zero technical challenge, almost no logic, super tedious for a human to do, not quite automatable since there could be any kind of code in those views, and it's very very unlikely that the LLM gets it wrong. I give it a quick look over, it looks right, the tests pass, it's not really a big deal.

And one nice thing I did as well was ask it to "move all logic to the top of the file", which makes it -very- easy to clean up all the "quick fix" cruft that's built up over years that needs to be cleaned up or refactored out.

In those cases the file might indeed need more time dedicated to it, but it would've needed it either way.

10. vander_elst ◴[] No.39999411[source]
I don't have the impression I'm writing too much boilerplate, but I am curious about this as I have heard it multiple times: are there more examples of boilerplate that an LLM is better/faster at generating than a couple of copy/pastes? If it's more than a couple of copy/pastes and it's time for a rewrite, do you leverage AI for this? How do you usually introduce the abstraction?
replies(1): >>39999502 #
11. mcluck ◴[] No.39999436[source]
Call me a caveman but the lack of an option to use AI tools offline is a massive downside to me. I am connected to the internet most of the time but I take comfort in knowing that, for most of my work, I could lose my connection and not even notice
replies(2): >>39999843 #>>40001497 #
12. ithkuil ◴[] No.39999501[source]
For me a useful coding assistant would be one that looks at what I'm _doing_ and helps me complete the boring parts of the task.

The current wave of coding assistants target junior programmers who don't know how to even start approaching a task. LLMs are quite good at spitting out code that will create a widget or instantiate a client for a given API, figuring out all the parameters and all the incantations that you'd otherwise need to copy paste from a documentation. In a way they are documentation "search and digest" tools.

While that's also useful for senior developers when they need to work outside of their particular focus area, it's not that useful to help you work on a mature codebase where you have your own abstractions and all sorts of custom things that have good reasons to be there but are project specific.

Sure, we could eventually have LLMs that can be fine tuned to your specific projects, company or personal style.

But there is also another area where we can use intelligent assistants: editors.

Right now editors offer powerful tools to move around and replace text, often in ways that respects the syntax of the language. But it's cumbersome to use and learn, relying on key bindings or complicated "refactoring" commands.

I wish there was a way for me to have a smarter editor. Something that understands the syntax and a bit of the semantics of the code, but also the general intent of the local change I'm working on and the wider context, so it can help me apply the right edits.

For example, right now I'm factoring out a part of a larger function into its own function so it can be called independently.

I know there are editor features that predate AI that can do this work, but for various reasons I can't use them. For example, you may have started to do it manually because it seemed simple, and then you realize you have to factor out 5 parameters and it becomes a boring exercise of copy and paste. Another example is that the function-extraction refactoring tool of your IDE just can't handle your case. Say you have func A(a Foo) { b := a.GetBar(); Baz(b.X, b.Y, c, d) } and you want to extract func _A(b Bar) { Baz(b.X, ...) } and have A call that. In some simple cases the IDE can do that; in others you need to do it manually.

I want an editor extension that can help me with the boring parts of shuffling parameters around, moving them into structures, etc., all the while leaving me in control of the shape of the code. I don't want to have to remember the advanced editor commands; instead I'd augment my actions with some natural-language comments (written or even spoken!).

13. pocketarc ◴[] No.39999502{3}[source]
One example of boilerplate that I've been automating is when you're creating model code for your ORM.

I paste the table definition into a comment, and let the LLM generate the model (if the ORM doesn't automate it), the list of validation rules, custom type casts, whatever specifics your project has. None of it is new or technically challenging, it's just autocompleting stuff I was going to write anyway.

It's not that you're writing "too much" boilerplate; this is a tiny part of my work as well. This is just the one part where I've actually found an LLM useful. Any time I feel like "yeah this doesn't require thought, just needs doing", I chuck it over to an LLM to do.
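The paste-the-DDL pattern the commenter describes can be sketched as follows. This is an illustrative example, not their actual code: the table and fields are hypothetical, and a plain dataclass stands in for whatever model class your ORM expects.

```python
from dataclasses import dataclass
from typing import Optional

# CREATE TABLE users (
#     id         BIGINT PRIMARY KEY,
#     email      VARCHAR(255) NOT NULL,
#     name       VARCHAR(100),
#     created_at TIMESTAMP NOT NULL
# );
#
# Pasting the DDL above as a comment gives the assistant everything it
# needs to autocomplete the model below: field names, types, nullability.

@dataclass
class User:
    id: int
    email: str
    name: Optional[str]  # VARCHAR(100) is nullable -> Optional
    created_at: str      # would map to datetime in a real project
```

None of this requires thought, which is the point: the DDL already contains every decision, and the LLM just transcribes it into the target idiom.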

replies(1): >>40001003 #
14. rolisz ◴[] No.39999541{3}[source]
I'd say that using LLMs to write boilerplate falls under "automation"
replies(3): >>39999919 #>>39999975 #>>40000082 #
15. liampulles ◴[] No.39999580[source]
Generally, I agree. I have found it useful for writing SQL, mapping structs, converting from JSON to CSV, etc. i.e. repetitive stuff.
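The JSON-to-CSV conversion mentioned above is a good example of the repetitive glue code involved. A minimal sketch with the standard library (the function name and input shape are illustrative, assuming a JSON array of flat objects):

```python
import csv
import io
import json

def json_to_csv(json_text):
    """Convert a JSON array of flat objects into CSV text."""
    rows = json.loads(json_text)
    if not rows:
        return ""
    out = io.StringIO()
    # Use the first object's keys as the header row.
    writer = csv.DictWriter(out, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()
```

Nothing here is hard; it is exactly the kind of already-solved-a-hundred-times task that is quicker to prompt for than to retype.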
16. tossandthrow ◴[] No.39999752[source]
I use it as a glorified search engine and to read through bad documentation (ahem, AWS). But this only works for well-documented solutions.

Core programming hasn't really changed over the past years, with good reason: you need. to. understand. what you do. That is the bottleneck, not writing it.

17. thomashop ◴[] No.39999843{3}[source]
That's just not the reality anymore. You can run a decent open-source coding model on local hardware. It just needs a bit of work, and it's not quite as seamless.
18. Liskni_si ◴[] No.39999919{4}[source]
Yes, except it's "bad automation", because as opposed to the automation referred to by GP, boilerplate written by an LLM (or an intern or whomever) is extra code that costs a lot of time to be maintained.
replies(2): >>40000752 #>>40001205 #
19. rsynnott ◴[] No.39999975{4}[source]
But, perhaps uniquely amongst all the systems for avoiding boilerplate since Lisp macros were introduced in the 1950s, it will sometimes make stuff up. I don't buy that "a worse way to write boilerplate" is going to revolutionise programming.
20. adhamsalama ◴[] No.40000023[source]
Exactly. It's almost useless to me.
21. ◴[] No.40000082{4}[source]
22. mikrotikker ◴[] No.40000260[source]
Yea that's where I've landed. Telling it what to do is time consuming.

Telling it what I want to do in broader terms and asking for code examples is a lot better, especially for something I don't know how to do.

Otherwise, the autocomplete/suggestions in the editor are great for the minutiae, tedious crap, and utility functions. It probably saves me about 20% of my typing, which is great on hands that have been typing for 20-odd years.

It's also good for finding tools and libraries (when it doesn't hallucinate) since https://libs.garden disappeared inexplicably (dunno what to do on Friday nights now that I can't browse through that wonderful site till 2am)

23. pistacchioso ◴[] No.40000278[source]
Most discussions about AI applied to coding end up having someone who states that it's just not worth it (at least for the moment), and someone else who then chimes in to say that they mostly use it for "boilerplate" code.

I have trouble understanding the "boilerplate" thing, because avoiding writing boilerplate

1) was already a solved "problem" long before AI

2) may not really be a "problem" at all

The first point:

* If you find yourself writing the same piece of code over and over again in the same codebase, it's an indication that you should abstract it away as a function / class / library.

* IDEs have had snippets / code completion for a long time to save you from writing the same pieces of code.

* Large pieces of recycled functionality are generally abstracted away in libraries or frameworks.

* Things like "writing similar static websites a million times" are the reason solutions like WordPress exist: to take away the boilerplate part of writing websites. The same applies to other solutions / technologies / services that make "avoiding writing boilerplate code" their core business.

* The only type of real boilerplate that comes to mind is things like "start a new React application", but that is something you do once per project, and it's the reason bootstrappers exist: you only really have to type "npx create-react-app my-app" once and the boilerplate part is taken care of.

The second point: Some mundane refactoring / translation of pieces of code from one technology to another can actually be automated by AI (I think that's what you're talking about here, but how often does one really do such tasks?), but... do you really want to? Automate it, I mean?

I mean, yes, "let AI do the boring stuff so that I can concentrate on the most interesting parts" makes sense, but it's not something I want to do. Maybe it's because I'm aging, but I don't have it in me to concentrate on demanding, difficult, tiring tasks 8 hours straight a day. It's not something that I can do, and it's also something that I don't want to do.

I much prefer alternating hard stuff that requires 100% of my attention with lighter tasks that I can do while listening to a podcast, letting off steam to rest my brain before going back to a harder task. Honestly, I don't think anyone is supposed to be concentrating on demanding stuff all day long, all week long. That's a recipe for burnout.

replies(1): >>40007110 #
24. dwighttk ◴[] No.40000389[source]
>checking that everything is right takes just a few seconds as well. It’d take me hours to go through every file manually

How can you check it in a few seconds if it'd take you hours to change it manually?

25. swah ◴[] No.40000635[source]
I feel only a bit bad about deploying a billion-dollar model to ask "how to rename a git branch" every other week. It's the easiest way (https://github.com/tbckr/sgpt) compared to reading the manual, but reading the manual is the right way.
replies(1): >>40000651 #
26. nomoreipg ◴[] No.40000651[source]
Not sure if you're talking about ChatGPT or Google.
27. kolme ◴[] No.40000752{5}[source]
And might be wrong.
28. ssousa666 ◴[] No.40001003{4}[source]
I've found this very useful as well. My typical workflow (server-side Kotlin + Spring) has been:

- create migration files locally, run the statements against a containerized local Postgres instance

- use a custom data extractor script in IntelliJ's data tool to generate R2DBC DAO files with a commented-out CSV table containing column_name, data_type, kotlin_type, is_nullable as headers

- let the AI assistant handle the rest

29. dagw ◴[] No.40001009[source]
> Does anyone have examples where prompting and double checking is faster than doing it on your own?

I find it is faster in lots of cases where the solution is 'simple' but long and a bit fiddly. As a concrete example from earlier today, I needed a function that took a polygon and returned a list of its internal angles. Could I write it myself? Sure. Did Copilot generate the code (and unit tests) for me in a fraction of the time it would have taken me? Absolutely.
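The function in question is short but easy to get subtly wrong, which is what makes it a good prompt target. A sketch of one way to write it (not the commenter's actual code; this simple version assumes a convex polygon, since reflex angles would need an orientation check):

```python
import math

def interior_angles(points):
    """Interior angles, in degrees, of a convex polygon given as a
    list of (x, y) vertices in order."""
    n = len(points)
    angles = []
    for i in range(n):
        ax, ay = points[i - 1]        # previous vertex
        bx, by = points[i]            # current vertex
        cx, cy = points[(i + 1) % n]  # next vertex
        # Edge vectors pointing away from the current vertex.
        v1 = (ax - bx, ay - by)
        v2 = (cx - bx, cy - by)
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        cross = v1[0] * v2[1] - v1[1] * v2[0]
        # Unsigned angle between the two edges, in [0, 180].
        angles.append(math.degrees(math.atan2(abs(cross), dot)))
    return angles
```

The fiddly parts (wrapping indices, getting the vector directions right, degrees vs radians) are exactly where a generated first draft plus a quick sanity check saves time.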

replies(1): >>40001156 #
30. navane ◴[] No.40001156[source]
Sorry, I'm not in your domain at all, but shouldn't that be a library function? Properties of polygons seem pretty universal to me. Will AI replace carefully curated libraries with repeated boilerplate, thus reducing the reusability of human effort?
replies(2): >>40001180 #>>40001211 #
31. SkyBelow ◴[] No.40001180{3}[source]
If there is a good library for it within the domain, ideally at some point the AI will suggest it. Can't wait until the AI writes its own library that it will reference in future answers.
32. navane ◴[] No.40001205{5}[source]
In many ways it's more akin to outsourcing than automating.
33. dagw ◴[] No.40001211{3}[source]
> shouldn't that be a library function

It's a balance. Sometimes it's better to just write a 10-line function and get on with your work, rather than dragging a huge extra dependency into your project.

34. wsintra2022 ◴[] No.40001497{3}[source]
Try ollama. You can run models locally.
35. latexr ◴[] No.40001531[source]
> It’d take me hours to go through every file manually, and it’d be prone to human error.

The LLM output is also extremely prone to error, so it’s not like the second part of your sentence is a valid argument.

36. anotherpaulg ◴[] No.40001669[source]
Thanks for checking out aider.

That demo GIF is just showing a toy example. To see what it's like to work with aider on more complex changes you can check out the examples page [0].

The demo GIF was just intended to convey the general workflow that aider provides: you ask for some changes and aider shares your existing code base with the LLM, collects back the suggested code edits, applies them to your code and git commits with a sensible commit message.

This workflow is generally a big improvement over manually cutting and pasting bits of code back and forth between the ChatGPT UI and your IDE.

Beyond just sending the code that needs to be edited, aider also sends GPT a "repository map" [1] that gives it the overall context of your codebase. This makes aider more effective when working in larger code bases.

[0] https://aider.chat/examples/

[1] https://aider.chat/docs/repomap.html

37. wongarsu ◴[] No.40001763[source]
One example where I successfully used an AI tool (plain ChatGPT) went a bit like this:

Me: Can you give me code for a simple image viewer in python? It should be able to open images via a file open dialog as well as show the previous and next image in the folder

GPT: [code doing that with tkinter]

Me: That code has a bug because the path handling is wrong on Windows

GPT: [tries to convince me that the code isn't broken, fixes it regardless]

Me: Can you add keyboard shortcuts for the previous and next buttons

GPT: [adds keyboard shortcuts]

After that I did all development the old-fashioned way, but that alone saved me a good chunk of time. Since it was just internal tooling for myself, code quality didn't matter, and I wasn't too upset about the questionable error-handling choices.

38. ◴[] No.40002076[source]
39. fragmede ◴[] No.40007110{3}[source]
Because we're all working at different companies, on different codebases, in different languages, doing different things, we're all talking abstractly about something we think is the same when really it isn't. It's obvious to every programmer to make a library if you're copying and pasting code multiple times, but the overhead of that means you don't do it if you only do it once or twice. The problem, in humans as with LLMs, is the context window. After running create-react-app or whatever, there are a number of repetitive steps to do, but because that doesn't happen often enough to be worth fully automating, you just do them manually. LLMs let you handle that level of boilerplate without the overhead of manually configuring snippets in the IDE.