Claude Code now supports hooks (docs.anthropic.com)
381 points | ramoz | 17 comments
1. ainiriand ◴[] No.44432396[source]
I tried to make an app in Claude Code, the kind of thing all the fanfare says it can do, and it failed. It was obvious it would fail: I wanted something that I think had not been done before, using the YouTube API. But it failed nonetheless.

I am tired of pretending that this can actually do any meaningful work beyond being a debugging companion or a slightly enhanced Google/Stack Overflow.

replies(9): >>44432552 #>>44432585 #>>44432591 #>>44432791 #>>44432881 #>>44433726 #>>44433757 #>>44433869 #>>44434109 #
2. bognition ◴[] No.44432552[source]
Interesting. How long ago did you do this? How long did you spend on it?

I was skeptical about Claude Code, and then I spent a week really learning how to use it. Within the week I had built a FastAPI backend that supported user creation, password reset, and email confirmation, plus a front end and OAuth integration with a few systems.

It definitely took me some time to learn how to make it work, but I’m astounded at how much work I got done and for so little typing.
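
As a rough illustration of the kind of endpoint being described, here is a minimal FastAPI user-creation sketch. The in-memory users dict, the UserCreate model, and the SHA-256 hashing are simplifying assumptions for the sketch, not the commenter's actual code; a real version would use a database and a proper password hasher.

    # Minimal sketch of a user-creation endpoint of the kind described above.
    # The in-memory dict stands in for a database; SHA-256 stands in for a
    # real password hasher such as bcrypt or argon2.
    import hashlib

    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI()
    users: dict[str, dict] = {}  # email -> user record (placeholder store)

    class UserCreate(BaseModel):
        email: str
        password: str

    @app.post("/users", status_code=201)
    def create_user(payload: UserCreate):
        if payload.email in users:
            raise HTTPException(status_code=409, detail="user already exists")
        users[payload.email] = {
            "password_hash": hashlib.sha256(payload.password.encode()).hexdigest(),
            "confirmed": False,  # flipped by the email-confirmation flow
        }
        return {"email": payload.email, "confirmed": False}

Password reset, email confirmation, and OAuth would be additional routes layered on the same pattern.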

replies(2): >>44432594 #>>44432813 #
3. solumunus ◴[] No.44432585[source]
If you can’t build something without Claude you will probably fail to build it with Claude.
4. lukan ◴[] No.44432591[source]
"I wanted something that I think it was not done before"

But you do know that this is exactly what LLMs aren't good at.

So your conclusion is somewhat off, because plenty of programming work consists of things that have been done before and just need tweaking.

I mean, I am also not hooked yet and just occasionally use ChatGPT/Claude for concrete stuff, but I do find it useful, and I do see where it could get really useful for me (once it really knows my codebase and the libraries used and doesn't jump between incompatible API versions).

replies(1): >>44433591 #
5. ainiriand ◴[] No.44432594[source]
Yesterday!
6. samrus ◴[] No.44432791[source]
It's definitely not there yet. You have to babysit it a lot. It's not autonomous.

The utility I find is that it helps _me_ do the real engineering work, the planning and solution architecting, and then it can bang out code once it has rock-solid instructions (in natural language, but honestly one level above pseudocode). Then I have to review it with absolutely zero faith in its ability to do things. Under those conditions it can work well.

But it's not where these guys are claiming it is.

replies(2): >>44433697 #>>44433754 #
7. dakiol ◴[] No.44432813[source]
How do you know the code is solid? If your bar is "if it runs, then it is good," then alright; otherwise you (or someone else) need to review that code. So yeah, LLMs are nice, but I don't think we can just go and deploy whatever they throw at us.
replies(1): >>44432921 #
8. ramoz ◴[] No.44432881[source]
If you do not know the software design, Claude Code will fail. If you know the software design, you can guide it toward success.
replies(1): >>44433494 #
9. timschmidt ◴[] No.44432921{3}[source]
Do you allow interns to push to prod? Because I don't. The LLM is treated no differently. It's an extraordinarily fast, extraordinarily well-read intern. You can ask it to do anything; it might succeed, but anything it produces should be reviewed fully. I already interact with interns and open source project contributors in a similar fashion, so the LLM plugs right in.
10. causal ◴[] No.44433494[source]
I see this a lot on HN: approaching AI hoping it will fail, then declaring it useless when minimal effort produces bad results.
11. ainiriand ◴[] No.44433591[source]
Yes, that is a very accurate assessment. I use Claude almost all the time for other tasks, but when the moment came to really put it to the test, I was not able to produce anything even remotely usable.

My request involved a local web application, written in Rust, that acted as a server for other clients on the same network.

I wanted to use WebSockets. It never worked, and I was never able to nudge it in any meaningful direction; it started making circular edits on the codebase and just never got there.

12. ramoz ◴[] No.44433697[source]
But you know just how powerful that middle bit is.

An engineer will truly 10x with these, maybe more. So will the unskilled, but they will hit diminishing returns with more usage.

13. svara ◴[] No.44433726[source]
I build things that haven't been done before with Cursor all the time. You have to break it down into simple building blocks rather than specifying everything up front.

If you do it right, this actually forces good design.

14. svara ◴[] No.44433754[source]
Exactly. My hunch is that those who try to find reasons why AI coding is bad are either

* afraid that the demands on them in their job will increase
* actually like and are attached to the act of writing out code.

Frankly, I can sometimes empathize with both, but their conclusions are still wrong.

15. stpedgwdgfhgdd ◴[] No.44433757[source]
Use TDD so CC can converge. I see people tuning a prompt; that is fun, not software engineering.
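
A minimal sketch of what that workflow might look like, assuming a Python project and pytest: the human writes the failing tests first, then lets Claude Code iterate on the implementation until the suite passes. The slugify function, its spec, and the file name are hypothetical, purely for illustration.

    # test_slugify.py -- the human writes the spec first; the agent's job
    # is to make it pass. Run with: pytest test_slugify.py
    def slugify(text: str) -> str:
        # Deliberately unimplemented stub for the agent to fill in;
        # the tests below define what "done" means.
        raise NotImplementedError

    def test_lowercases_and_replaces_spaces():
        assert slugify("Hello World") == "hello-world"

    def test_drops_punctuation_and_collapses_dashes():
        assert slugify("Hooks, now in Claude Code!") == "hooks-now-in-claude-code"

    def test_empty_input_gives_empty_slug():
        assert slugify("") == ""

Running pytest after each change gives the agent an unambiguous pass/fail signal to converge on, rather than a human eyeballing prompt output.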
16. ryandvm ◴[] No.44433869[source]
I have a side project Android app. To test Claude Code, I loaded it up in the repo and asked it to add a subscription billing capability to the app. Not rocket science, but it probably would have taken me a day or two to figure out the mechanics of Google Play subscription billing and implement it in code.

Claude Code did it in 30 seconds and it works flawlessly.

I am so confused how people are not seeing this as a valuable tool. Like, are you asking it to solve P versus NP or something?

If you need to do something that's been done a million times, but you don't have experience with it, LLMs are an excellent way to get running quickly.

17. ◴[] No.44434109[source]