    549 points thecr0w | 11 comments
    thuttinger ◴[] No.46184466[source]
    Claude/LLMs in general are still pretty bad at the intricate details of layouts and visual things. There are a lot of problems that are easy for a junior web dev to get right but impossible for an LLM. On the other hand, I was able to write a C program that added gamma color profile support to Linux compositors that don't support it (in my case Hyprland) within a few minutes! A seemingly hard task, for me, which would have taken me at least a day or more if I hadn't let Claude write the code. With one prompt Claude generated C code that compiled on the first try and that:

    - Read an .icc file from disk

    - parsed the file and extracted the VCGT (video card gamma table)

    - wrote the VCGT to the video card for a specified display via amdgpu driver APIs

    The only thing I had to fix was the ICC parsing, where it would parse header strings in the wrong byte-order (they are big-endian).
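
    A minimal sketch of what that parsing step can look like: the standard ICC layout is a 128-byte header followed by a big-endian tag count and a table of 12-byte entries, and the video card gamma table lives under the 'vcgt' signature. This is an illustrative reconstruction, not the code Claude produced; it only locates the tag and reads its big-endian fields, and does not touch the amdgpu side.

      /* Locate the 'vcgt' tag inside an ICC profile (illustrative sketch).
       * Assumes the standard ICC layout: 128-byte header, a 4-byte tag count,
       * then 12-byte tag-table entries (signature, offset, size), all big-endian. */
      #include <stdio.h>
      #include <stdint.h>
      #include <stdlib.h>
      #include <string.h>

      /* ICC files store integers big-endian; convert explicitly. */
      static uint32_t be32(const uint8_t *p) {
          return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
                 ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
      }

      int main(int argc, char **argv) {
          if (argc < 2) { fprintf(stderr, "usage: %s profile.icc\n", argv[0]); return 1; }

          FILE *f = fopen(argv[1], "rb");
          if (!f) { perror("fopen"); return 1; }

          fseek(f, 0, SEEK_END);
          long len = ftell(f);
          rewind(f);

          uint8_t *buf = malloc(len);
          if (!buf || fread(buf, 1, len, f) != (size_t)len) { fclose(f); return 1; }
          fclose(f);

          /* The tag table starts right after the 128-byte header. */
          uint32_t tag_count = be32(buf + 128);
          for (uint32_t i = 0; i < tag_count; i++) {
              const uint8_t *entry = buf + 132 + 12 * i;
              if (memcmp(entry, "vcgt", 4) == 0) {
                  uint32_t off  = be32(entry + 4);   /* offset of tag data from file start */
                  uint32_t size = be32(entry + 8);   /* size of tag data in bytes */
                  printf("found vcgt tag: offset=%u size=%u\n", off, size);
              }
          }
          free(buf);
          return 0;
      }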

    replies(3): >>46184840 #>>46185379 #>>46185476 #
    jacquesm ◴[] No.46185379[source]
    Claude didn't write that code. Someone else did and Claude took that code without credit to the original author(s), adapted it to your use case and then presented it as its own creation to you and you accepted this. If a human did this we probably would have a word for them.
    replies(16): >>46185404 #>>46185408 #>>46185442 #>>46185473 #>>46185478 #>>46185791 #>>46185885 #>>46185911 #>>46186086 #>>46186326 #>>46186420 #>>46186759 #>>46187004 #>>46187058 #>>46187235 #>>46188771 #
    bsaul ◴[] No.46185478[source]
    That's an interesting hypothesis: that LLMs are fundamentally unable to produce original code.

    Do you have papers to back this up? That was also my reaction when I saw some really crazy accurate comments on some vibe-coded piece of code, but I couldn't prove it, and thinking about it now I think my intuition was wrong (i.e., LLMs do produce original, complex code).

    replies(7): >>46185592 #>>46185822 #>>46186708 #>>46187030 #>>46187456 #>>46188840 #>>46191020 #
    1. fpoling ◴[] No.46185822[source]
    Pick up a programming book from the seventies or eighties that was unlikely to have been scanned and fed into an LLM. Take a task from it that even a student can solve within 10 minutes and ask an LLM to write a program for it. If the problem was not really published before, the LLM fails spectacularly.
    replies(4): >>46185881 #>>46185976 #>>46186648 #>>46187999 #
    2. crawshaw ◴[] No.46185881[source]
    This does not appear to be true. Six months ago I created a small programming language. I had LLMs write hundreds of small programs in the language, using the parser, interpreter, and my spec as a guide for the language. The vast majority of these programs were either very close to or exactly what I wanted. No prior source existed for the programming language because I had created it out of whole cloth days earlier.
    replies(2): >>46186205 #>>46186214 #
    3. anjel ◴[] No.46185976[source]
    Sometimes it's generated, and many times it's not. Trivial to denote, but it's been deemed none of your business.
    4. jazzyjackson ◴[] No.46186205[source]
    Obviously you accidentally recreated a language from the 70s :P

    (I created a template language for JSON, added branching and conditionals, and realized I had a whole programming language. I was really proud of my originality until I was reading Ted Nelson's Computer Lib/Dream Machines and found out I had reinvented TRAC and, to some extent, XSLT. Anyway, LLMs are very good at reasoning about it because it can be constrained by a JSON schema. People who think LLMs only regurgitate haven't given them a fair shot)

    replies(1): >>46186346 #
    5. fpoling ◴[] No.46186214[source]
    Languages with reasonable semantics are rather similar and LLMs are good at detecting that and adapting from other languages.
    replies(1): >>46188712 #
    6. zahlman ◴[] No.46186346{3}[source]
    FWIW, I think a JSON-based XSLT-like thing sounds far more enjoyable to use than actual XSLT, so I'd encourage you to show it off.
    7. ahepp ◴[] No.46186648[source]
    You've done this? I would love to read more about it.
    8. handoflixue ◴[] No.46187999[source]
    It's telling that you can't actually provide a single concrete example - because, of course, anyone skilled with LLMs would be able to trivially solve any such example within 10 minutes.

    Perhaps the occasional program that relies heavily on precise visual alignment will fail - but I dare say if we give the LLM the same grace we'd give a visually impaired designer, it can do exactly as well.

    replies(1): >>46189065 #
    9. pertymcpert ◴[] No.46188712{3}[source]
    Sounds like creativity and intelligence to me.
    replies(1): >>46192341 #
    10. tovej ◴[] No.46189065[source]
    I recently asked an LLM to give me one of the most basic and well-documented algorithms in the world: a blocked matrix multiply. It's essentially a few nested loops and some constants for the block size.

    It failed massively, spitting out garbage code whose comments claimed to use blocked access patterns while the code did not actually use them at all.

    LLMs are, frankly, nearly useless for programming. They may solve a problem every once in a while, but once you look at the code, you notice it's either directly plagiarized or of bad quality (or, I suppose, both).
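
    For reference, the textbook version of the algorithm being described really is just a few nested loops plus a block-size constant. A minimal C sketch, with illustrative N and BLOCK values that are not taken from the comment:

      /* Blocked (tiled) matrix multiply: C += A * B, computed tile by tile
       * so that the working set of each tile fits in cache. */
      #include <stdio.h>

      #define N 256      /* matrix dimension (assumed square and divisible by BLOCK) */
      #define BLOCK 32   /* tile size; tuned so tiles of A, B, C fit in cache */

      static double A[N][N], B[N][N], C[N][N];

      static void matmul_blocked(void) {
          for (int ii = 0; ii < N; ii += BLOCK)
              for (int kk = 0; kk < N; kk += BLOCK)
                  for (int jj = 0; jj < N; jj += BLOCK)
                      /* multiply the (ii,kk) tile of A by the (kk,jj) tile of B */
                      for (int i = ii; i < ii + BLOCK; i++)
                          for (int k = kk; k < kk + BLOCK; k++) {
                              double a = A[i][k];
                              for (int j = jj; j < jj + BLOCK; j++)
                                  C[i][j] += a * B[k][j];
                          }
      }

      int main(void) {
          /* fill A and B with something deterministic; C starts zeroed */
          for (int i = 0; i < N; i++)
              for (int j = 0; j < N; j++) { A[i][j] = i + j; B[i][j] = i - j; }

          matmul_blocked();
          printf("C[0][0] = %f\n", C[0][0]);
          return 0;
      }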

    11. tatjam ◴[] No.46192341{4}[source]
    I think the key is that the LLM has no trouble mapping from one "embedding" of a language to another (the task they perform best at!), and that appears extremely intelligent to us humans, but it certainly is not all there is to intelligence.

    But just take a look at how LLMs struggle to handle dynamical, complex systems such as the one in the "vending machine" paper published some time ago. Those kinds of tasks, which we humans tend to think of as "less intelligent" than, say, converting human language to a C++ implementation, seem to have some kind of higher (or at least different) complexity than the embedding mapping done by LLMs. Maybe that's what we typically refer to as creativity? And if so, modern LLMs certainly struggle with that!

    Quite sci-fi that we have created a "mind" so alien we struggle to even agree on the word to define what it's doing :)