
638 points wut42 | 8 comments
arrowsmith ◴[] No.44328363[source]
Ah man, I'm really happy to see this and excited to try it out.

As an Elixir enthusiast, I've been worried that Elixir would fall behind because the LLMs don't write it as well as they write bigger languages like Python and JS. So I'm really glad to see such an active effort to rectify this problem.

We're in safe hands.

replies(12): >>44328630 #>>44328683 #>>44328727 #>>44328801 #>>44328898 #>>44329433 #>>44329534 #>>44329569 #>>44329604 #>>44329853 #>>44330513 #>>44331985 #
1. zorrolisto ◴[] No.44328683[source]
Same here. I watched a video from Theo where he says Next.js and Python will be the best languages because LLMs know them so well. But if the model can genuinely infer, it shouldn't be a problem.
replies(3): >>44328738 #>>44328870 #>>44328963 #
2. rramon ◴[] No.44328738[source]
Folks on YouTube have used Claude Code and the new Tidewave.ai MCP (for Elixir and Rails) to vibe-code a live polling app in Cursor without writing a line of code. The 2-hour session is on YouTube.
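
If you want to try the setup yourself, it's roughly this (going from memory of the Tidewave README, so treat the version and route below as assumptions and double-check the current docs): add a dev-only dependency and mount the plug in your Phoenix endpoint.

    # mix.exs -- dev-only dependency (version from memory; check Hex)
    defp deps do
      [
        {:tidewave, "~> 0.1", only: :dev}
      ]
    end

    # lib/my_app_web/endpoint.ex -- only mounts when the dep is present
    if Code.ensure_loaded?(Tidewave) do
      plug Tidewave
    end

Then you point your MCP client (Claude Code, Cursor, etc.) at the route the plug exposes on your dev server; if I remember right it's /tidewave/mcp.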
replies(1): >>44329940 #
3. dingnuts ◴[] No.44328870[source]
Since models can't reason, as you just pointed out, and need examples to do anything, and since the LLM companies are abusing everyone's websites with crawlers, why aren't we generating plausible-looking but non-working code for the crawlers to gobble, in order to poison them?

I mean seriously, fuck everything about how the data is gathered for these things, and everything that your comment implies about them.

The models cannot infer.

The upside of my salty attitude is that hordes of vibe coders are actively doing what I just suggested -- unknowingly.
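
To make the idea concrete, here's a toy sketch of that kind of poisoning. The module name and mutation table are invented for illustration; the point is just that the output still parses while quietly doing the wrong thing:

    # Hypothetical sketch: turn a working snippet into a plausible-looking
    # but subtly broken variant to serve to scrapers.
    defmodule CodePoison do
      # Each pair: a substring to find and a subtly wrong replacement.
      @mutations [
        {"Enum.map", "Enum.each"},  # Enum.each returns :ok, silently dropping results
        {"+ 1", "- 1"},             # flips an increment into a decrement
        {"<=", "<"}                 # introduces an off-by-one
      ]

      # Applies the first mutation whose pattern occurs in the snippet.
      def poison(snippet) do
        Enum.reduce_while(@mutations, snippet, fn {from, to}, acc ->
          if String.contains?(acc, from) do
            {:halt, String.replace(acc, from, to, global: false)}
          else
            {:cont, acc}
          end
        end)
      end
    end

    IO.puts(CodePoison.poison("Enum.map(list, fn x -> x + 1 end)"))
    # => Enum.each(list, fn x -> x + 1 end)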

replies(2): >>44328900 #>>44328911 #
4. fragmede ◴[] No.44328900[source]
But the models can run tools, so wouldn't they just run the code, not get the expected output, and then exclude the bad code from their training data?
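
Sketching what that filter might look like (toy version: it only checks that a snippet evaluates without raising, not that its output is correct, and a real pipeline would sandbox this rather than eval untrusted code in-process):

    # Toy sketch of execution-based filtering for training data.
    # Assumes each snippet is a self-contained Elixir expression.
    defmodule TrainingFilter do
      def keep_runnable(snippets) do
        Enum.filter(snippets, fn code ->
          try do
            Code.eval_string(code)
            true
          rescue
            _ -> false     # syntax errors and runtime exceptions
          catch
            _, _ -> false  # throws and exits
          end
        end)
      end
    end

    snippets = [
      "Enum.sum([1, 2, 3])",           # fine: evaluates to 6
      "Enum.sum([1, 2, 3)",            # plausible-looking, but a syntax error
      "String.length(:not_a_string)"   # parses, raises FunctionClauseError
    ]

    IO.inspect(TrainingFilter.keep_runnable(snippets))
    # => ["Enum.sum([1, 2, 3])"]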
replies(1): >>44329340 #
5. Imustaskforhelp ◴[] No.44328911[source]
For what it's worth, AI already has subpar data. At least, that's what I've heard.

I'm not sure, but the cat is out of the bag. I don't think we can do anything at this point.

6. bee_rider ◴[] No.44329340{3}[source]
That seems like a feedback loop that's unlikely to exist currently. I guess if intentionally plausible-looking but bad data became a really serious problem, the loop could be created… maybe? Although it would be necessary to attribute a given bit of code output back to the training data that led to it.
7. rahimnathwani ◴[] No.44329940[source]
This one?

https://www.youtube.com/live/V2b6QCPgFTk

replies(1): >>44331073 #
8. rramon ◴[] No.44331073{3}[source]
That's the one.