
627 points cratermoon | 5 comments | | HN request time: 0.284s | source
gyomu ◴[] No.44461457[source]
Broadly agreed with all the points outlined in there.

But for me the biggest issue with all this, one I don't see covered here (or only in passing), is what all of this is doing to beginners, and the learning pipeline.

> There are people I once respected who, apparently, don’t actually enjoy doing the thing. They would like to describe what they want and receive Whatever — some beige sludge that vaguely resembles it. That isn’t programming, though.

> I glimpsed someone on Twitter a few days ago, also scoffing at the idea that anyone would decide not to use the Whatever machine. I can’t remember exactly what they said, but it was something like: “I created a whole album, complete with album art, in 3.5 hours. Why wouldn’t I use the make it easier machine?”

When you're a beginner, it's totally normal to not really want to put in the hard work. You try drawing a picture, and it sucks. You try playing the guitar, and you can't even get simple notes right. Of course a machine where you can just say "a picture in the style of Pokémon, but of my cat" and get a perfect result out is much more tempting to a 12-year-old kid than the prospect of having to grind for 5 years before being kind of good.

But up until now, you had no choice but to keep making crappy pictures and playing crappy songs until you actually started to develop a taste for the effort, and a few years later you'd find yourself actually pretty darn competent at the thing. That's a pretty virtuous cycle.

I shudder to think where we'll be if the corporate-media machine keeps hammering the message "you don't have to bother learning how to draw, drawing is hard, just get ChatGPT to draw pictures for you" to young people for years to come.

replies(16): >>44461502 #>>44461693 #>>44461707 #>>44461712 #>>44461825 #>>44461881 #>>44461890 #>>44462182 #>>44462219 #>>44462354 #>>44462799 #>>44463172 #>>44463206 #>>44463495 #>>44463650 #>>44464426 #
autumnstwilight ◴[] No.44461825[source]
I learned Japanese by painstakingly translating interviews and blog posts from my favorite artist 15+ years ago, dictionary in hand. I also live and work in Japan now. Today I can click a button under the artist's tweets and get an instant translation that looks coherent (and often is, though it can also be quite wrong maybe 1/10 times).

In terms of the artist being accessible to overseas fans it's a great improvement, but I do wonder if I had grown up with this, would I have had any motivation to learn?

replies(2): >>44461882 #>>44498579 #
franciscop ◴[] No.44461882[source]
I am learning Japanese (again) now and it's such a stark improvement vs when I first tried. When I don't understand something, LLMs explain it perfectly well, and with a bit of prompting they give me the right practice bits I need for my level.

For a specific example, when 2 grammar points seem to mean the same thing, teachers here in Japan would either not explain the difference, or make a confusing explanation in Japanese.

It's still private-ish/only for myself, but I generated all of this with LLMs and am using it to learn (I'm around N4~N3):

- Grammar: https://practice.cards/grammar

- Stories, with audio (takes a bit to load): https://practice.cards/stories

It's true though that you still need the motivation, but there are two sides to AI here, and I just wanted to give the other side.

replies(2): >>44461970 #>>44463295 #
1. andrepd ◴[] No.44463295[source]
> When I don't understand something, LLMs explain it perfectly well

But my man, how do you know if it explains perfectly well or is just generating plausible-sounding slop? You're learning, so by definition you don't know!

replies(3): >>44463508 #>>44463540 #>>44464516 #
2. franciscop ◴[] No.44463508[source]
Because at the beginning I didn't trust it and verified it in many different ways. I've got a fairly decent understanding of what LLMs hallucinate regarding language learning and levels, and luckily/unluckily I'm far enough along for this not to be a concern. E.g. I recently asked about the difference between 回答 and 解答, and the answer was pretty good.

I also checked with some Japanese speakers, and my own notes contain more errors than the LLMs' output by a large margin.

3. ted_bunny ◴[] No.44463540[source]
GPT was dismally, consistently wrong about 101-level French grammar. Isn't that info all over the internet? Shouldn't that be an easy task?
4. Tijdreiziger ◴[] No.44464516[source]
Yeah, my experience with AI and Japanese is quite the opposite. I used to use the Drops app for learning vocabulary, until they added genAI explanations, because the explanations were just wrong half the time! I had to uninstall the app!

Similarly, I used the Busuu app for a while. One of its features is that you can speak or type sentences, and ask native speakers to give feedback. But of course, you have to wait for the feedback (especially with time zone differences), so they added genAI feedback.

Like, what’s the point of this? It’s like that old joke: “We have purposely trained him wrong, as a joke”!

replies(1): >>44498604 #
5. johnnyanmac ◴[] No.44498604[source]
Sounds like the other user tolerates the flaws and works around them, while you don't, and expect not to have to fact-check every statement. I guess people's approaches to this can vary widely.

I am more on your side, personally. When learning, I do not want to spend half my time scrutinizing the teacher. At least not on the objective fundamentals. If the fundamentals are broken, how do I trust anything built on top?

And that's the scary thing: LLMs don't "build on top"; they more or less do the equivalent of running around the globe first and coming back with an answer.