
323 points timbilt | 2 comments
wcfrobert ◴[] No.42131165[source]
Lots of interesting debates in this thread. I think it is worth placing writing/coding tasks into two buckets. Are you producing? Or are you learning?

For example, I have zero qualms about relying on AI at work to write progress reports and code up some scripts. I know I can do it myself but why would I? I spent many years in college learning to read and write and code. AI makes me at least 2x more efficient at my job. It seems irrational not to use it. Like a farmer who tills his land by hand rather than relying on a tractor because it builds character or something. But there is something to be said about atrophy. If you don't use it, you lose it. I wonder if my coding skill will deteriorate in the years to come...

On the other hand, if you are a student trying to learn something new, relying on AI requires walking a fine line. You don't want to over-rely on AI because a certain degree of "productive struggle" is essential for learning something deeply. At the same time, if you under-rely on AI, you drastically decrease the rate at which you can learn new things.

In the old days, people were fit because of physical labor. Now people are fit because they go to the gym. I wonder if there will be an analog for intellectual work. Will people be going to "mental" gyms in the future?

replies(9): >>42131209 #>>42131502 #>>42131788 #>>42132365 #>>42133145 #>>42133517 #>>42133877 #>>42134499 #>>42136622 #
mav3ri3k ◴[] No.42132365[source]
A current 3rd year college student here. I really want LLMs to help me in learning but the success rate is 0.

They often cannot generate relatively trivial code. When they do, they cannot explain that code. For example, I was trying to learn socket programming in C. Claude generated the code, but when I started asking about stuff, it regressed hard. Also, the code is often more complex than it needs to be. When learning a topic, I want that topic, not the most common relevant code with all the spaghetti used on GitHub.

For other subjects, like DBMS and computer networks, when asking about concepts you'd better double-check, because they still make stuff up. I asked ChatGPT to solve a previous year's exam question for DBMS, and it gave a long answer that looked good on the surface. But when I actually read through it, because I need to understand what it is doing, there were glaring flaws. When I pointed them out, it made other mistakes.

So: LLMs struggle to generate concise, to-the-point code. They cannot explain that code. They regularly make stuff up. This is after trying Claude, ChatGPT, and Gemini, with their paid versions, in various capacities.

My bottom line is: I should NEVER use an LLM to learn. There is no fine line here. I have tried again and again, because tech bros keep preaching about sparks of AGI and making startups with zero coding skills. They are either fools or geniuses.

LLMs are useful strictly if you already know what you are doing. That's when your productivity gains are achieved.

replies(4): >>42132578 #>>42132722 #>>42134012 #>>42134414 #
owenpalmer ◴[] No.42132722[source]
I'm starting to suspect that people generally have poor experiences with LLMs due to bad prompting skills. I would need to see your chats with it in order to know if you're telling the truth.
replies(3): >>42132863 #>>42133307 #>>42133607 #
WhyOhWhyQ ◴[] No.42132863[source]
The simpler explanation is that LLMs are not very good.
replies(1): >>42132990 #
owenpalmer ◴[] No.42132990[source]
I can get an LLM to do almost anything I want. Sometimes I need to add a lot of context. Sometimes I need to completely rewrite the prompt after realizing I wasn't communicating clearly. I almost always have to ask it to explain its reasoning. You can't treat an LLM like a computer. You have to treat it like a weird brain.
replies(2): >>42133071 #>>42133721 #
Iulioh ◴[] No.42133071[source]
The point is, your position runs up against an inherent characteristic of LLMs.

LLMs hallucinate.

That's true, and given how they are built, it cannot be false.

Anything they generate cannot be trusted and has to be verified.

They are good at generating fluff, but I wouldn't rely on them for anything.

Ask at what temperature glass melts and you will get 5 different answers, none of them true.

replies(1): >>42133135 #
owenpalmer ◴[] No.42133135[source]
It got the question correct in 3 trials, one of which used the smaller model.

GPT4o

https://chatgpt.com/share/673578e7-e34c-8006-94e5-7e456aca6f...

GPT4o

https://chatgpt.com/share/67357941-0418-8006-a368-7fe8975fbd...

GPT4o-mini

https://chatgpt.com/share/673579b1-00e4-8006-95f1-6bc95b638d...

replies(1): >>42133199 #
Iulioh ◴[] No.42133199[source]
The problem with these answers is that they are right but misleading in a way.

Glass is not a pure substance, so that temperature is the "production temperature"; as an amorphous material, it ""melts"" the way a plastic material ""melts"", and it can be worked at temperatures as low as 500-700 °C.

I feel like without that qualification, the answer is wrong by omission.

What "melts" means when you are not working with a pure element is pretty messy.

This came up in a discussion about a project with a friend who is too obsessed with GPT (we needed that second temperature, and I was like, "this can't be right... it's too high").

replies(1): >>42133322 #
mav3ri3k ◴[] No.42133322[source]
Yes. This is funny when I know what is happening, because then I can "guide" the LLM to the right answer. I feel that is the only correct way to use LLMs, and it is very productive. However, for learning, I don't know how anyone can rely on them when we know this happens.