wcfrobert No.42131165
Lots of interesting debates in this thread. I think it is worth placing writing/coding tasks into two buckets. Are you producing? Or are you learning?

For example, I have zero qualms about relying on AI at work to write progress reports and code up some scripts. I know I can do it myself but why would I? I spent many years in college learning to read and write and code. AI makes me at least 2x more efficient at my job. It seems irrational not to use it. Like a farmer who tills his land by hand rather than relying on a tractor because it builds character or something. But there is something to be said about atrophy. If you don't use it, you lose it. I wonder if my coding skill will deteriorate in the years to come...

On the other hand, if you are a student trying to learn something new, relying on AI requires walking a fine line. You don't want to over-rely on AI because a certain degree of "productive struggle" is essential for learning something deeply. At the same time, if you under-rely on AI, you drastically decrease the rate at which you can learn new things.

In the old days, people were fit because of physical labor. Now people are fit because they go to the gym. I wonder if there will be an analog for intellectual work. Will people be going to "mental" gyms in the future?

sbuttgereit No.42131788
"But there is something to be said about atrophy. If you don't use it, you lose it. I wonder if my coding skill will deteriorate in the years to come..."

"You don't want to over-rely on AI because a certain degree of "productive struggle" is essential for learning something deeply."

These two ideas are closely related; really, they're different aspects of the same basic frailty of the human intellect. Understanding that, I think, can really inform how you use these tools in work (or life) and where the lines need to be drawn for your own personal circumstances.

I can't say I disagree with anything you said and think you've made an insightful observation.

kamaal No.42132729
>>I wonder if my coding skill will deteriorate in the years to come...

Well, that's not how LLMs work. Don't use an LLM to do the thinking for you. Use LLMs to do the work for you, while you (after thinking) tell them what's to be done.

Basically, things like:

- Attach a click handler to this button with x, y, z params, and on click route it to the path /a/b/c.

- Change the color of this header to purple.

- Parse the JSON in param 'payload', pick up the value under this>then>that, and return it.

That kind of dictation (a rough sketch of what it yields is below).
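
To make that concrete, here is a sketch of the sort of code such dictation produces (my illustration, not from the thread; the selector, param values, and payload shape are hypothetical):

    // "Attach a click handler to this button with x, y, z params and on click route it to /a/b/c"
    const button = document.querySelector<HTMLButtonElement>('#save-btn'); // hypothetical button id
    button?.addEventListener('click', () => {
      const params = new URLSearchParams({ x: '1', y: '2', z: '3' }); // illustrative param values
      window.location.assign(`/a/b/c?${params}`);
    });

    // "Change the color of this header to purple."
    document.querySelector<HTMLElement>('h1')?.style.setProperty('color', 'purple');

    // "Parse the JSON in param 'payload' and pick up the value under this>then>that and return"
    function extract(payload: string): unknown {
      const parsed = JSON.parse(payload); // JSON.parse returns 'any', so the chain below typechecks
      return parsed?.this?.then?.that;
    }

Each instruction maps onto a handful of lines; the thinking about what to wire up, and where, stays with you.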

You don't ask big questions like 'Write me a todo app' or 'Write me this dashboard'; those are too broad.

You will still continue to code and work like you always have, except that you now have a good coding assistant that does the chore of typing for you.

dawidloubser No.42134781
I think that anybody who finds it more efficient to clumsily describe the above examples to an LLM in a text box, in English, and wait for it to spit out code they hope suits their given programming context and codebase, rather than just expressing the logic directly in their programming language in an efficient editor, probably suffers from multiple weaknesses:

- Poor editor / editing setup

- Poor programming language and knowledge thereof

- Poor APIs and/or knowledge thereof

Mankind has worked for decades to develop elegant and succinct programming languages within which to express problems and solutions, and compilers with deterministic behaviour to "do the work for us".
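
As a small aside (my own illustration, reusing the header example from upthread): in such a language, the English instruction is already longer than the code it asks for.

    document.querySelector<HTMLElement>('h1')!.style.color = 'purple'; // "change the color of this header to purple"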

I am surprised that so many people in the software engineering field are prepared to just throw all of this away (never mind develop it further) in exchange for using a poor "programming language" (say, English) to express problems clumsily and in a roundabout way. The "source code" (the LLM prompt) is then thrown away entirely, and the "compiler output" (code the LLM spewed out, which may or may not be suitable or correct) is simply pasted into a heterogeneous mess of other LLM outputs: a codebase held together by nothing more than the law of averages, and hope.

Then there's the fun fact that every single LLM prompt interaction consumes a ridiculous amount of energy - I have heard figures on the order of what it takes to recharge a smartphone battery - in an era where mankind is racing towards an energy cliff. Vast, remote data centres filled with GPUs spew tonnes of CO₂ and massive amounts of heat to power your "programming experience".
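
For scale, a rough back-of-envelope taking that heard figure at face value (my numbers, not measurements): a smartphone battery stores roughly 10-15 Wh, so at ~12 Wh per prompt, a developer making 100 prompts a day would burn ~1.2 kWh/day on inference alone. Published per-query estimates vary widely, from well under 1 Wh to several Wh, so treat this as an upper-bound illustration rather than a fact.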

In my opinion, LLMs are a momentous achievement with some very interesting use-cases, but they are just about the most ass-backwards and illogical way of advancing the field of programming possible.

dasKrokodil No.42135178
At the time I'm writing this, there are over 260 comments on this article, and yours is still the only one that mentions the enormous energy consumption.

I wonder whether this is because people don't know about it or because they simply don't care...

But I, for one, try to use AI as sparingly as possible for this reason.

ctrl4th No.42136529
You're not alone. With the inclusion of Gemini-generated answers in Google search, it's going down the road of most capitalistic things: you can see that something is wrong, but you have no option but to use it, even if you don't want to.
dawidloubser No.42145049
I like to idealistically think that in a capitalistic (free market) society we absolutely have the option to not use things that we think are wrong or don't like.

Change your search engine to one that doesn't include AI-generated answers. If none exist any more, all of Google's customers could write to them telling them that they don't want this feature and are switching away from them because of it, etc.

I know that internet-scale search is perhaps a bad example because it's so extremely difficult and expensive to build and run, but ultimately the choice is in the consumers' hands.

If the market makes it clear that there is a need for a search engine without LLM-generated answers at the top, somebody will provide one! It's complacency and acceptance that lead apparently-delusional companies to just push features and technologies that nobody wants.

I feel much the same way about the ridiculous things happening with cars and the automotive sector in general.