
395 points by pseudolus | 8 comments
ilrwbwrkhv No.43633555
The AI bubble seems close to collapsing. God knows how many billions have been invested, and we still don't have an actual use case for AI that is good for humanity.
replies(4): >>43633642 >>43633992 >>43634036 >>43640570
1. boredemployee No.43633642
I think I understand what you're trying to say.

We certainly improve productivity, but that is not necessarily good for humanity. It could even make things worse.

e.g.: my company already expects some tasks to take less time because they _know_ I'll probably use some AI to do them. Which means I can handle more work in a given week if the metric is "labour", but you end up with your brain completely melted.

replies(2): >>43633701 >>43633863
2. bluefirebrand No.43633701
> We certainly improve productivity

I think this is really still up for debate

We certainly produce more output, but if it's of lower overall quality than our previous output, is that really "improved productivity"?

There has to be a tipping point somewhere, where faster output of low-quality work actually decreases productivity because of the effort now required to keep the tower of garbage from toppling.

replies(1): >>43634003
3. DickingAround No.43633863
I think the core of the 'improved productivity' question will ultimately be impossible to answer. We would want to know whether productivity improved over the lifetime of a society, perhaps hundreds of years, and we will have no clear A/B test from which to draw causal relationships.
replies(1): >>43634040
4. fourseventy No.43634003
It's not up for debate. Ask any programmer if LLMs improve productivity and the answer is 100% yes.
replies(3): >>43634049 >>43634190 >>43643260
5. AlexandrB No.43634040
This is exactly right. It also depends on how all the AGI promises shake out. If AGI really does emerge soon, it might not matter anymore whether students have any foundational knowledge. On the other hand, if you still need people to know stuff in the future, we might be creating a generation of citizens incapable of doing the job. That could be catastrophic in the long term.
6. AlexandrB No.43634049
Meanwhile in this article/thread you have a bunch of programmers complaining that LLMs don't improve overall productivity: https://news.ycombinator.com/item?id=43633288
7. bluefirebrand No.43634190
I am a programmer, and my opinion is that all of the AI tooling my company is making me use gets in the way about as often as it helps. It's probably a net negative overall, because any code it produces takes me longer to review and verify for correctness than it would take to just write it myself.

Does my opinion count?

8. globnomulous No.43643260
> It's not up for debate. Ask any programmer if LLMs improve productivity and the answer is 100% yes.

Programmer here. The answer is 100% no. The programmers who think they're saving time are racking up debts they'll pay later.

The debts will come due when they find they've learned nothing about a problem space and failed to become experts in it, despite having "written" the feature that deals with it and despite owning it.

Or they'll come due as their failure to hone their skills in technical problem solving catches up to them.

Or they'll come due when they have to fix a bug the LLM produced: either they'll have no idea how, or they'll manage to fix it but then have to explain, to a manager or customer, that they committed code to the codebase that they didn't understand.