
760 points MindBreaker2605 | 6 comments
sebmellen ◴[] No.45897467[source]
Making LeCun report to Wang was the most boneheaded move imaginable. But… I suppose Zuckerberg knows what he wants, which is AI slopware and not truly groundbreaking foundation models.
replies(20): >>45897481 #>>45897498 #>>45897518 #>>45897885 #>>45897970 #>>45897978 #>>45898040 #>>45898053 #>>45898092 #>>45898108 #>>45898186 #>>45898539 #>>45898651 #>>45898727 #>>45899160 #>>45899375 #>>45900884 #>>45900885 #>>45901421 #>>45903451 #
gnaman ◴[] No.45897498[source]
He is also not very interested in LLMs, and that seems to be Zuck's top priority.
replies(2): >>45897523 #>>45898412 #
tinco ◴[] No.45897523[source]
Yeah, I think LeCun is underestimating the impact that LLMs and diffusion models are going to have, even considering the huge impact they're already having. That's no problem, as I'm sure whatever LeCun is working on is going to be amazing as well, but an enterprise like Facebook can't have their top researcher work on risky things when there are surefire paths to success still available.
replies(12): >>45897552 #>>45897567 #>>45897579 #>>45897666 #>>45897673 #>>45898027 #>>45898041 #>>45898615 #>>45898873 #>>45899785 #>>45900106 #>>45900288 #
1. netdevphoenix ◴[] No.45898615{3}[source]
>the huge impact they're already having

In the software development world, yes; outside of that, virtually none. Yes, you can transcribe a video call in Office, but that's not groundbreaking. I dare you to list 10 impacts that LLM/diffusion models are having on different fields, excluding tech, including at least half blue-collar fields and at least half white-collar fields, and spanning levels from the lowest to the highest in the company hierarchy. Impact here specifically means a significant reduction of costs or a significant increase in revenue. Go on.

replies(4): >>45898646 #>>45898734 #>>45898854 #>>45898857 #
2. arcticbull ◴[] No.45898646[source]
I'm also not sure it even drives a ton of value in software engineering. It makes the easy part easier and the hard part harder. Typing out the code you already have in your mind was never the difficult part. Figuring out what to write, how to interpret specs in context, how to make your code work within the context of a broader whole, how to be extensible, maintainable, reliable, etc. That's hard, and LLMs really don't help.

Even when writing, it shifts the mental burden from an easy thing (writing code) to a very hard thing (reading that code, validating that it's right and hallucination-free, and then refactoring it to match your team's code style and patterns).

It's great for auto-complete and for building a first-order approximation of a tech-demo app that you then throw out and rebuild from scratch. In my experience, anyway. I'm sure others have had different experiences.

3. pegasus ◴[] No.45898734[source]
You already mentioned two fields they have a huge impact on: software development and NLP (the latter the most impacted so far). Another field that comes to mind is academic research, which is getting an important boost as well, via semantic search and more advanced tools like Google's biological cell model, which has already uncovered new treatments. I'm sure I'm missing a lot of other fields I'm less familiar with (legal, for example). But just the impacts I listed are all huge, and they will indirectly have a huge impact on all other areas of human industry; it's just a matter of time. "Software will eat the world" and all that.
4. olalonde ◴[] No.45898854[source]
Personally, I find myself using LLMs more than Google now, even for non-development tasks. I think this shift is going to become the new normal (if it isn't already).
replies(1): >>45901884 #
5. antegamisou ◴[] No.45898857[source]
I don't think you'll find many here who believe anything outside tech is worth investing in. It's schizophrenic, isn't it?
6. antegamisou ◴[] No.45901884[source]
And what's the end result? All one can see is a bigger share of people who confidently subscribe to false information and become arrogant when its validity is questioned, because the LLM writing style has convinced them it's some sort of authority. Even people on this website are misinformed enough to believe that ChatGPT has developed its own reasoning, despite it being, at its core, an advanced learning algorithm trained on an enormous amount of human-generated data.

And let's not even speak of those so deep in sloth that they put it to use to degrade, rather than augment as they claim, human creative and recreational activities.

https://archive.ph/fg7HE