646 points bradgessler | 15 comments
abathologist ◴[] No.44010933[source]
I think we are going to see a vast partitioning of society in the coming months and years.

The process of forming expressions just is the process of conceptual and rational articulation (as per Brandom). Those who misunderstand this -- believing that concepts are ready made, then encoded and decoded from permutations of tokens, or, worse, who have no room to think of reasoning or conceptualization at all -- they will be automated away.

I don't mean that their jobs will be automated: I mean that they will cede sapience and resign to becoming robotic. A robot is just a "person whose work or activities are entirely mechanical" (https://www.etymonline.com/search?q=robot).

I'm afraid far too many are captive to the ideology of productionism (which is just a corollary of consumerism). Creative activity is not about content production. The aim of our creation is communication and mutual transformation. Generation of digital artifacts may be useful for these purposes, but most uses seem to assume content production is the point, and that is a dark, sad, dead end.

replies(6): >>44011338 #>>44011643 #>>44012297 #>>44012674 #>>44012689 #>>44017606 #
1. emporas ◴[] No.44011643[source]
It is knowledge that gets automated, rather than reasoning.

I was thinking of the first solar civilization, one that lives entirely in space: near a star, but not on a planet, with no gravitational pull anywhere. They build tubes 10 km long; a dartboard is placed at one end and the players stand at the other. They throw darts at the board, and each dart takes 5 hours to reach the target. That's their national sport.

Problem is, I have never played darts and I don't know anyone who plays it, so I will ask the LLM to fill in the blanks of how a story based on that game could be constructed. Then I will layer my own story on top of that: fix anything that doesn't fit, add some stuff, remove some other stuff, and so on.

For me it saves time: instead of asking people about something, hearing them talk about it, or watching them do it, I do data mining on words. It may be shallower than experiencing it myself or asking people who know it first hand, but the time it takes to get information that's good enough collapses down to 5 minutes.

Depending on how you use it, it can enhance human capabilities or, indeed, mute them.

replies(4): >>44011741 #>>44011921 #>>44012186 #>>44012486 #
2. jen729w ◴[] No.44011741[source]
Oh, turns out ChatGPT generates exactly the level of banality one would expect.

https://chatgpt.com/canvas/shared/6827fcdd3ec88191ab6a2f3297...

I don't want to read this story. I probably want to read one that a human author laboured over.

replies(2): >>44012195 #>>44012474 #
3. 8note ◴[] No.44011921[source]
hmm

I've been thinking that the knowledge isn't written down, so it can't be automated, which also makes knowledge sharing hard, but the reasoning is automated.

So I've been trying to figure out patterns by which the knowledge does get written down, and so can be reasoned about.

4. jrvarela56 ◴[] No.44012186[source]
My initial hunch, and many answers on this site, say 'it's boring, I wouldn't read that'.

There’s something to that: a good author synthesizes experiences into sentences/paragraphs, making the reader feel things via text.

I have a feeling LLMs can't do that because they are trained on all the crap that's been written, and it's hard to fake being genuine.

But I agree you can generate any amount of filler/crap. It is useful, but what I got from GP was ‘ultimately, what’s the point of that?’. Hopefully these tools help us wake up to what is important.

5. visarga ◴[] No.44012195[source]
It would be a mistake to take the banality of current LLM outputs and extrapolate that into the future. Of course they are going to get better. But that is not the point - it is that in the chat room the human and LLM spark ideas off each other. Humans come with their own unique life experience and large context, LLMs come with their broad knowledge and skills.
replies(3): >>44012320 #>>44012479 #>>44014795 #
6. aorloff ◴[] No.44012320{3}[source]
There is a Borges short story written in the 1930s about "the Library", a supposed collection of all possible permutations of language, even misspellings and gibberish. In many ways it is extremely prescient of AI.

To cut it short: in the end, what Borges proposed is that meaning comes from the stories, and that all the stories are really repetitions and permutations of the same set of human stories (the Order), and that is what makes meaning.

So all a successful literary AI needs to do is figure out how to retell the same stories we have been telling but in a different context that is resonant today.

Simple, right?

7. WhyIsItAlwaysHN ◴[] No.44012474[source]
o3's story is not amazing, but it sure is orders of magnitude more interesting than your example:

https://chatgpt.com/share/68282eb2-e53c-8000-853f-9a03eee128...

I don't think it's possible to generate an acceptable story without reasoning.

That is not to say that I disagree with you. I would prefer to read human authors even if the AI was great at writing stories, because there's something alluring about getting a glimpse into a world that somebody else created in their head.

replies(1): >>44015618 #
8. parodysbird ◴[] No.44012479{3}[source]
This is basically a contemporary reframing of the core purpose of Renaissance magic. I suppose aspiring to be a 21st century John Dee from talking to some powerful chatbot of the future, rather than angels or elemental beings, does sound a bit exciting, but it is ultimately mysticism all the same.
9. campers ◴[] No.44012486[source]
There is a huge focus on training LLMs to reason; that ability will slowly (or not so slowly, depending on your timeframe!) but surely improve, given the gargantuan amount of money and talent being thrown at the problem. To what level, we'll have to wait and see.
10. bccdee ◴[] No.44014795{3}[source]
> It would be a mistake to take the banality of current LLM outputs and extrapolate that into the future.

Imagine a chef, congenitally unable to taste or smell food, who has nevertheless studied a million recipes. Can they reproduce existing recipes? Sure, if they follow the instructions perfectly. Can they improvise original recipes? I doubt it. Judging by the instructions alone, the recipes they invent may be indistinguishable from real recipes, but this chef can never actually try their food to see if it tastes good. The only safe flavour combinations are the ones they reuse. This is a chef who cannot create.

LLMs are structurally banal. The only plausible route to a machine which can competently produce original art requires the development of a machine which can accurately model humans' aesthetic sensibilities—something which humans themselves cannot do and have no need for, since we already have those aesthetic sensibilities built in.

This is the fundamental error of using an LLM as a ghostwriter. Humans don't only bring inspiration to the table—they also bring the aesthetic judgement which shapes the final product. Sentences written by an LLM are banal sentences, no matter how you prompt it.

replies(2): >>44015889 #>>44017527 #
11. randcraw ◴[] No.44015618{3}[source]
> I don't think it's possible to generate an acceptable story without reasoning.

If I look back at any article, book, movie, or conversation that I liked, it always had this essential ingredient: it had to make sense, AND it had to introduce some novel fact (or idea) whose implications were entertaining somehow (intriguing, revelatory, amusing, etc.).

Would this be possible without the author having some idea of how reasoning works? Or of what facts are novel or could lead to surprise of some kind? No is the obvious answer to both. Until I see clear evidence that LLMs have mastered both logic and the concept of what knowledge is and is not intriguing to a human, I foresee little creative output from any LLM that will 'move the needle' creatively.

Until then, LLM-generated fare will remain the uninspired factory output of infinite monkeys at typewriters...

12. emporas ◴[] No.44015889{4}[source]
Head over to groq.com, use the qwen-qwq-32b model, take these examples [1], and put them at the start, before the prompt. After that, use the following prompt:

write chapter 1 for a new Novel in Progress, take inspiration from the example Novel but DO NOT Repeat Example. Add vivid imagery, in a dark comedy style. dial up the humor and irony and use first person narration. Fracture sentences and emphasize the unusual: use unusual word orders, such as placing adjectives after nouns or using nouns as verbs, use linguistic voice pyrotechnics, telegraphically leaned and verbal agility in plot building intention, reflection, dialog, action, and describe solar civilization, which lives totally in space. Near a star, but not in a planet, and no gravitational pull anywhere.

[1] https://gist.github.com/pramatias/953f6e3420f46f31410e8dd3c8...
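The workflow above (prepend example passages as few-shot context, then append the writing instruction) can be sketched in a few lines against Groq's OpenAI-compatible chat endpoint. This is a minimal sketch, not from the thread: the endpoint URL, the helper names, and the message layout are my assumptions.

```python
# Sketch: few-shot story generation via an OpenAI-compatible chat API.
# Assumptions: Groq's /openai/v1/chat/completions endpoint, a GROQ_API_KEY
# env var, and the qwen-qwq-32b model name mentioned in the comment.
import json
import os
import urllib.request


def build_messages(examples: list[str], instruction: str) -> list[dict]:
    """Place the example novel excerpts before the actual writing instruction,
    mirroring the 'put them at the start, before the prompt' step."""
    context = "\n\n---\n\n".join(examples)
    return [
        {"role": "system", "content": "You are a fiction-writing assistant."},
        {"role": "user",
         "content": f"Example novel excerpts:\n\n{context}\n\n{instruction}"},
    ]


def generate(messages: list[dict]) -> str:
    """Send the assembled messages to the model and return its reply."""
    req = urllib.request.Request(
        "https://api.groq.com/openai/v1/chat/completions",
        data=json.dumps({"model": "qwen-qwq-32b",
                         "messages": messages}).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['GROQ_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Swapping the `examples` list is the "examples have to be adjusted" step mentioned below: the few-shot context, not the model, carries most of the style.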

replies(1): >>44015972 #
13. techno_tsar ◴[] No.44015972{5}[source]
This is unreadable slop.
replies(1): >>44016179 #
14. emporas ◴[] No.44016179{6}[source]
Depending on the story, the examples have to be adjusted. But of course, human logical reasoning cannot be replicated just like that by the machines.

The real question is this: suppose a person over the last 100 years was great at reasoning, but had zero knowledge. That person might not have attended any school and might be almost illiterate, but his reasoning is top notch. I don't know if you are familiar with Sultan Khan [1], for example.

With no formal training to absorb a lot of knowledge, that person is totally economically crushed. There is no chance of being competitive at anything, not involving muscles anyway. Now suppose that this person can complement his lack of knowledge with a magical knowledge machine. Suddenly he is ahead of the competition, even against people with 10 PhDs or doctors with 30 years of experience.

[1] https://en.wikipedia.org/wiki/Sultan_Khan_(chess_player)

15. imperfect_blue ◴[] No.44017527{4}[source]
As an amateur home-cook, I find current LLMs incredibly useful as a sounding board for the on-the-fly recipe modifications - for allergies and food sensitivities, adapting preparation methods to available equipment, or substituting produce not available in season. It may not be able to taste the final product, but its reasoning on what's likely to work (and what isn't) has not led me wrong so far.