
644 points bradgessler | 5 comments | | HN request time: 0.946s | source
abathologist ◴[] No.44010933[source]
I think we are going to be seeing a vast partitioning in society in the next months and years.

The process of forming expressions just is the process of conceptual and rational articulation (as per Brandom). Those who misunderstand this -- believing that concepts are ready-made, then merely encoded into and decoded from permutations of tokens, or, worse, leaving no room for reasoning or conceptualization at all -- will be automated away.

I don't mean that their jobs will be automated: I mean that they will cede sapience and resign to becoming robotic. A robot is just a "person whose work or activities are entirely mechanical" (https://www.etymonline.com/search?q=robot).

I'm afraid far too many are captive to the ideology of productionism (which is just a corollary of consumerism). Creative activity is not about content production. The aim of our creation is communication and mutual-transformation. Generation of digital artifacts may be useful for these purposes, but most uses seem to assume content production is the point, and that is a dark, sad, dead end.

replies(6): >>44011338 #>>44011643 #>>44012297 #>>44012674 #>>44012689 #>>44017606 #
emporas ◴[] No.44011643[source]
It is knowledge that gets automated, rather than reasoning.

I was thinking of the first solar civilization, one that lives entirely in space: near a star, but not on a planet, with no gravitational pull anywhere. They build tubes 10 km long; a dartboard is mounted at one end and the players stand at the other. They throw darts at the board, and each throw takes 5 hours to reach the target. That's their national sport.

Problem is, I have never played darts and I don't know anyone who plays it, so I will ask the LLM to fill in the blanks of how a story based on that game could be constructed. Then I will add my own story on top of that: I will fix anything that doesn't fit, add some stuff, remove some other stuff, and so on.

For me it saves time: instead of asking people about something, hearing them talk about it, or watching them do it, I do data mining on words. Maybe it is more shallow than experiencing it myself or asking people who know about it first hand, but the time it takes to gather information that is good enough collapses down to 5 minutes.

Depending on how you use it, it can enhance human capabilities or, indeed, mute them.

replies(4): >>44011741 #>>44011921 #>>44012186 #>>44012486 #
jen729w ◴[] No.44011741[source]
Oh, turns out ChatGPT generates exactly the level of banality that one would expect.

https://chatgpt.com/canvas/shared/6827fcdd3ec88191ab6a2f3297...

I don't want to read this story. I probably want to read one that a human author laboured over.

replies(2): >>44012195 #>>44012474 #
visarga ◴[] No.44012195[source]
It would be a mistake to take the banality of current LLM outputs and extrapolate that into the future. Of course they are going to get better. But that is not the point; the point is that in the chat room the human and the LLM spark ideas off each other. Humans come with their own unique life experience and large context; LLMs come with their broad knowledge and skills.
replies(3): >>44012320 #>>44012479 #>>44014795 #
1. bccdee ◴[] No.44014795[source]
> It would be a mistake to take the banality of current LLM outputs and extrapolate that into the future.

Imagine a chef, congenitally unable to taste or smell food, who has nevertheless studied a million recipes. Can they reproduce existing recipes? Sure, if they follow the instructions perfectly. Can they improvise original recipes? I doubt it. Judging by the instructions alone, the recipes they invent may be indistinguishable from real recipes, but this chef can never actually try their food to see if it tastes good. The only safe flavour combinations are the ones they reuse. This is a chef who cannot create.

LLMs are structurally banal. The only plausible route to a machine that can competently produce original art requires the development of a machine that can accurately model humans' aesthetic sensibilities—something which humans themselves cannot do and have no need for, since we already have those aesthetic sensibilities built in.

This is the fundamental error of using an LLM as a ghostwriter. Humans don't only bring inspiration to the table—they also bring the aesthetic judgement which shapes the final product. Sentences written by an LLM are banal sentences, no matter how you prompt it.

replies(2): >>44015889 #>>44017527 #
2. emporas ◴[] No.44015889[source]
Head over to groq.com, use the qwen-qwq-32b model, paste these examples [1] in before your prompt, and then use the following command:

write chapter 1 for a new Novel in Progress, take inspiration from the example Novel but DO NOT Repeat Example. Add vivid imagery, in a dark comedy style. dial up the humor and irony and use first person narration. Fracture sentences and emphasize the unusual: use unusual word orders, such as placing adjectives after nouns or using nouns as verbs, use linguistic voice pyrotechnics, telegraphically leaned and verbal agility in plot building intention, reflection, dialog, action, and describe solar civilization, which lives totally in space. Near a star, but not in a planet, and no gravitational pull anywhere.

[1] https://gist.github.com/pramatias/953f6e3420f46f31410e8dd3c8...
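The recipe above (few-shot example text pasted in ahead of the instruction) can be sketched as a chat-completion payload. This is a minimal illustration, not the commenter's exact setup: `EXAMPLE_NOVEL` is a placeholder for the gist's contents, `build_request` is a hypothetical helper, and the Groq endpoint noted in the comments is assumed to be its OpenAI-compatible chat API.

```python
# Placeholder for the example novel text from the linked gist.
EXAMPLE_NOVEL = "<contents of the example novel from the gist>"

# The command from the comment above, verbatim in spirit (truncated here).
PROMPT = (
    "write chapter 1 for a new Novel in Progress, take inspiration from "
    "the example Novel but DO NOT Repeat Example. Add vivid imagery, in a "
    "dark comedy style. dial up the humor and irony and use first person "
    "narration."
)

def build_request(example: str, prompt: str,
                  model: str = "qwen-qwq-32b") -> dict:
    """Assemble a chat-completion payload: examples first, then the command."""
    return {
        "model": model,
        "messages": [
            # Few-shot conditioning: the example text precedes the instruction.
            {"role": "user", "content": f"{example}\n\n{prompt}"},
        ],
    }

payload = build_request(EXAMPLE_NOVEL, PROMPT)
# To actually run it, POST this payload with an API key to Groq's
# OpenAI-compatible endpoint, e.g.:
#   https://api.groq.com/openai/v1/chat/completions
```

The point of the structure is that the model sees the style examples before the instruction, so "take inspiration from the example" has something concrete to bind to.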

replies(1): >>44015972 #
3. techno_tsar ◴[] No.44015972[source]
This is unreadable slop.
replies(1): >>44016179 #
4. emporas ◴[] No.44016179{3}[source]
Depending on the story, the examples have to be adjusted. But of course, human logical reasoning cannot be replicated just like that by the machines.

The real question is this: suppose a person, at any point in the last 100 years, was great at reasoning but had zero knowledge. That person might never have attended any school and be almost illiterate, but his reasoning is top notch. I don't know if you are familiar with Sultan Khan [1], for example.

With no formal training through which to absorb knowledge, that person is economically sunk. There is no chance of being competitive at anything, not involving muscles anyway. Now suppose that this person can complement his lack of knowledge with a magical knowledge machine. Suddenly he is ahead of the competition, even competition involving people with 10 PhDs or doctors with 30 years of experience.

[1] https://en.wikipedia.org/wiki/Sultan_Khan_(chess_player)

5. imperfect_blue ◴[] No.44017527[source]
As an amateur home cook, I find current LLMs incredibly useful as a sounding board for on-the-fly recipe modifications: for allergies and food sensitivities, adapting preparation methods to available equipment, or substituting produce that is out of season. It may not be able to taste the final product, but its reasoning about what is likely to work (and what isn't) has not steered me wrong so far.