
647 points bradgessler | 1 comment
abathologist No.44010933
I think we are going to be seeing a vast partitioning in society in the next months and years.

The process of forming expressions just is the process of conceptual and rational articulation (as per Brandom). Those who misunderstand this -- believing that concepts are ready-made, then encoded and decoded from permutations of tokens, or, worse, having no room to think of reasoning or conceptualization at all -- will be automated away.

I don't mean that their jobs will be automated: I mean that they will cede sapience and resign to becoming robotic. A robot is just a "person whose work or activities are entirely mechanical" (https://www.etymonline.com/search?q=robot).

I'm afraid far too many are captive to the ideology of productionism (which is just a corollary of consumerism). Creative activity is not about content production. The aim of our creation is communication and mutual transformation. Generation of digital artifacts may be useful for these purposes, but most uses seem to assume content production is the point, and that is a dark, sad, dead end.

emporas No.44011643
It is knowledge that gets automated, rather than reasoning.

I was thinking of the first solar civilization, one that lives entirely in space: near a star, but not on a planet, with no gravitational pull anywhere. They build tubes 10 km long, with a dartboard at one end and the players at the other. They throw darts at the board, and each dart takes 5 hours to reach the target. That's their national sport.

Problem is, I have never played darts and I don't know anyone who plays it, so I will ask the LLM to fill in the blanks of how a story based on that game could be constructed. Then I will add my own story on top of that: fix anything that doesn't fit, add some stuff, remove some other stuff, and so on.

For me it saves time: instead of asking people about something, hearing them talk about it, or watching them do it, I do data mining on words. Maybe that's shallower than experiencing it myself or asking people who know about it first-hand, but the time it takes to gather information that's good enough collapses to 5 minutes.

Depending on how you use it, it can enhance human capabilities or, indeed, mute them.

jen729w No.44011741
Oh turns out ChatGPT generates exactly the level of banality that one would expect.

https://chatgpt.com/canvas/shared/6827fcdd3ec88191ab6a2f3297...

I don't want to read this story. I probably want to read one that a human author laboured over.

visarga No.44012195
It would be a mistake to take the banality of current LLM outputs and extrapolate that into the future. Of course they are going to get better. But that is not the point: the point is that in the chat room the human and the LLM spark ideas off each other. Humans come with their own unique life experience and large context; LLMs come with their broad knowledge and skills.
bccdee No.44014795
> It would be a mistake to take the banality of current LLM outputs and extrapolate that into the future.

Imagine a chef, congenitally unable to taste or smell food, who has nevertheless studied a million recipes. Can they reproduce existing recipes? Sure, if they follow the instructions perfectly. Can they improvise original recipes? I doubt it. Judging by the instructions alone, the recipes they invent may be indistinguishable from real recipes, but this chef can never actually try their food to see if it tastes good. The only safe flavour combinations are the ones they reuse. This is a chef who cannot create.

LLMs are structurally banal. The only plausible route to a machine that can competently produce original art requires the development of a machine that can accurately model humans' aesthetic sensibilities—something which humans themselves cannot do and have no need for, since we already have those aesthetic sensibilities built in.

This is the fundamental error of using an LLM as a ghostwriter. Humans don't only bring inspiration to the table—they also bring the aesthetic judgement which shapes the final product. Sentences written by an LLM are banal sentences, no matter how you prompt it.

imperfect_blue No.44017527
As an amateur home cook, I find current LLMs incredibly useful as a sounding board for on-the-fly recipe modifications: for allergies and food sensitivities, adapting preparation methods to available equipment, or substituting for produce not available in season. They may not be able to taste the final product, but their reasoning about what's likely to work (and what isn't) has not steered me wrong so far.