
323 points lermontov | 10 comments
mmastrac ◴[] No.41906276[source]
I started a quick transcription here -- not enough time to complete more than half the first column, but some scans and very rough OCR are here if anyone is interested in contributing:

https://github.com/mmastrac/gibbet-hill

Top and bottom halves of the page in the repo here:

https://github.com/mmastrac/gibbet-hill/blob/main/scan-1.png https://github.com/mmastrac/gibbet-hill/blob/main/scan-2.png

EDIT: If you have access to a multi-modal LLM, the rough transcription + the column scan and the instruction to "OCR this text, keep linebreaks" gives a _very good_ result.

EDIT 2: Rough draft, needs some proofreading and corrections:

https://github.com/mmastrac/gibbet-hill/blob/main/story.md
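The "OCR this text, keep linebreaks" trick above can be sketched in a few lines. This is a hypothetical sketch assuming the OpenAI Python SDK and the `gpt-4o` model; any vision-capable LLM API would work the same way (the helper name `build_ocr_request` is mine, not from the repo):

```python
# Package a page scan plus the OCR instruction into a chat-completions
# payload for a multimodal model. Only the payload construction is shown;
# the actual API call needs a key and is left commented out.
import base64

def build_ocr_request(image_path: str,
                      prompt: str = "OCR this text, keep linebreaks") -> dict:
    """Encode the scan as a data URL and pair it with the OCR prompt."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "model": "gpt-4o",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

# To actually send it (requires OPENAI_API_KEY):
# from openai import OpenAI
# resp = OpenAI().chat.completions.create(**build_ocr_request("scan-1.png"))
# print(resp.choices[0].message.content)
```

Feeding the rough transcription back in alongside the image (as a second text part) is what nudges the model toward correcting OCR errors rather than hallucinating fresh ones.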

replies(6): >>41906561 #>>41907098 #>>41907235 #>>41908097 #>>41908454 #>>41918290 #
quuxplusone ◴[] No.41907098[source]
Seems like you don't need an LLM, you just need a human who (1) likes reading Stoker and (2) touch-types. :) I'd volunteer, if I didn't think I'd be duplicating effort at this point.

(I've transcribed various things over the years, including Sonia Greene's Alcestis [1] and Holtzman & Kershenblatt's "Castlequest" source code [2], so I know it doesn't take much except quick fingers and sufficient motivation. :))

[1] https://quuxplusone.github.io/blog/2022/10/22/alcestis/

[2] https://quuxplusone.github.io/blog/2021/03/09/castlequest/

EDIT: ...and as I was writing that, you seem to have finished your transcription. :)

replies(2): >>41907134 #>>41911812 #
eru ◴[] No.41911812[source]
> Seems like you don't need an LLM, you just need a human who (1) likes reading Stoker and (2) touch-types.

LLMs are increasingly becoming cheaper and more accessible than humans with a baseline of literacy.

replies(1): >>41912668 #
1. notachatbot123 ◴[] No.41912668[source]
They are also nowhere near as good. Not everything has to be solved by cheap* technological processes.

*: If you ignore the environmental costs.

replies(1): >>41913019 #
2. eru ◴[] No.41913019[source]
> They are also nowhere near as good.

They are better than me at many tasks.

> Not everything has to be solved by cheap* technological processes.

> *: If you ignore the environmental costs.

For many tasks, inference on an LLM is a lot cheaper (including for the environment) than keeping a human around to do them. As a baseline, humans by themselves take around 100W (just in food calories), but anyone but the poorest human also wants to consume eg housing and entertainment that consumes a lot more power than that.
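A back-of-the-envelope version of the claim above, for concreteness. The 100 W human baseline is from the comment itself; the transcription time and the LLM's per-page inference energy are assumptions I've picked for illustration, not measurements:

```python
# Rough energy comparison: human transcription vs. LLM inference for one
# page. All figures except the 100 W baseline are illustrative guesses.
HUMAN_BASELINE_W = 100        # ~100 W in food calories alone (from the comment)
MINUTES_PER_PAGE_HUMAN = 30   # assumed: time to hand-transcribe one page
LLM_WH_PER_PAGE = 1.0         # assumed: order-of-magnitude inference energy

human_wh = HUMAN_BASELINE_W * MINUTES_PER_PAGE_HUMAN / 60  # watt-hours
ratio = human_wh / LLM_WH_PER_PAGE

print(f"human: ~{human_wh:.0f} Wh/page, LLM: ~{LLM_WH_PER_PAGE} Wh/page, "
      f"ratio ~{ratio:.0f}x")
```

The replies below attack exactly the weak point of this arithmetic: the human's 100 W is spent whether or not they do the transcribing, so it only counts as a marginal cost if you treat their time as freed up for something else.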

replies(3): >>41913114 #>>41913213 #>>41917033 #
3. homebrewer ◴[] No.41913114[source]
This feels like it was taken from Brave New World. Humans are already around; unless we're going to kill one off for every job replaced by an LLM, I'm not seeing how this is going to reduce environmental footprint.
replies(2): >>41920734 #>>41993109 #
4. CoastalCoder ◴[] No.41913213[source]
If you're looking at it simply from a resource standpoint, we should ask what those humans would be doing otherwise.

I'm assuming that powering them down isn't a viable option, unlike with GPUs in a datacenter.

replies(2): >>41913861 #>>41922257 #
5. throwaway0123_5 ◴[] No.41913861{3}[source]
> I'm assuming that powering them down isn't a viable option

Sadly that might be assuming too much... here and on reddit I've seen a handful of people who have said that we should continue with AI progress even if it causes the extinction of humans, because we'll have ~"contributed to spreading intelligence throughout the universe and it doesn't really matter if it is human or not."

With that as the extreme end of the spectrum, I suspect the group of people who simply aren't considering what happens to obsoleted humans is much larger, and corporations certainly haven't demonstrated much interest in caring for those who technology has obsoleted in the past.

Tbh it is really disheartening to see so many technologists who seemingly only care about technology for its own sake.

6. arp242 ◴[] No.41917033[source]
> For many tasks, inference on an LLM is a lot cheaper (including for the environment) than keeping a human around to do them. As a baseline, humans by themselves take around 100W (just in food calories), but anyone but the poorest human also wants to consume eg housing and entertainment that consumes a lot more power than that.

Obviously not true because that human is alive regardless, and has mostly the same base energy needs no matter what they're doing.

Reducing humans to just energy-using machines is an absolutely insane misanthropic take.

replies(1): >>41920739 #
7. eru ◴[] No.41920734{3}[source]
That's a weird conclusion to draw. Are you parodying Brave New World here? Because they use a lot of human labour. (The book talks about not using labour-saving devices, because that would give people too much free time, but it also talks about not breeding only 'alphas', because they wouldn't want to do the menial work. It leaves the reader to figure out that you should combine both of the failed ideas to get one that works.)

We can reduce the environmental footprint of specific activities by replacing humans. Yes, we would only reduce the _overall_ footprint by reducing the number of humans.

8. eru ◴[] No.41920739{3}[source]
Huh? I am talking about the environmental footprint for this activity.

The human would presumably do something else they enjoy doing more, if a machine took that specific job.

9. eru ◴[] No.41922257{3}[source]
Yes, opportunity costs are important.

Presumably the humans would be enjoying some other activity. E.g. they could be working on carbon-capture projects? Or producing electric power by pedaling, etc. I don't know.

I was purely talking about the environmental impact of this one activity.

10. cultureswitch ◴[] No.41993109{3}[source]
Reminds me of this technically true but still bizarre finding about electric bikes being ultimately more energy-efficient than human-powered bikes.