323 points by lermontov | 2 comments
mmastrac No.41906276
I started a quick transcription here -- not enough time to complete more than half the first column, but some scans and very rough OCR are here if anyone is interested in contributing:

https://github.com/mmastrac/gibbet-hill

Top and bottom halves of the page in the repo here:

https://github.com/mmastrac/gibbet-hill/blob/main/scan-1.png

https://github.com/mmastrac/gibbet-hill/blob/main/scan-2.png

EDIT: If you have access to a multi-modal LLM, feeding it the rough transcription plus the column scan, with the instruction "OCR this text, keep linebreaks", gives a _very good_ result.
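If anyone wants to reproduce this programmatically, here's a minimal sketch of how the request could be assembled, assuming the OpenAI Python SDK's chat-completions image format. The model name, file paths, and helper name are placeholders, not from the thread:

```python
import base64

def build_ocr_request(image_path: str, rough_text: str) -> list:
    """Build a chat message payload: OCR prompt + base64-encoded page scan."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return [{
        "role": "user",
        "content": [
            # The instruction from the thread, plus the rough draft as context.
            {"type": "text",
             "text": "OCR this text, keep linebreaks. "
                     "A rough prior transcription follows:\n" + rough_text},
            # The scanned column, inlined as a data URL.
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }]

# Usage sketch (requires an API key and the `openai` package):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o",
#     messages=build_ocr_request("scan-1.png", rough_draft))
# print(resp.choices[0].message.content)
```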

EDIT 2: Rough draft, needs some proofreading and corrections:

https://github.com/mmastrac/gibbet-hill/blob/main/story.md

replies(6): >>41906561 #>>41907098 #>>41907235 #>>41908097 #>>41908454 #>>41918290 #
quuxplusone No.41907098
Seems like you don't need an LLM, you just need a human who (1) likes reading Stoker and (2) touch-types. :) I'd volunteer, if I didn't think I'd be duplicating effort at this point.

(I've transcribed various things over the years, including Sonia Greene's Alcestis [1] and Holtzman & Kershenblatt's "Castlequest" source code [2], so I know it doesn't take much except quick fingers and sufficient motivation. :))

[1] https://quuxplusone.github.io/blog/2022/10/22/alcestis/

[2] https://quuxplusone.github.io/blog/2021/03/09/castlequest/

EDIT: ...and as I was writing that, you seem to have finished your transcription. :)

replies(2): >>41907134 #>>41911812 #
eru No.41911812
> Seems like you don't need an LLM, you just need a human who (1) likes reading Stoker and (2) touch-types.

LLMs are becoming cheaper and more accessible than humans with a baseline of literacy.

replies(1): >>41912668 #
notachatbot123 No.41912668
They are also nowhere near as good. Not everything has to be solved by cheap* technological processes.

*: If you ignore the environmental costs.

replies(1): >>41913019 #
eru No.41913019
> They are also nowhere near as good.

They are better than me at many tasks.

> Not everything has to be solved by cheap* technological processes.

> *: If you ignore the environmental costs.

For many tasks, inference on an LLM is a lot cheaper (including for the environment) than keeping a human around to do them. As a baseline, humans by themselves run on around 100 W (just in food calories), but any but the poorest humans also want to consume things like housing and entertainment, which use a lot more power than that.
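For what it's worth, the ~100 W baseline is roughly right for a typical diet. A quick back-of-the-envelope check, assuming 2000 kcal/day:

```python
# Convert a 2000 kcal/day diet into an average power draw.
kcal_per_day = 2000
joules_per_day = kcal_per_day * 4184   # 1 kcal = 4184 J
watts = joules_per_day / 86_400        # seconds in a day
print(f"{watts:.0f} W")                # prints "97 W"
```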

replies(3): >>41913114 #>>41913213 #>>41917033 #
arp242 No.41917033
> For many tasks, inference on an LLM is a lot cheaper (including for the environment) than keeping a human around to do them. As a baseline, humans by themselves run on around 100 W (just in food calories), but any but the poorest humans also want to consume things like housing and entertainment, which use a lot more power than that.

Obviously not true because that human is alive regardless, and has mostly the same base energy needs no matter what they're doing.

Reducing humans to just energy-using machines is an absolutely insane misanthropic take.

replies(1): >>41920739 #
eru No.41920739
Huh? I am talking about the environmental footprint for this activity.

The human would presumably do something else they enjoy doing more, if a machine took that specific job.