
289 points by lermontov | 1 comment
mmastrac No.41906276
I started a quick transcription here -- not enough time to complete more than half the first column, but some scans and very rough OCR are here if anyone is interested in contributing:

https://github.com/mmastrac/gibbet-hill

Top and bottom halves of the page in the repo here:

https://github.com/mmastrac/gibbet-hill/blob/main/scan-1.png

https://github.com/mmastrac/gibbet-hill/blob/main/scan-2.png

EDIT: If you have access to a multi-modal LLM, the rough transcription + the column scan and the instruction to "OCR this text, keep linebreaks" gives a _very good_ result.
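That "rough transcription + column scan + instruction" workflow can be sketched in a few lines. This is an illustrative example assuming the OpenAI Python SDK and a vision-capable model; the model name, prompt wording, and helper function are my own assumptions, not what mmastrac actually ran:

```python
import base64

def build_ocr_request(image_path: str, rough_draft: str) -> dict:
    """Build a chat-completions payload asking a multimodal model to OCR a scan.

    The rough OCR draft is included in the prompt so the model can
    correct it against the image rather than transcribe from scratch.
    """
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "model": "gpt-4o",  # any vision-capable model
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "OCR this text, keep linebreaks. "
                         "A rough draft follows; fix its errors:\n" + rough_draft},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    }

# Usage (needs an API key, so not run here):
# from openai import OpenAI
# resp = OpenAI().chat.completions.create(**build_ocr_request("scan-1.png", draft))
# print(resp.choices[0].message.content)
```

Feeding the rough draft alongside the image tends to work better than either input alone, since the model only has to reconcile the two.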

EDIT 2: Rough draft, needs some proofreading and corrections:

https://github.com/mmastrac/gibbet-hill/blob/main/story.md

quuxplusone No.41907098
Seems like you don't need an LLM, you just need a human who (1) likes reading Stoker and (2) touch-types. :) I'd volunteer, if I didn't think I'd be duplicating effort at this point.

(I've transcribed various things over the years, including Sonia Greene's Alcestis [1] and Holtzman & Kershenblatt's "Castlequest" source code [2], so I know it doesn't take much except quick fingers and sufficient motivation. :))

[1] https://quuxplusone.github.io/blog/2022/10/22/alcestis/

[2] https://quuxplusone.github.io/blog/2021/03/09/castlequest/

EDIT: ...and as I was writing that, you seem to have finished your transcription. :)

eru No.41911812
> Seems like you don't need an LLM, you just need a human who (1) likes reading Stoker and (2) touch-types.

LLMs are becoming ever cheaper and more accessible than humans with a baseline of literacy.

notachatbot123 No.41912668
They are also nowhere near as good. Not everything has to be solved by cheap* technological processes.

*: If you ignore the environmental costs.

eru No.41913019
> They are also nowhere near as good.

They are better than me at many tasks.

> Not everything has to be solved by cheap* technological processes.

> *: If you ignore the environmental costs.

For many tasks, LLM inference is a lot cheaper (including for the environment) than keeping a human around to do them. As a baseline, a human runs on roughly 100 W in food calories alone, and anyone but the poorest human also consumes things like housing and entertainment that take far more power than that.
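For scale, here is a back-of-the-envelope version of that comparison. The 100 W figure comes from the comment above; the two-hour typing time and the per-query inference energy are purely illustrative assumptions, not measurements:

```python
# Rough energy comparison: human baseline vs. hypothetical LLM inference.
HUMAN_BASELINE_W = 100   # ~food-calorie power draw of a person (figure from the comment)
TRANSCRIBE_HOURS = 2     # assumed time for a human to type one story (illustrative)
LLM_WH_PER_QUERY = 3     # assumed energy per multimodal OCR query (illustrative)
NUM_QUERIES = 4          # assumed one query per column scan (illustrative)

human_wh = HUMAN_BASELINE_W * TRANSCRIBE_HOURS  # food calories only, per eru's baseline
llm_wh = LLM_WH_PER_QUERY * NUM_QUERIES

print(f"human: {human_wh} Wh, LLM: {llm_wh} Wh")
```

Note the comparison cuts both ways: it understates the human side (housing, entertainment, commuting) as eru argues, but also omits amortized training cost, which is part of the environmental objection.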

homebrewer No.41913114
This feels like it was taken from Brave New World. Humans are already around; unless we're going to kill one off for every job replaced by an LLM, I'm not seeing how this is going to reduce the environmental footprint.