
323 points by lermontov
mmastrac No.41906276
I started a quick transcription here -- not enough time to complete more than half the first column, but some scans and very rough OCR are here if anyone is interested in contributing:

https://github.com/mmastrac/gibbet-hill

Top and bottom halves of the page in the repo here:

https://github.com/mmastrac/gibbet-hill/blob/main/scan-1.png https://github.com/mmastrac/gibbet-hill/blob/main/scan-2.png

EDIT: If you have access to a multi-modal LLM, the rough transcription + the column scan and the instruction to "OCR this text, keep linebreaks" gives a _very good_ result.

EDIT 2: Rough draft, needs some proofreading and corrections:

https://github.com/mmastrac/gibbet-hill/blob/main/story.md
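
The "rough transcription + column scan + instruction" workflow above can be sketched roughly as follows. This is a hypothetical illustration, not the exact setup used: the file names (`scan-1.png`, `rough.txt`) and the choice of the OpenAI Python SDK are assumptions; any multimodal model with image input would do.

```python
import base64

def build_ocr_request(scan_path: str, rough_transcript: str) -> list:
    """Build a chat message pairing a page scan with a rough transcription,
    asking the model to OCR the image while preserving line breaks."""
    with open(scan_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    return [{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "OCR this text, keep linebreaks. A rough transcription "
                     "for reference:\n\n" + rough_transcript},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }]

# Sending it (assumes the openai package and an API key; model name is an
# assumption, pick whatever multimodal model you have access to):
#
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(
#       model="gpt-4o",
#       messages=build_ocr_request("scan-1.png", open("rough.txt").read()))
#   print(resp.choices[0].message.content)
```

Supplying the rough human transcription alongside the scan gives the model an anchor for hard-to-read words, which is why the combination beats either input alone.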

quuxplusone No.41907098
Seems like you don't need an LLM, you just need a human who (1) likes reading Stoker and (2) touch-types. :) I'd volunteer, if I didn't think I'd be duplicating effort at this point.

(I've transcribed various things over the years, including Sonia Greene's Alcestis [1] and Holtzman & Kershenblatt's "Castlequest" source code [2], so I know it doesn't take much except quick fingers and sufficient motivation. :))

[1] https://quuxplusone.github.io/blog/2022/10/22/alcestis/

[2] https://quuxplusone.github.io/blog/2021/03/09/castlequest/

EDIT: ...and as I was writing that, you seem to have finished your transcription. :)

eru No.41911812
> Seems like you don't need an LLM, you just need a human who (1) likes reading Stoker and (2) touch-types.

LLMs are becoming cheaper and more accessible than humans with a baseline of literacy.

notachatbot123 No.41912668
They are also nowhere near as good. Not everything has to be solved by cheap* technological processes.

*: If you ignore the environmental costs.

eru No.41913019
> They are also nowhere near as good.

They are better than me at many tasks.

> Not everything has to be solved by cheap* technological processes.

> *: If you ignore the environmental costs.

For many tasks, inference on an LLM is a lot cheaper (including for the environment) than keeping a human around to do them. As a baseline, a human's metabolism runs at around 100W (just in food calories), but anyone but the poorest human also consumes e.g. housing and entertainment that take a lot more power than that.

CoastalCoder No.41913213
If you're looking at it simply from a resource standpoint, we should ask what those humans would be doing otherwise.

I'm assuming that powering them down isn't a viable option, unlike with GPUs in a datacenter.

eru No.41922257
Yes, opportunity costs are important.

Presumably the humans would be enjoying some other activity. E.g. they could be working on carbon-capture projects, or producing electric power via pedaling, etc. I don't know.

I was purely talking about the environmental impact of this one activity.