
289 points lermontov | 2 comments
mmastrac No.41906276
I started a quick transcription here -- not enough time to complete more than half the first column, but some scans and very rough OCR are here if anyone is interested in contributing:

https://github.com/mmastrac/gibbet-hill

Top and bottom halves of the page in the repo here:

https://github.com/mmastrac/gibbet-hill/blob/main/scan-1.png https://github.com/mmastrac/gibbet-hill/blob/main/scan-2.png

EDIT: If you have access to a multi-modal LLM, the rough transcription + the column scan and the instruction to "OCR this text, keep linebreaks" gives a _very good_ result.
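
Roughly, the prompt looks like this (a minimal sketch using the OpenAI Python client; the client, model name, and API shape are just one way to do it -- any multi-modal model you can send an image to should work the same way):

    import base64
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    # Encode one column scan as a data URL
    with open("scan-1.png", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any multi-modal model should do
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "OCR this text, keep linebreaks"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    print(response.choices[0].message.content)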

EDIT 2: Rough draft, needs some proofreading and corrections:

https://github.com/mmastrac/gibbet-hill/blob/main/story.md

replies(5): >>41906561 #>>41907098 #>>41907235 #>>41908097 #>>41908454 #
quuxplusone No.41907098
Seems like you don't need an LLM, you just need a human who (1) likes reading Stoker and (2) touch-types. :) I'd volunteer, if I didn't think I'd be duplicating effort at this point.

(I've transcribed various things over the years, including Sonia Greene's Alcestis [1] and Holtzman & Kershenblatt's "Castlequest" source code [2], so I know it doesn't take much except quick fingers and sufficient motivation. :))

[1] https://quuxplusone.github.io/blog/2022/10/22/alcestis/

[2] https://quuxplusone.github.io/blog/2021/03/09/castlequest/

EDIT: ...and as I was writing that, you seem to have finished your transcription. :)

replies(2): >>41907134 #>>41911812 #
eru No.41911812
> Seems like you don't need an LLM, you just need a human who (1) likes reading Stoker and (2) touch-types.

LLMs are increasingly becoming cheaper and more accessible than humans with a baseline of literacy.

replies(1): >>41912668 #
notachatbot123 No.41912668
They are also nowhere near as good. Not everything has to be solved by cheap* technological processes.

*: If you ignore the environmental costs.

replies(1): >>41913019 #
eru No.41913019
> They are also nowhere as good.

They are better than me at many tasks.

> Not everything has to be solved by cheap* technological processes.

> *: If you ignore the environmental costs.

For many tasks, inference on an LLM is a lot cheaper (including for the environment) than keeping a human around to do them. As a baseline, a human body runs on roughly 100 W just in food calories, and all but the poorest humans also consume housing, entertainment, and so on, which take far more power than that.
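
A quick back-of-the-envelope check of that ~100 W figure (assuming a typical intake of about 2000 kcal/day, which is my round number, not something from the thread):

    # Food energy per day expressed as average power
    kcal_per_day = 2000                     # rough adult intake
    joules_per_day = kcal_per_day * 4184    # 1 kcal = 4184 J
    watts = joules_per_day / 86_400         # seconds in a day
    print(round(watts))                     # ~97 W, i.e. roughly 100 W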

replies(2): >>41913114 #>>41913213 #
CoastalCoder No.41913213
If you're looking at it simply from a resource standpoint, we should ask what those humans would be doing otherwise.

I'm assuming that powering them down isn't a viable option, unlike with GPUs in a datacenter.

replies(1): >>41913861 #
throwaway0123_5 No.41913861
> I'm assuming that powering them down isn't a viable option

Sadly that might be assuming too much... here and on reddit I've seen a handful of people who have said that we should continue with AI progress even if it causes the extinction of humans, because we'll have ~"contributed to spreading intelligence throughout the universe and it doesn't really matter if it is human or not."

With that as the extreme end of the spectrum, I suspect the group of people who simply aren't considering what happens to obsoleted humans is much larger, and corporations certainly haven't demonstrated much interest in caring for those whom technology has obsoleted in the past.

Tbh it is really disheartening to see so many technologists who seemingly only care about technology for its own sake.