You should use something like OpenRouter or Portkey for managing fallbacks.
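For anyone curious what that buys you, here's a rough sketch of doing fallbacks by hand against an OpenAI-compatible gateway (the base URL and model IDs below are placeholders, not a recommendation); services like OpenRouter or Portkey essentially do this routing for you server-side:

    # Assumes the `openai` Python client pointed at an OpenAI-compatible endpoint.
    import openai

    client = openai.OpenAI(
        base_url="https://openrouter.ai/api/v1",  # placeholder gateway URL
        api_key="YOUR_KEY",
    )

    # Models to try in order of preference; hypothetical IDs.
    FALLBACK_MODELS = ["openai/gpt-4o", "anthropic/claude-3.5-sonnet"]

    def complete_with_fallback(prompt):
        last_error = None
        for model in FALLBACK_MODELS:
            try:
                resp = client.chat.completions.create(
                    model=model,
                    messages=[{"role": "user", "content": prompt}],
                )
                return resp.choices[0].message.content
            except Exception as err:  # rate limits, outages, etc.
                last_error = err
        raise RuntimeError(f"all fallback models failed: {last_error}")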
Wish they had used a different name.
This would actually be great. So many researchers have a marketing problem when it comes to explaining their work and getting people excited about it.
The content is usually reasonably strong, but the tone is always off, and it never quite understands what a reader/viewer needs to really get to grips with the topic if they don't already have a foundational understanding (though I notice this about a lot of other media outlets with professional science communicators too). It also has poor editorial thinking around which bits are most likely to be interesting and cohesive when considered as part of the whole piece.
But I'm still reasonably convinced that, as AI improves, it ought to be able to replace me with the right workflow/context/prompting. I think there will always be demand for my (and many other writers') talents as they are, so it doesn't really bother me, but it'd be great to extend the work to all the many scientific discoveries that don't get the same attention. If anyone is serious about developing something like this, I'd be interested in partnering with them as someone with domain expertise in science communication who is familiar with prompt engineering (email in bio).
I think you're right about the editorial thinking and the what-do-people-find-interesting parts. But that doesn't have to be solved directly by AI; it's easy enough to sidestep the problem and provide a nice interface for the human-in-the-loop part. I'd imagine that would save you a ton of time by giving you a decent starting point, depending on how much you have to rewrite for tone.
LLMs make that much easier. As I collect primary sources during my drafting/writing phase, I can type up any non-trivial claims I'm making in my script in a separate document, share that with the LLM, and say "Quoting directly from the set of attached PDFs, identifying which document, and on which page the quote comes from, find content which directly supports each of these assertions", and it generally does a great job. At any rate, I have to check each of those quotes for accuracy, but help in _finding_ those quotes in order to pass a stringent fact-checking procedure is invaluable if I didn't scribble down the supporting quotes during my research phase. This is also, by the way, stricter than the fact-checking process for most non-fiction publishing.
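If you'd rather script that than paste it into a chat window, here's a minimal sketch of the same prompt, assuming you've already extracted the PDFs to plain text and have a generic OpenAI-compatible chat endpoint available (the file names and model ID are made up for illustration):

    from pathlib import Path
    import openai

    client = openai.OpenAI(api_key="YOUR_KEY")  # or any OpenAI-compatible endpoint

    # One claim per line, plus pre-extracted source text (keep page markers in the
    # extraction if you want the model to cite page numbers).
    claims = [c for c in Path("claims.txt").read_text().splitlines() if c.strip()]
    sources = {p.name: p.read_text() for p in Path("sources").glob("*.txt")}

    prompt = (
        "Quoting directly from the attached source documents, identifying which "
        "document and which page the quote comes from, find content which directly "
        "supports each of these assertions:\n\n"
        + "\n".join(f"- {c}" for c in claims)
        + "\n\nSOURCES:\n\n"
        + "\n\n".join(f"=== {name} ===\n{text}" for name, text in sources.items())
    )

    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)  # still has to be checked against the PDFs by hand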
I thought it'd be cool to let people vote on ideas that HN Slop came up with, so now you'll see an "i'd invest" button that will let others vote on the idea on a leaderboard.
Hope y'all like it. Keep the feedback coming, I'm listening!
Instant fun! Honestly, whatever tech I'm looking for, I use the built-in search engine of Hacker News first before googling it.
>The content is usually reasonably strong, but the tone is always off, and it never quite understands what a reader/viewer needs
A SOTA model fine-tuned on your choice of transcripts could probably get you most of the way there. There might be a customized, open-weight model already on Hugging Face that meets your needs.
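If you did go the fine-tuning route, the data prep is mostly just pairing inputs with the transcripts you already have; a sketch assuming the chat-style JSONL format most hosted fine-tuning APIs accept (the directory names and system prompt are made up):

    import json
    from pathlib import Path

    # Pair each transcript with the source material it was written from.
    examples = []
    for transcript_path in Path("transcripts").glob("*.txt"):
        source_text = (Path("abstracts") / transcript_path.name).read_text()
        examples.append({
            "messages": [
                {"role": "system", "content": "You write engaging science-video scripts."},
                {"role": "user", "content": source_text},
                {"role": "assistant", "content": transcript_path.read_text()},
            ]
        })

    with open("train.jsonl", "w") as f:
        for example in examples:
            f.write(json.dumps(example) + "\n")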
Now there's a testimonial. I look forward to browsing the source links with each video!
Currently I'm working on an app for that, because that's where I listen to the MP3s anyway.
So "A Young Lady's Illustrated Primer" from Diamond Age, but for devs. A neat idea!
In general, docs ecosystems tend to be heavy on only one of reference, explanation, or tutorial. It would be cool to have a way to write one and get the others.
"DocuQuest: A platform that leverages LLMs to transform and simplify complex technical documentation into interactive, user-friendly learning experiences tailored for developers and engineers."
Not saying that it isn't possible, but stuff like this does need the human touch.
I'm not really sure why modern AI can't do stuff like that anymore. I would guess it's a combination of being whacked with a crowbar to submit to humans (RLHF, though I'm not sure if it affects base models?), alignment stuff, and just being too smart.
Would love to see something like this done with GPT-2 and how it compares.
If the post had said "I made xyz, which can auto-generate domains for these ideas using AI to fully close the slop gap", I wouldn't have minded and might even have appreciated the additional fun.
This is the original hacker ethos: put something out there for others to use, and let people form their own opinion of the thing itself, not of the authoritativeness of its source.
A game of learning your homelab into a cyberpunk mystery adventure | Hacker News https://share.google/WedMuRgx5WgreNqSN
Also, the kind of product you promoted is something I've seen replicated many times, more often than not very badly. Mentioning such a product in an article taking aim at AI slop seemed somewhat ironic.