
159 points jbredeche | 5 comments
1. xnx ◴[] No.45531447[source]
We are at a weird moment where the latency of the response is slow enough that we're anthropomorphizing AI code assistants into employees. We don't talk about image generation this way. With images, it's batching up a few jobs and reviewing the results later. We don't say "I spun up a bunch of AI artists."
replies(2): >>45531611 #>>45532016 #
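The batch-and-review workflow described above can be sketched in a few lines. This is a minimal illustration, not any real image-generation API; `generate_image` is a hypothetical stand-in for a provider call:

```python
from concurrent.futures import ThreadPoolExecutor

def generate_image(prompt: str) -> str:
    # Stand-in for a real image-generation API call;
    # returns a fake result identifier for illustration.
    return f"image_for:{prompt}"

prompts = ["a red fox", "a city at dusk", "a paper boat"]

# Fire off all jobs at once, then review the results afterwards --
# a batch of tasks, not a team of "AI artists".
with ThreadPoolExecutor(max_workers=len(prompts)) as pool:
    results = list(pool.map(generate_image, prompts))

for prompt, result in zip(prompts, results):
    print(prompt, "->", result)
```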
2. radarsat1 ◴[] No.45531611[source]
Are there any semi-autonomous agentic systems for image generation? I feel like it's still mostly a one-shot deal, but maybe there's an idea there.

I guess Adobe is working on it. Maybe Figma too.

replies(1): >>45531654 #
3. xnx ◴[] No.45531654[source]
That's part of my point. You don't need to conceptualize something as an "agent" that goes off and does work on its own when the latency is less than 2 seconds.
4. laterium ◴[] No.45532016[source]
As a follow-up, how would this workflow feel if the LLM generation were instantaneous or cost nothing? What would the new bottleneck be? Running the tests? Network speed? The human reviewer?
replies(1): >>45532837 #
5. simonw ◴[] No.45532837[source]
You can get a glimpse of that by trying one of the wildly performant LLM providers - most notably Cerebras and Groq, or the Gemini Diffusion preview.

I have videos showing Cerebras: https://simonwillison.net/2024/Oct/31/cerebras-coder/ and Gemini Diffusion: https://simonwillison.net/2025/May/21/gemini-diffusion/