70 points Zephyrion | 2 comments

Hi HN,

After spending some time with the new `flux kontext dev` model, I realized its most powerful capabilities aren't immediately obvious; it's easy to miss its potential if you only scratch the surface.

I went deep and curated a collection of what I think are its most interesting use cases – things like targeted text removal, subtle photo restoration, and creative style transfers.

I felt that simply writing about them wasn't enough. The best way to understand the value is to see it and try it for yourself.

That's why I built FluxKontextLab (https://fluxkontextlab.com).

On the site, I've presented these curated examples with before-and-after comparisons. More importantly, there's an interactive playground right there, so you can immediately test these ideas or your own prompts on your own images.

My goal is to share what this model is capable of beyond the basics.

It's still an early project. I'd love for you to take a look and share your thoughts or any cool results you generate.

1. vunderba No.44517350
Kontext's ability to make InstructPix2Pix [1] level changes to isolated sections of an image, without affecting the rest of it, is a game changer. It saves a ton of time by skipping the manual masking/inpainting workflow entirely.

About a month ago I put together a quick before/after set of images that I used Kontext to edit. It even works on old grainy film footage.

https://specularrealms.com/ai-transcripts/experiments-with-f...

> My goal is to share what this model is capable of beyond the basics.

You might be interested to know that it also appears to have limited support for uploading and compositing multiple images together:

https://fal.ai/models/fal-ai/flux-pro/kontext/max/multi
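For anyone who wants to try the multi-image endpoint programmatically, here's a minimal sketch using fal's Python client. The argument names (`prompt`, `image_urls`) and the response shape are my assumptions based on how fal's endpoints are typically structured — check the endpoint's schema on fal.ai before relying on them:

```python
# Hypothetical sketch: compositing two images with the Kontext multi endpoint
# via the fal_client library (pip install fal-client; needs FAL_KEY set in the
# environment). Payload/response field names are assumptions, not verified.

def build_kontext_args(prompt: str, image_urls: list[str]) -> dict:
    """Assemble the request payload for the multi-image endpoint."""
    return {"prompt": prompt, "image_urls": image_urls}

def composite(prompt: str, image_urls: list[str]) -> str:
    import fal_client  # imported here so the pure helper above has no deps
    result = fal_client.subscribe(
        "fal-ai/flux-pro/kontext/max/multi",
        arguments=build_kontext_args(prompt, image_urls),
    )
    # Assumed response shape: a list of generated images with URLs.
    return result["images"][0]["url"]

if __name__ == "__main__":
    url = composite(
        "Place the product from the first image onto the table in the second",
        ["https://example.com/product.png", "https://example.com/scene.png"],
    )
    print(url)
```

The example URLs and the prompt are placeholders; the interesting part is that compositing is driven entirely by the text instruction, with no mask input at all.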

[1] https://github.com/timothybrooks/instruct-pix2pix

2. mpeg No.44520685
I had a project for a big brand a couple of years ago where we experimented with genai and inpainting, and it was a huge hassle to get working right: it required a big ComfyUI pipeline with masking, then inpainting, then relighting to make it all look natural, etc.

It's crazy how fast genai moves. Now you can do all of that with just Flux, and the end result looks extremely high quality.