
221 points lnyan | 14 comments
    rushingcreek ◴[] No.44397235[source]
    It doesn't seem to have open weights, which is unfortunate. One of Qwen's strengths historically has been their open-weights strategy, and it would have been great to have a true open-weights competitor to 4o's autoregressive image gen. There are so many interesting research directions that are only possible if we can get access to the weights.

    If Qwen is concerned about recouping its development costs, I suggest looking at BFL's Flux Kontext Dev release from the other day as a model: let researchers and individuals get the weights for free and let startups pay for a reasonably-priced license for commercial use.

    replies(4): >>44397843 #>>44397858 #>>44397893 #>>44398602 #
    1. Jackson__ ◴[] No.44397843[source]
    It's also very clearly trained on OAI outputs, which you can tell from the orange tint to the images[0]. Did they even attempt to come up with their own data?

    So it is trained off OAI, as closed off as OAI and most importantly: worse than OAI. What a bizarre strategy to gate-keep this behind an API.

    [0]

    https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VLo/cas...

    https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VLo/cas...

    https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VLo/cas...

    replies(5): >>44397961 #>>44398084 #>>44398731 #>>44401456 #>>44418786 #
    2. echelon ◴[] No.44397961[source]
    The way they win is to be open. I don't get why China is shutting down open source. It was a knife at the jugular of US tech dominance.

    Both Alibaba and Tencent championed open source (Qwen family of models, Hunyuan family of models), but now they've shut off the releases.

There's totally a play where models become a loss-leader for SaaS/PaaS/IaaS and where they extinguish your closed competition.

Imagine spreading your model that widely and then setting the terms: "do not use in conjunction with closed source models".

    replies(3): >>44398211 #>>44399036 #>>44401766 #
    3. vachina ◴[] No.44398084[source]
    Huh, so orange tint = openAI output? Maybe their training process ended up causing the model to prefer that color balance.
    replies(1): >>44398560 #
    4. diggan ◴[] No.44398211[source]
    > I don't get why China is shutting down open source [...] now they've shut off the releases

    What are you talking about? Feels like a very strong claim considering there are ongoing weight releases, wasn't there one just today or yesterday from a Chinese company?

    5. Jackson__ ◴[] No.44398560[source]
    Here's an extreme example that shows how it continually adds more orange: https://old.reddit.com/r/ChatGPT/comments/1kawcng/i_went_wit...

It's really too close to be anything but a model trained on these outputs; the whole vibe just screams OAI.
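The compounding effect in that Reddit thread can be sketched with a toy model: if each regeneration pass nudges the average color slightly toward orange, the cast compounds over repeated passes. All numbers here are illustrative, not measured properties of any model.

```python
# Toy model of color drift: each "generation" adds a small warm bias
# (more red, slightly more green, less blue), clamped to valid 8-bit range.
# The bias values are made up for illustration.

def regenerate(rgb, bias=(8, 3, -5)):
    """One hypothetical generation pass: nudge channels toward orange."""
    return tuple(max(0, min(255, c + b)) for c, b in zip(rgb, bias))

pixel = (128, 128, 128)  # start from neutral gray
for _ in range(10):      # ten successive regenerations
    pixel = regenerate(pixel)

print(pixel)  # -> (208, 158, 78): red up, green slightly up, blue down
```

Even a bias too small to notice in one pass produces an obvious orange cast after a handful of iterations, which is consistent with the "I went with it" style chains in the linked thread.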

    replies(1): >>44399782 #
    6. VladVladikoff ◴[] No.44398731[source]
    What would be the approximate cost of doing this? How many million API requests must be made? How many tokens in total?
    replies(1): >>44399468 #
    7. yorwba ◴[] No.44399036[source]
    The problem with giving away weights for free while also offering a hosted API is that once the weights are out there, anyone else can also offer it as a hosted API with similar operating costs, but only the releasing company had the initial capital outlay of training the model. So everyone else is more profitable! That's not a good business strategy.

    New entrants may keep releasing weights as a marketing strategy to gain name recognition, but once they have established themselves (and investors start getting antsy about ROI) making subsequent releases closed is the logical next step.
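The asymmetry described above is easy to see with illustrative numbers (all figures hypothetical): both the releasing lab and a copycat host earn the same serving margin, but only the lab paid for training.

```python
# Hypothetical unit economics: trainer vs. copycat host of the same open weights.
training_cost = 50_000_000                    # one-time, paid only by the releasing lab
serving_cost_per_1m_tokens = 0.50             # roughly equal for both hosts
price_per_1m_tokens = 1.00
tokens_served_millions = 200_000_000          # 200T tokens served, in millions

gross_margin = (price_per_1m_tokens - serving_cost_per_1m_tokens) * tokens_served_millions

print(gross_margin)                  # 100,000,000.0 for the copycat
print(gross_margin - training_cost)  # 50,000,000.0 net for the trainer
```

Under these made-up numbers the copycat keeps the entire serving margin while the trainer nets half of it, which is the "everyone else is more profitable" point in a nutshell.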

    replies(1): >>44401469 #
    8. refulgentis ◴[] No.44399468[source]
The most pedantically correct answer is "mu", because both quantities are derivable quantitatively from "How many images do you want to train on?", which in turn is answered by a qualitative question that doesn't admit numbers ("How high quality do you want it to be?").

Let's say it's 100 images because you're doing a quick LoRA. That'd be about $5.00 at medium quality (~$0.05/image) or $1 at low (~$0.01/image).

Let's say you're training a standalone image model. The order of magnitude of input images is ~1B, so $10M at low quality and $50M at medium.

    250 tokens / image for low, ~1000 for medium, which gets us to:

Fastest LoRA? $1-$5, with 25,000-100,000 tokens of output. All the training data for a new image model? $10M-$50M, with 2.5B-10B tokens out.
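The back-of-envelope math above can be written out directly. The per-image prices and token counts are the comment's own assumptions, not published rates.

```python
# Cost/token estimates from the comment's assumed figures.
PRICE_PER_IMAGE = {"low": 0.01, "medium": 0.05}   # $/image (assumed)
TOKENS_PER_IMAGE = {"low": 250, "medium": 1000}   # tokens/image (assumed)

def training_data_cost(n_images, quality):
    """Return (dollar cost, total output tokens) for generating n_images."""
    return (n_images * PRICE_PER_IMAGE[quality],
            n_images * TOKENS_PER_IMAGE[quality])

print(training_data_cost(100, "low"))              # quick LoRA: (1.0, 25000)
print(training_data_cost(100, "medium"))           # quick LoRA: (5.0, 100000)
print(training_data_cost(1_000_000_000, "low"))    # full model: (10000000.0, 250000000000)
```

This reproduces the $1-$5 LoRA range and the ~$10M low-quality floor for a billion-image dataset.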

    9. acheong08 ◴[] No.44399782{3}[source]
That form of collapse might just be inherent to the methodology. Releasing the weights would be nice so people could figure out why.
    10. roenxi ◴[] No.44401456[source]
There seem to be a lot of AI images on the web these days, and the AI look may have become the single most dominant style, given that AI has created more images than any individual human artist. So they might have trained on such images implicitly rather than synthetically.

    Although theory is not practice. If I were an AI company I'd try to leverage other AI company APIs.

    11. roenxi ◴[] No.44401469{3}[source]
    That is also how open source works in other contexts. Initially closed source is dominant, then over time other market entrants use OSS solutions to break down the incumbent advantage.

In this case I'm expecting the people with huge pools of capital (the big cloud providers) to push out open models, because the weights become a commodity and people will then rent the providers' servers to multiply them together.

    replies(1): >>44402420 #
    12. rfv6723 ◴[] No.44401766[source]
If you have worked or lived in China, you will know that the Chinese open-source software industry is a total shitshow.

The law in China offers little protection for open-source software. Lots of companies use open-source code in production without a proper license, and there are no consequences.

    Western internet influencers hype up Chinese open-source software industry for clicks while Chinese open-source developers are struggling.

These open-weight model series were planned as free trials from the start; there is no commitment to open source.

    replies(1): >>44403779 #
    13. yorwba ◴[] No.44402420{4}[source]
    Even for a big cloud provider, putting out model weights and hoping that people host with them is unlikely to be as profitable as gating it behind an API that guarantees that people using the model are using their hosted version. How many people self-hosting Qwen models are doing so on Aliyun?
    14. diggan ◴[] No.44403779{3}[source]
    > Western internet influencers hype up Chinese open-source software industry for clicks while Chinese open-source developers are struggling.

That kind of downplays the fact that Chinese open weights, together with Mistral's, are basically the only option for high-quality weights you can run yourself. It's not just influencers "hyping up Chinese open-source"; people go where the options are.

    > there is no commitment to open-source

Welcome to open source all around the world! Plenty of non-Chinese projects start as FOSS and then slowly move to either a fully proprietary or a hybrid model; that isn't exactly new or unexpected. The Western software industry even pioneered a new license (BSL - https://en.wikipedia.org/wiki/Business_Source_License) that tries to look as much like open source as possible while not actually being open source.