
    Google is winning on every AI front

    (www.thealgorithmicbridge.com)
    993 points by vinhnx | 12 comments
    codelord No.43661966
    As an ex-OpenAI employee, I agree with this. Most of the top ML talent at OpenAI has already left to either do their own thing or join other startups. A few are still there, but I doubt they'll be around in a year. The main successful product from OpenAI is the ChatGPT app, but there's a limit to how much you can charge people in subscription fees. I think people will soon expect this service to be provided for free, and ads will become the main way to make money from chatbots. The whole time I was at OpenAI until now, GOOG has been the only individual stock I've held. Despite the threat to their search business, I think they'll bounce back because they have a lot of cards to play. OpenAI is an annoyance for Google because they are willing to burn money to get users. Google can't burn money as easily, since they already have billions of users, and they are also a public company that has to answer to investors. But I doubt that OpenAI's investors would sign up to hand over more money to be burned in a year. Google just needs to ease off on the red tape and make their innovations available to users as fast as they can. (And don't get me started on Sam Altman.)
    replies(23): >>43661983 #>>43662449 #>>43662490 #>>43662564 #>>43662766 #>>43662930 #>>43662996 #>>43663473 #>>43663586 #>>43663639 #>>43663820 #>>43663824 #>>43664107 #>>43664364 #>>43664519 #>>43664803 #>>43665217 #>>43665577 #>>43667759 #>>43667990 #>>43668759 #>>43669034 #>>43670290 #
    imiric No.43662490
    > I think people will soon expect this service to be provided for free, and ads will become the main way to make money from chatbots.

    I also think adtech corrupting AI is inevitable, but I dread that future. Chatbots are much more personal than websites, and users are expected to give them deeply personal data. Their output containing ads would be far more effective at psychological manipulation than traditional ads are. It would also be far more profitable, so I'm sure marketers are salivating at this opportunity, and adtech masterminds are already hard at work to make it a reality.

    The repercussions of this will be much greater than we can imagine. I would love to be wrong, so I'm open to being convinced otherwise.

    replies(6): >>43662666 #>>43663407 #>>43663499 #>>43663987 #>>43664442 #>>43665390 #
    wkat4242 No.43663499
    Yeah, me too, especially with Google as a leader, because they corrupt everything.

    I hope local models remain viable. I don't think ever-expanding model size is the way forward anyway.

    replies(2): >>43663540 #>>43663760 #
    1. coliveira No.43663540
    Once again, our hope is for the Chinese to continue driving the open models, because if it depends on big American companies, the future will be one of dependency on closed AI models.
    replies(3): >>43663620 #>>43663859 #>>43664675 #
    2. imiric No.43663620
    You can't be serious... You think models built by companies from an autocracy are somehow better? I suppose their biases and censorship are easier to spot, but I wouldn't trade one form of influence for another.

    Besides, Meta is currently the leader in open-source/open-weight models. There's no reason US companies can't continue to innovate in this space.

    replies(1): >>43663805 #
    3. JKCalhoun No.43663805
    To play devil's advocate: I have a sense that a state LLM would be untrustworthy when the query is ideological, but if it is ad-focused, a capitalist LLM may well corrupt every chat.
    replies(2): >>43665246 #>>43667183 #
    4. chuckadams No.43663859
    Ask DeepSeek what happened in Tiananmen Square in 1989 and get back to me about that "open" thing.
    replies(2): >>43664129 #>>43667204 #
    5. coliveira No.43664129
    Who cares? Only ideologues care about this.
    replies(2): >>43664453 #>>43664711 #
    6. chuckadams No.43664453{3}
    Caring about truth is indeed obsolete. I'm dropping out of this century.
    replies(1): >>43666220 #
    7. JSR_FDED No.43664675
    I’m not sure if it is the Chinese models themselves that will save us, or the effect they have of encouraging others to open-source their models too.

    But I think we have to get away from the thinking that “Chinese models” are somehow created by the Chinese state and from an adversarial standpoint. These are models created by Chinese companies, just like those from American and European companies.

    8. wkat4242 No.43664711{3}
    Yeah, I'm sure every Chinese person knows exactly what happened there.

    It's not really about suppressing the knowledge; it's about suppressing people talking about it and making it a point in the media, etc. The CCP knows how powerful organized people can be; that is how they came to power, after all.

    9. signatoremo No.43665246{3}
    The thing is, Chinese LLMs are no strangers to ad-focused business models either, such as those from Alibaba, Tencent, or ByteDance. Now, a North Korean model may be what you want.
    10. mdp2021 No.43666220{4}
    > Caring about truth

    I suggest reducing our tolerance for the insistence that all opinions are legitimate. Normally, that is done through active debate and rebuttal. The poison has spread through echo chambers and a lack of direct, strong replies.

    In other terms: they let it happen. All the deliriousness of especially the past years was allowed to happen through silence, as if with impotent shrugs...

    (By the way: I am not talking about "reticence", which is the occasional context here; I am talking about deliriousness, which is much worse than circumventing discussion of history. The real current issue is that of "reinventing history".)

    11. fragmede No.43667183{3}
    Which is why we can't let Mark Zuckerberg co-opt the term "open source". If we can't see the code and the dataset used to align the model during training, I don't care that you're giving it away for free; it's not open source!
    12. fragmede No.43667204
    How about we ask college students in America on visas about their opinions on Palestine instead?