    Meta's open AI hardware vision (engineering.fb.com)

    212 points by GavCo | 17 comments
    1. Gee101 ◴[] No.41851793[source]
    Zuckerberg and Facebook get a lot of hate, but at least they invest a lot in engineering and open source.
    replies(6): >>41852156 #>>41852273 #>>41852671 #>>41852972 #>>41853469 #>>41855820 #
    2. threeseed ◴[] No.41852156[source]
    Also the fact that he is delivering on Fediverse integration with Threads.

    I don't think most people expected that to happen so quickly or frankly at all.

    replies(2): >>41852333 #>>41852347 #
    3. amelius ◴[] No.41852273[source]
    I personally think our academia should be training and curating these kinds of models and the data they are based on, but this is an acceptable second best.
    replies(3): >>41853103 #>>41853566 #>>41854970 #
    4. KaiserPro ◴[] No.41852333[source]
    Neither did the company.

    Most new products fail at Meta because they become a "priority": thousands of engineers get thrown at the problem, and the effort bogs down in managing a massive oversubscription of engineers relative to the useful work available.

    Threads happened because a few people managed to convince each other to take a risk and build an Instagram/Mastodon chimera. They managed to show enough progress to continue without getting fucked in the performance review, but not enough for an exec to get excited about building an empire around it.

    5. oersted ◴[] No.41852347[source]
    VR also gets a lot of hate, and they definitely dropped the ball on user-facing software, but Meta is doing very substantial, deep and valuable long-term R&D on VR hardware. They are also doing a lot on systems software, with their OS and all the low-key AI that enables excellent real-time tracking, rendering and AR.

    It might not all be open-source, and they are doing it with an expectation of long-term profit, but they are earnestly pushing the horizons (pun intended) of the field and taking on lots of risk for everyone else.

    It's undeniable now that they are a serious and innovative engineering organization, while Google is rapidly losing that reputation.

    6. KptMarchewa ◴[] No.41852671[source]
    Aggressively commoditizing the complement has been a good strategy for them.
    replies(1): >>41853376 #
    7. diggan ◴[] No.41852972[source]
    > and open source

    I'm kind of split about this. Yes, Facebook has done a lot of great Open Source in the past, and I'm sure they'll do more great Open Source in the future.

    But it's really hard to see them in a positive light when they keep misleading people about Llama: they publish blog posts about how important Open Source is, yet refuse to actually release Llama as Open Source, refuse to explain why they consider it Open Source when no one else does, and refuse to take a step back and understand how the FOSS community feels about being misled like this.

    replies(1): >>41854421 #
    8. 123yawaworht456 ◴[] No.41853103[source]
    there's already https://www.goody2.ai/chat
    9. lossolo ◴[] No.41853376[source]
    > good strategy for them.

    For everyone, besides OpenAI etc.

    10. the_clarence ◴[] No.41853469[source]
    The hate is mostly from people who don't use their products. Plenty of people are happy users.
    replies(1): >>41854824 #
    11. asdff ◴[] No.41853566[source]
    IMO there are much better ways to spend $300 million on research than firing up a cluster for 60 days to train on internet content.
    replies(1): >>41855080 #
    12. lolinder ◴[] No.41854421[source]
    What a lot of people complain about with Llama is the fact that the weights are open but not the training data and training code. That feels like a red herring to me—code is data and data is code, and we shouldn't require someone to be developing entirely in the open in order for the output to be open source.

    The weights are the "preferred form of the work for making modifications to it", to quote the GPL. The rest is just the infrastructure used to produce the work.

    Where "open source" is misleading with Llama is that it's restricted to companies under a certain size and has restrictions for what you can and can't do with it. That kind of restriction undermines the freedoms promised by the phrase "open source", and it's concerning to me that people have gotten so fixating on weights vs data when there's a big gap in the freedoms offered on the weights.

    13. esafak ◴[] No.41854824[source]
    Do you realize that's a tautology? Why would you use their products if you hate them?
    replies(1): >>41854903 #
    14. Bayko ◴[] No.41854903{3}[source]
    Why not?! I absolutely hate reddit but still can't stop using it.
    15. mistrial9 ◴[] No.41854970[source]
    Basically no one in the entire world was willing to spend the kind of money on massive compute and data centers that Meta did spend, is spending, and will spend. The actual numbers are (I think) rarely published, and so large that they're hard to comprehend.
    16. nickpsecurity ◴[] No.41855080{3}[source]
    Spending $100 million one time on a GPT-4-level model that is open-source would help with a lot of that research, especially once third-party groups fine-tune it or layer their tools on it.

    I think the Copilot-equivalent tools alone would quickly make it pay for itself in productivity gains. Research summaries, PDF extraction, and OCR would add more to that.

    17. deepfriedchokes ◴[] No.41855820[source]
    Zuckerberg does what’s good for Zuckerberg. We should all be clear by now about what kind of person this guy is. If open source is beneficial he’ll do that, but when it stops being beneficial you can count on him to do what’s best for himself at the expense of the general public.

    Zuckerberg is just following the Bezos strategy of someone else’s margin being his opportunity. This open source move is predatory.