
899 points georgehill | 4 comments
rvz No.36215936
> Nat Friedman and Daniel Gross provided the pre-seed funding.

Why? Why should VCs get involved again?

They are just going to look for an exit and end up getting acquired by Apple Inc.

Not again.

replies(5): >>36215977 #>>36216061 #>>36216214 #>>36216267 #>>36239156 #
1. okhuman No.36216267
+1. VC involvement in projects like these always pivots the team away from the core competency you'd expect them to deliver, toward some commercialization effort that converts only a tiny fraction of the community yet takes up 60%+ of the core developer team's time.

I don't know why project founders head this way, as the track record of leaders who do ends in disappointing the community at some point. Look to Matt Klein and the Cloud Native Computing Foundation's stewardship of Envoy for a somewhat decent model of how to do this better.

We keep going down the Open Core road, yet it continues to fail communities.

replies(2): >>36216886 #>>36218615 #
2. wmf No.36216886
Developers shouldn't be unpaid slaves to the community.
replies(1): >>36217483 #
3. okhuman No.36217483
You're right. I just wish this decision had been taken to the community; we could all have come together to help and support during these difficult, transitional times. :( Maybe the decision was rushed, or money-related; who knows the actual circumstances.

Here's the Matt K article https://mattklein123.dev/2021/09/14/5-years-envoy-oss/

4. jart No.36218615
Whenever a community project goes commercial, its interests usually stop being aligned with the community's. For example, llama.cpp makes frequent backwards-incompatible changes to its file format. I maintain a fork of ggml in the cosmopolitan monorepo that keeps support for the old file formats. You can build and use it as follows:

    git clone https://github.com/jart/cosmopolitan
    cd cosmopolitan

    # cross-compile on x86-64-linux for x86-64 linux+windows+macos+freebsd+openbsd+netbsd
    make -j8 o//third_party/ggml/llama.com
    o//third_party/ggml/llama.com --help

    # cross-compile on x86-64-linux for aarch64-linux
    make -j8 m=aarch64 o/aarch64/third_party/ggml/llama.com
    # note: creates .elf file that runs on RasPi, etc.

    # compile loader shim to run on arm64 macos
    cc -o ape ape/ape-m1.c   # use xcode
    ./ape ./llama.com --help # use elf aarch64 binary above
It goes the same speed as upstream for CPU inference. This is useful if you can't or won't recreate your weights files, or want to download old GGML weights off HuggingFace, since llama.com supports every generation of the ggjt file format.
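If you're unsure which generation a weights file is, you can usually tell from its leading magic. A minimal sketch, assuming a little-endian host and a placeholder filename `model.bin` (not a real path from above): GGML-family files start with a uint32 magic such as 0x67676a74 ('ggjt'), stored little-endian, so a byte-level dump shows the letters reversed.

```shell
# Dump the first 4 bytes of a weights file to identify its GGML generation.
# The magic is a little-endian uint32, so 'ggjt' (0x67676a74) appears
# byte-reversed on disk. model.bin is a hypothetical filename.
od -An -tx1 -N4 model.bin   # a ggjt file shows: 74 6a 67 67
```

The same trick distinguishes the older 'ggml' and 'ggmf' generations, which use different magic values.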
replies(1): >>36218893 #