584 points by Alifatisk | 13 comments
    Show context
    okdood64 ◴[] No.46181759[source]
    From the blog:

    https://arxiv.org/abs/2501.00663

    https://arxiv.org/pdf/2504.13173

    Is there any other company that's openly publishing their research on AI at this level? Google should get a lot of credit for this.

    replies(12): >>46181829 #>>46182057 #>>46182168 #>>46182358 #>>46182633 #>>46183087 #>>46183462 #>>46183546 #>>46183827 #>>46184875 #>>46186114 #>>46189989 #
    mapmeld ◴[] No.46182168[source]
Well, it's cool that they released a paper, but at this point it's been 11 months and you can't download Titans-architecture model code or weights anywhere. That puts a lot of companies ahead of them (Meta's Llama, Qwen, DeepSeek). The closest you can get is an unofficial implementation of the paper: https://github.com/lucidrains/titans-pytorch
    replies(7): >>46182351 #>>46182946 #>>46184154 #>>46185017 #>>46186942 #>>46187280 #>>46188385 #
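For context on what the Titans paper actually proposes, here is a minimal, illustrative sketch of its central idea: a memory module whose parameters are updated at inference time by gradient descent on an associative recall loss, with a momentum buffer (the paper's "surprise" signal) and a forgetting term. This is a toy reading of the paper, not the official code or the lucidrains implementation; the class name, single-linear-layer memory, and hyperparameters are placeholders, and the real architecture uses a deeper memory combined with attention.

```python
# A toy reading of the Titans "neural memory" idea (arXiv:2501.00663), not the
# official code: memory parameters are updated at test time by gradient
# descent on an associative recall loss ||M(k) - v||^2, with a momentum buffer
# (the paper's "surprise") and a decay/forgetting term. A single linear map is
# used here only to keep the gradient explicit.
import torch

class NeuralMemorySketch:  # hypothetical name, for illustration only
    def __init__(self, dim, lr=0.01, momentum=0.9, decay=0.01):
        self.W = torch.zeros(dim, dim)   # memory parameters M
        self.S = torch.zeros(dim, dim)   # momentum buffer ("surprise")
        self.lr, self.momentum, self.decay = lr, momentum, decay

    def write(self, k, v):
        # Gradient of ||W k - v||^2 with respect to W is 2 (W k - v) k^T.
        grad = 2.0 * torch.outer(self.W @ k - v, k)
        # Momentum-smoothed surprise, then a decayed (forgetting) update.
        self.S = self.momentum * self.S - self.lr * grad
        self.W = (1.0 - self.decay) * self.W + self.S

    def read(self, q):
        return self.W @ q                # retrieve by querying the memory

dim = 8
mem = NeuralMemorySketch(dim)
k, v = torch.randn(dim), torch.randn(dim)
for _ in range(50):                      # repeated writes store the (k, v) pair
    mem.write(k, v)
# The retrieved vector now points in the direction of v (cosine ~ 1.0).
print(torch.nn.functional.cosine_similarity(mem.read(k), v, dim=0))
```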
    1. alyxya ◴[] No.46182946[source]
The hardest part about making a new architecture is that even if it is better than transformers in every way, it's very difficult to both prove a significant improvement at scale and gain traction. Until Google puts a lot of resources into training a scaled-up version of this architecture, I believe there's enough low-hanging fruit in improving existing architectures that it'll always take a back seat.
    replies(5): >>46183227 #>>46184404 #>>46184696 #>>46186138 #>>46186853 #
    2. UltraSane ◴[] No.46183227[source]
Yes. The path dependence for current attention-based LLMs is enormous.
    replies(1): >>46184174 #
    3. patapong ◴[] No.46184174[source]
At the same time, there is now a ton of data for training models to act as useful assistants, and benchmarks to compare different assistant models. The wide availability and ease of obtaining new RLHF training data will make it more feasible to build models on new architectures, I think.
    4. p1esk ◴[] No.46184404[source]
> Until Google puts a lot of resources into training a scaled-up version of this architecture

    If Google is not willing to scale it up, then why would anyone else?

    replies(1): >>46187379 #
    5. tyre ◴[] No.46184696[source]
Google is large enough and well-funded enough, and the opportunity is great enough, to run experiments.

You don't necessarily have to prove it out on large foundation models first. Can it beat a 32B-parameter model, for example?

    replies(1): >>46185008 #
    6. swatcoder ◴[] No.46185008[source]
Do you think there might be an approval process to navigate when an experiment's cost could run to seven or eight digits and months of reserved resources?

While they do have lots of money and many people, they don't have infinite money, and they only have so much hot infrastructure to spread around. You'd expect they have to gradually build up the case that a large-scale experiment is likely to yield a big enough advantage over whatever is already claiming those resources.

    replies(2): >>46189610 #>>46191181 #
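To put rough numbers behind "seven or eight digits", here is a back-of-envelope sketch using the common C ≈ 6·N·D approximation for training FLOPs. The accelerator throughput, utilization, and hourly price below are assumptions for illustration only, not Google's internal figures; under them, a 32B-parameter research run is a six-figure job, while a frontier-scale run lands in the eight-figure range.

```python
# Back-of-envelope training cost using the common C ~= 6 * N * D estimate of
# training FLOPs. Throughput, utilization, and price per accelerator-hour are
# assumptions chosen for illustration, not actual Google (or TPU) figures.
def train_cost(params, tokens, peak_flops=1e15, utilization=0.4, usd_per_hour=2.5):
    flops = 6 * params * tokens                        # total training compute
    hours = flops / (peak_flops * utilization) / 3600  # accelerator-hours needed
    return hours, hours * usd_per_hour

for label, n_params, n_tokens in [
    ("32B params, 1T tokens", 32e9, 1e12),       # a plausible research-scale run
    ("500B params, 10T tokens", 500e9, 10e12),   # a frontier-scale run
]:
    hours, usd = train_cost(n_params, n_tokens)
    print(f"{label}: ~{hours:,.0f} accelerator-hours, ~${usd:,.0f}")
```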
    7. nickpsecurity ◴[] No.46186138[source]
But it's companies like Google that built tools like JAX and TPUs, telling us we can throw together models with cheap, easy scaling. The math in their paper was probably harder to put together than an alpha-level prototype, which they need anyway.

So I think they could at least default to doing it as a small demonstrator.

    8. m101 ◴[] No.46186853[source]
Prove that it beats models with different architectures trained under identical, limited resources?
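As a toy-scale sketch of that kind of controlled comparison: every candidate gets the same data, the same initialization seed, the same optimizer settings, and the same step budget, so any difference in final loss is attributable to the architecture. The tiny MLPs, synthetic data, and budgets here are placeholders, not Titans or a real transformer.

```python
# Toy-scale controlled comparison: identical data, init seed, optimizer
# settings, and step budget for every candidate, so the final-loss gap
# reflects the architecture choice. The tiny MLPs below are placeholders.
import torch
import torch.nn as nn

def build(name, seed=0):
    torch.manual_seed(seed)              # same initialization seed for fairness
    if name == "baseline":
        return nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
    return nn.Sequential(nn.Linear(16, 64), nn.Tanh(), nn.Linear(64, 1))

def train(model, x, y, steps=200, lr=1e-2):  # identical training budget
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss = None
    for _ in range(steps):
        loss = nn.functional.mse_loss(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

g = torch.Generator().manual_seed(0)     # same synthetic dataset for both runs
x = torch.randn(2048, 16, generator=g)
y = torch.sin(x.sum(dim=1, keepdim=True))
for name in ("baseline", "candidate"):
    print(name, train(build(name), x, y))
```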
    9. 8note ◴[] No.46187379[source]
ChatGPT is an example of why.
    replies(1): >>46193265 #
    10. dpe82 ◴[] No.46189610{3}[source]
    I would imagine they do not want their researchers unnecessarily wasting time fighting for resources - within reason. And at Google, "within reason" can be pretty big.
    replies(1): >>46190731 #
    11. howdareme ◴[] No.46190731{4}[source]
I mean, looking at Antigravity, Jules & Gemini CLI, they seem to have no problem with their developers fighting for resources.
    12. nl ◴[] No.46191181{3}[source]
    I mean you'd think so, but...

    > In fact, the UL2 20B model (at Google) was trained by leaving the job running accidentally for a month.

    https://www.yitay.net/blog/training-great-llms-entirely-from...

    13. falcor84 ◴[] No.46193265{3}[source]
    You think that this might be another ChatGPT/Docker/Hadoop case, where Google comes up with the technology but doesn't care to productize it?