
214 points meetpateltech | 4 comments
polskibus ◴[] No.44370919[source]
What is the model architecture? I'm assuming it's far from an LLM, but I'm curious to know more. Can anyone provide links describing architectures for VLAs?
replies(1): >>44371031 #
KoolKat23 ◴[] No.44371031[source]
Actually very close to one I'd say.

It's a "vision-language-action" (VLA) model "built on the foundations of Gemini 2.0".

Since Gemini 2.0 has native language, audio and video support, I suspect it has been adapted to include native "action" data too, perhaps only on the output side via fine-tuning rather than as input and output at the training stage (given its Gemini 2.0 foundation).
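One common way to add native "action" output to a multimodal model (used by VLAs such as RT-2) is to extend the token vocabulary with discretized action bins, so the model can emit robot commands the same way it emits text. This is a hypothetical sketch of that idea, not Gemini internals; all names and sizes are illustrative:

```python
# Hypothetical sketch: extend an LLM vocabulary with discretized
# action tokens, then map decoded tokens back to continuous commands.
# Vocabulary size and bin count are made-up illustrative values.

TEXT_VOCAB_SIZE = 32000          # ids below this are ordinary text tokens
NUM_ACTION_BINS = 256            # discretization of each action dimension
ACTION_LOW, ACTION_HIGH = -1.0, 1.0

def action_to_token(value: float) -> int:
    """Quantize a continuous action in [-1, 1] to an extended-vocab token id."""
    value = max(ACTION_LOW, min(ACTION_HIGH, value))
    bin_idx = round((value - ACTION_LOW) / (ACTION_HIGH - ACTION_LOW)
                    * (NUM_ACTION_BINS - 1))
    return TEXT_VOCAB_SIZE + bin_idx

def token_to_action(token_id: int) -> float:
    """Invert: extended-vocab token id back to a continuous action value."""
    bin_idx = token_id - TEXT_VOCAB_SIZE
    return ACTION_LOW + bin_idx / (NUM_ACTION_BINS - 1) * (ACTION_HIGH - ACTION_LOW)

# A 7-DoF arm command round-trips through the token space with only
# quantization error (half a bin width at most):
command = [0.25, -0.5, 0.0, 1.0, -1.0, 0.33, 0.8]
tokens = [action_to_token(v) for v in command]
decoded = [token_to_action(t) for t in tokens]
```

Fine-tuning then just means training the model to predict these extra token ids after its vision and language inputs, which is why a VLA can stay architecturally close to the LLM it was built on.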

Natively multimodal LLMs are basically brains.

replies(2): >>44371303 #>>44372072 #
1. quantumHazer ◴[] No.44372072[source]
> Natively multimodal LLM's are basically brains.

Absolutely not.

replies(1): >>44374702 #
2. KoolKat23 ◴[] No.44374702[source]
Lol, keep telling yourself that. It's not a human brain, nor is it necessarily a very intelligent brain, but it is a brain nonetheless.
replies(1): >>44380941 #
3. quantumHazer ◴[] No.44380941[source]
Not a useful comment. Artificial and biological neural networks are only loosely related. The fact that you want to believe it is a brain says a lot about you, but it doesn't make a model a brain.

Only suggestion I have is “study more”.

replies(1): >>44392117 #
4. KoolKat23 ◴[] No.44392117{3}[source]
They're not merely slightly correlated.

If it looks like a duck and quacks like a duck...

Just because it is alien to you does not mean it is not a brain; please go look up the definition of the word.

And my comment is useful: a VLA implies it is processing its input and output natively, which is something a brain does, hence my comment.