
261 points | fzliu | 1 comment
FergusArgyll (No.42163723)
I'm missing something. Shouldn't any LLM that's 'natively multimodal' already include embeddings that are multimodal? For example, here's Google's blog post on Gemini:

  Until now, the standard approach to creating multimodal models involved 
  training separate components for different modalities and then stitching them 
  together to roughly mimic some of this functionality. These models can 
  sometimes be good at performing certain tasks, like describing images, but  
  struggle with more conceptual and complex reasoning.

  We designed Gemini to be natively multimodal, pre-trained from the start on 
  different modalities. Then we fine-tuned it with additional multimodal data to 
  further refine its effectiveness. This helps Gemini seamlessly understand and 
  reason about all kinds of inputs from the ground up, far better than existing 
  multimodal models — and its capabilities are state of the art in nearly every 
  domain.
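
To make the question concrete, here is a rough sketch of what reusing an LLM's own hidden states as an embedding might look like (text-only for brevity; the model name is a placeholder and mean pooling is just one of several pooling choices — a natively multimodal model would pool image-token states the same way):

  # Sketch: pull a single vector out of a decoder-only LM by mean-pooling
  # its last hidden states. The model name is a placeholder, not a recommendation.
  import torch
  from transformers import AutoModel, AutoTokenizer

  model_name = "some-causal-lm"  # placeholder
  tokenizer = AutoTokenizer.from_pretrained(model_name)
  model = AutoModel.from_pretrained(model_name)

  def llm_embedding(text: str) -> torch.Tensor:
      inputs = tokenizer(text, return_tensors="pt")
      with torch.no_grad():
          hidden = model(**inputs).last_hidden_state          # (1, seq_len, dim)
      mask = inputs["attention_mask"].unsqueeze(-1).float()   # (1, seq_len, 1)
      # Average over non-padding tokens to get one vector per input.
      return (hidden * mask).sum(dim=1) / mask.sum(dim=1)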
aabhay (No.42163807)
LLM embeddings contain superpositions of many concepts, so while they may be great at predicting the next token, they don't actually outperform contrastively pretrained embedding models.
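
For contrast, a minimal sketch of the CLIP-style contrastive (InfoNCE) objective that purpose-built embedding models are pretrained with; the encoders are stubbed out with random tensors and all dimensions are arbitrary:

  # Symmetric InfoNCE loss over a batch of paired image/text embeddings.
  import torch
  import torch.nn.functional as F

  def contrastive_loss(image_emb, text_emb, temperature=0.07):
      # Normalize so the dot product is cosine similarity.
      image_emb = F.normalize(image_emb, dim=-1)
      text_emb = F.normalize(text_emb, dim=-1)
      # (batch, batch) similarity matrix; the diagonal holds the true pairs.
      logits = image_emb @ text_emb.T / temperature
      targets = torch.arange(logits.size(0), device=logits.device)
      # Match each image to its caption and each caption to its image.
      return (F.cross_entropy(logits, targets) +
              F.cross_entropy(logits.T, targets)) / 2

  # Stand-in encoder outputs; in practice these come from image/text towers.
  loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))

This objective directly pulls matched pairs together and pushes mismatched ones apart, which is the pressure that shapes an embedding space for similarity search; next-token prediction never applies it, so pooled LLM hidden states tend to encode many superposed concepts at once.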