FergusArgyll:
I'm missing something. Shouldn't any LLM that's 'natively multimodal' somehow include embeddings which are multimodal? For example, here's Google's blog post on Gemini:

  Until now, the standard approach to creating multimodal models involved 
  training separate components for different modalities and then stitching them 
  together to roughly mimic some of this functionality. These models can 
  sometimes be good at performing certain tasks, like describing images, but  
  struggle with more conceptual and complex reasoning.

  We designed Gemini to be natively multimodal, pre-trained from the start on 
  different modalities. Then we fine-tuned it with additional multimodal data to 
  further refine its effectiveness. This helps Gemini seamlessly understand and 
  reason about all kinds of inputs from the ground up, far better than existing 
  multimodal models — and its capabilities are state of the art in nearly every 
  domain.
refulgentis:
FWIW, if the other replies aren't clear: mentally replace "embeddings" with "the List<double> that some layer of my AI model produces" (that's not exactly right, it's slightly more specific than that, but in this context it's close enough).
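
To make that concrete, here's a minimal sketch of what "a vector some layer produces" looks like in practice. It assumes the Hugging Face transformers library, with GPT-2 as an arbitrary stand-in model; neither is implied by the thread, they're just illustrative choices.

  # A minimal sketch, assuming the Hugging Face "transformers" library and
  # GPT-2 as an arbitrary stand-in model. Every layer emits one vector per
  # token; an "embedding" in this loose sense is just one of those vectors.
  import torch
  from transformers import AutoModel, AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModel.from_pretrained("gpt2")

  inputs = tokenizer("a photo of a cat", return_tensors="pt")
  with torch.no_grad():
      outputs = model(**inputs, output_hidden_states=True)

  # hidden_states is a tuple with one tensor per layer, shape (batch, tokens, dim).
  # Mean-pooling the last layer gives a single fixed-size vector.
  vector = outputs.hidden_states[-1].mean(dim=1).squeeze(0)
  print(vector.shape)  # torch.Size([768]) for GPT-2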

LLMs, including multimodal LLMs, do have embeddings, but those embeddings are learned as a byproduct of training to generate text, not by training to find similar documents.
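
For contrast, a sketch of the other kind of embedding, the kind trained explicitly for similarity. It assumes the sentence-transformers library and the all-MiniLM-L6-v2 checkpoint, again just illustrative choices:

  # A minimal sketch, assuming the "sentence-transformers" library and the
  # all-MiniLM-L6-v2 checkpoint (both illustrative choices). These vectors
  # are trained so that similar texts land close together.
  from sentence_transformers import SentenceTransformer, util

  model = SentenceTransformer("all-MiniLM-L6-v2")
  docs = ["a photo of a cat", "a picture of a kitten", "quarterly earnings report"]
  vectors = model.encode(docs)  # numpy array, shape (3, 384)

  # Cosine similarity is meaningful here because the training objective
  # was similarity/retrieval, not next-token prediction.
  print(util.cos_sim(vectors[0], vectors[1]))  # relatively high
  print(util.cos_sim(vectors[0], vectors[2]))  # relatively low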