
347 points | LorenDB
oezi No.44003925
I wish "multimodal" implied text, image, and audio (and potentially video). If a model supports only image analysis or image generation, "vision model" seems the more appropriate term.

We should aim to distinguish multimodal models such as Qwen2.5-Omni from Qwen2.5-VL.

In this sense: Ollama's new engine adds vision support.

replies(2): >>44006219 >>44007313
prettyblocks No.44007313
I'm very interested in working with video inputs, is it possible to do that with Qwen2.5-Omni and Ollama?
replies(3): >>44008675 >>44009579 >>44011015
machinelearning No.44009579
What use case are you interested in re: video?
replies(1): >>44011938
prettyblocks No.44011938
I'm curious how effective these models would be at recognizing whether an input video was AI-generated or heavily manipulated. Also various tasks around face/object segmentation.
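As an aside on the video question upthread: Ollama's chat API accepts images but has no native video input, so the usual workaround is to decode the clip yourself, sample a handful of frames, and send those as images. A minimal sketch below, assuming the default local endpoint; the model tag `qwen2.5vl` and the placeholder frame bytes are illustrative only (real code would decode frames with something like OpenCV and pass the encoded image bytes):

```python
import base64

OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

def sample_frame_indices(total_frames: int, num_samples: int) -> list[int]:
    """Pick evenly spaced frame indices so a clip fits a per-request image budget."""
    if num_samples <= 1:
        return [0]
    if num_samples >= total_frames:
        return list(range(total_frames))
    step = (total_frames - 1) / (num_samples - 1)
    return [round(i * step) for i in range(num_samples)]

def build_chat_payload(model: str, prompt: str, frames: list[bytes]) -> dict:
    """Assemble an /api/chat request body; Ollama takes images as base64 strings."""
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": prompt,
            # one base64 string per sampled frame
            "images": [base64.b64encode(f).decode("ascii") for f in frames],
        }],
        "stream": False,
    }

# Pretend we decoded a 300-frame clip and kept 8 evenly spaced frames:
indices = sample_frame_indices(300, 8)
payload = build_chat_payload(
    "qwen2.5vl",  # hypothetical model tag; use whichever vision model you have pulled
    "Describe what changes across these frames.",
    [b"\x89PNG..."] * len(indices),  # stand-in bytes; real code passes encoded frames
)
# requests.post(OLLAMA_CHAT_URL, json=payload)  # send once an Ollama server is running
```

Note the model still sees a bag of stills, not a real video stream, so temporal reasoning (motion artifacts, frame-to-frame inconsistency typical of generated video) is limited by how densely you sample.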