
602 points by emrah | 1 comment
emrah
Available on ollama: https://ollama.com/library/gemma3
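
For anyone who hasn't tried it, here's a minimal sketch of querying a locally running Ollama server over its REST API. It assumes the default port (11434) and that you've already pulled the model with `ollama pull gemma3`; the prompt is just an example:

    # Minimal sketch: one-shot generation against a local Ollama server.
    # Assumes Ollama is running on the default port and gemma3 is pulled.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "gemma3",
            "prompt": "Explain what an inference engine does, briefly.",
            "stream": False,  # one JSON object instead of a token stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])
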
Der_Einzige
How many times do I have to say this? Ollama, llamacpp, and many other projects are slower than vLLM/sglang. vLLM is a much superior inference engine and is fully supported by the only LLM frontend that matters (sillytavern).

The community getting obsessed with Ollama has done huge damage to the field, as it's inefficient compared to vLLM. Many people could get far more tok/s than they realize if only they knew the right tools.
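
For anyone curious what that looks like in practice, here's a minimal offline-inference sketch using vLLM's Python API. The model id and sampling parameters are illustrative, not from the comment above; the key point is that vLLM batches the prompts internally (continuous batching), which is where much of the throughput advantage comes from:

    # Minimal sketch of vLLM's offline batch inference API.
    # Model id is illustrative (assumes a Gemma 3 checkpoint on Hugging Face).
    from vllm import LLM, SamplingParams

    llm = LLM(model="google/gemma-3-4b-it")
    params = SamplingParams(temperature=0.7, max_tokens=128)

    # vLLM schedules and batches these prompts itself; you don't loop
    # one request at a time the way you would against a simple server.
    prompts = [
        "Summarize the trade-offs between llama.cpp and vLLM.",
        "What does continuous batching mean in LLM serving?",
    ]
    outputs = llm.generate(prompts, params)
    for out in outputs:
        print(out.outputs[0].text)

(For online serving, vLLM also exposes an OpenAI-compatible HTTP server, which is typically what frontends connect to.)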

prometheon1
From the HN guidelines: https://news.ycombinator.com/newsguidelines.html

> Be kind. Don't be snarky.

> Please don't post shallow dismissals, especially of other people's work.

In my opinion, your comment is not in line with the guidelines. Especially the part about sillytavern being the only LLM frontend that matters. Telling the devs of any LLM frontend except sillytavern that their app doesn't matter seems exactly like a shallow dismissal of other people's work to me.