
467 points by mraniki | 2 comments | source
breadwinner ◴[] No.43537321[source]
The loser in the AI model competition appears to be... Microsoft.

When ChatGPT was the only game in town, Microsoft was seen as a leader, thanks to their wise investment in OpenAI. They relied on OpenAI's model and didn't develop their own. As a result Microsoft has no interesting AI products: Copilot is a flop, and Bing failed to take advantage of AI, so Perplexity ate their lunch.

Satya Nadella last year: “Google should have been the default winner in the world of big tech’s AI race”.

Sundar Pichai's response: “I would love to do a side-by-side comparison of Microsoft’s own models and our models any day, any time. They are using someone else's model.”

See: https://www.msn.com/en-in/money/news/sundar-pichai-vs-satya-...

replies(6): >>43537406 #>>43537626 #>>43537669 #>>43537725 #>>43537856 #>>43538794 #
maxloh ◴[] No.43537406[source]
Note that Microsoft does have its own LLM team, and its own model, Phi-4.

https://huggingface.co/microsoft/phi-4

replies(1): >>43537544 #
VladVladikoff ◴[] No.43537544[source]
Recently I was looking for a small LLM that could answer questions reasonably well with low latency, for near-realtime conversations running on a single RTX 3090. So far I've settled on Microsoft's Phi-4, but I'm not sure yet whether it's the best choice, and I'm open to more suggestions!
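As a rough sanity check on the single-3090 constraint (my own back-of-the-envelope arithmetic, not from the thread): Phi-4 is a ~14B-parameter model, so whether its weights fit in the 3090's 24 GB of VRAM comes down almost entirely to quantization. A minimal sketch:

```python
# Back-of-the-envelope VRAM estimate for model weights alone.
# Ignores KV cache and activation overhead, which add more on top.

def weight_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight memory in GB at a given quantization level."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

PHI4_PARAMS_B = 14.0   # Phi-4 is ~14B parameters
RTX_3090_GB = 24.0     # VRAM on a single RTX 3090

for bits in (16, 8, 4):
    gb = weight_gb(PHI4_PARAMS_B, bits)
    verdict = "fits" if gb < RTX_3090_GB else "does not fit"
    print(f"{bits:>2}-bit: ~{gb:.0f} GB -> {verdict} in 24 GB")
```

So fp16 weights alone (~28 GB) don't fit, while 8-bit (~14 GB) and 4-bit (~7 GB) leave headroom for the KV cache, which is consistent with people running quantized Phi-4 on a single 3090.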
replies(1): >>43537652 #
mywittyname ◴[] No.43537652{3}[source]
I've been using "claude" running via Ollama (incept5/llama3.1-claude) and I've been happy with the results. My only annoyance is that it won't search the internet for information, because that capability is disabled via a flag.
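For anyone wanting to try this setup, here's a sketch of calling a locally running Ollama server through its standard `/api/generate` HTTP endpoint (the model tag is the one from this comment; I haven't verified the tag itself, and this assumes Ollama's default port 11434):

```python
import json
import urllib.request

# Ollama serves a local HTTP API on port 11434 by default.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for a local Ollama server."""
    payload = {
        "model": model,      # e.g. the tag mentioned in the comment above
        "prompt": prompt,
        "stream": False,     # one JSON response instead of a token stream
    }
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_request("incept5/llama3.1-claude", "Why is the sky blue?")
# urllib.request.urlopen(req) would send it, given a running Ollama server;
# the response JSON carries the generated text under the "response" key.
```

The flag-disabled web search mentioned above is a property of the model/runtime configuration, not of this API call.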
replies(1): >>43537730 #
danielbln ◴[] No.43537730[source]
That's... that's not the Claude people talk about when they say Claude. Just to be sure.