82 points meetpateltech | 17 comments
1. nomilk ◴[] No.45311543[source]
Surprising to see negativity here. I send all my LLM queries to 5 LLMs - ChatGPT, Claude, DeepSeek (local), Perplexity, and Grok - and Grok consistently gives good answers and often the most helpful answers. It's ~always king when there's any 'ethical' consideration (i.e. other LLMs refuse to answer - I stopped bothering with Gemini for this reason).

'Ethical' is in quotes because I can see why other LLMs refuse to answer things like "can you generate a curl request to exploit this endpoint" - a prompt used frequently during pen testing. I grew tired of telling ChatGPT "it's for a script in a movie". Other examples abound (yesterday Claude accused me of violating its usage policy when I asked "can polar bears eat frozen meat" - I was curious after seeing a photograph of a polar bear discovering a frozen whale in a melting ice cap). Grok gave a sane answer, of course.

replies(4): >>45311566 #>>45311621 #>>45311627 #>>45311724 #
2. renw0rp ◴[] No.45311566[source]
How do you manage sending and receiving requests to multiple LLMs? Are you doing it manually through multiple UIs, or using some app that integrates with multiple APIs?
replies(2): >>45311574 #>>45311623 #
3. nomilk ◴[] No.45311574[source]
I created a workflow using Alfred on macOS [0]. You press command + space, type 'llm' followed by the prompt, and hit enter; it opens the 5 tabs in the browser.

These are the urls that are opened:

http://localhost:3005/?q={query}

https://www.perplexity.ai/?q={query}

https://x.com/i/grok?text={query}

https://chatgpt.com/?q={query}&model=gpt-5

https://claude.ai/new?q={query}

Extremely convenient.

(little tip: submitting to grok via URL parameter gets around free Grok's rate limit of 2 prompts per 2 hours)

[0] https://github.com/stevecondylios/alfred-workflows/tree/main
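The fan-out the workflow performs can be sketched in a few lines of Python (a hypothetical stand-in for the Alfred workflow, assuming macOS where `open` hands URLs to the default browser; the URL templates are the ones listed above):

```python
import subprocess
import urllib.parse

# URL templates from the list above; {q} is the percent-encoded prompt.
TEMPLATES = [
    "http://localhost:3005/?q={q}",        # local DeepSeek UI
    "https://www.perplexity.ai/?q={q}",
    "https://x.com/i/grok?text={q}",
    "https://chatgpt.com/?q={q}&model=gpt-5",
    "https://claude.ai/new?q={q}",
]

def build_urls(prompt: str) -> list[str]:
    # Percent-encode everything (spaces become %20, not +).
    q = urllib.parse.quote(prompt, safe="")
    return [t.format(q=q) for t in TEMPLATES]

def open_all(prompt: str) -> None:
    # macOS-only: each call opens one browser tab.
    for url in build_urls(prompt):
        subprocess.run(["open", url], check=True)
```

Calling `open_all("can polar bears eat frozen meat")` would open all five tabs at once, which is essentially what the Alfred workflow does.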

replies(1): >>45311773 #
4. devjab ◴[] No.45311621[source]
I've found the results shift quite a lot between models and updates. DeepSeek is pretty consistently good at writing code that is easy to improve from mid to good quality. Claude used to be pretty good, but now writes 10x the code you'd need. Gemini is amazing if you buy one of the more expensive tiers, which in turn isn't really worth it because there are so many other options. GPT and Grok are hit and miss: they deliver great code or they deliver horrible code. GPT and Claude have become such a hurdle that I've had to turn GitHub Copilot off in my VS Code. Basically I use DeepSeek for brainstorming and GPT for writing configs, SQL queries and so on. If either of them fails me I'll branch out, and Grok will be on that list. When I once in a while face a real issue where I'm unsure about the engineering aspects, I'll use one of my sparse free Gemini Pro queries. I'd argue that we should pay for it at my work, but since it's Google that will never happen.

From an ethical perspective - and I'm based in Denmark, mind you - they are all equally horrible in my opinion. I can see why anyone in the Anglo-Saxon world would be opposed to Elon's, but from my perspective he's just another oligarch. The only thing that sets him apart from other tech oligarchs is that he's foolish enough to voice the opinion publicly. If you're based in the US or in any form of government position then I can see why DeepSeek is problematic, but at least China hasn't threatened to take Greenland by force. Also, where I work, China has produced basically all of our hardware, with possible hardware backdoors in around 70% of our IoT devices.

I will give a shoutout to French Mistral, but the truth is that it's just not as good as its competition.

5. Saline9515 ◴[] No.45311623[source]
You can do it directly using Openrouter.
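One way to do that fan-out is through OpenRouter's OpenAI-compatible chat-completions endpoint. A minimal sketch - the model slugs here are illustrative (check openrouter.ai/models for current names), and an `OPENROUTER_API_KEY` environment variable is assumed:

```python
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

# Illustrative model slugs; real names change over time.
MODELS = [
    "x-ai/grok-4",
    "deepseek/deepseek-chat",
    "anthropic/claude-sonnet-4",
    "openai/gpt-5",
]

def build_payload(model: str, prompt: str) -> dict:
    # One OpenAI-style chat request per model.
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def ask_all(prompt: str) -> dict[str, str]:
    # Sends the same prompt to every model and collects the replies.
    headers = {
        "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
        "Content-Type": "application/json",
    }
    answers = {}
    for model in MODELS:
        req = urllib.request.Request(
            OPENROUTER_URL,
            data=json.dumps(build_payload(model, prompt)).encode(),
            headers=headers,
        )
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        answers[model] = body["choices"][0]["message"]["content"]
    return answers
```

The upside over the multi-tab approach is that you get all answers back in one place, billed through a single account.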
6. franze ◴[] No.45311627[source]
Really, you are "surprised" to see the negativity here?
replies(1): >>45311737 #
7. raincole ◴[] No.45311724[source]
I believe that, despite all the hate it gets today, we'll one day be grateful that at least one big AI provider chose a route with less lobotomy.
replies(1): >>45311961 #
8. andriesm ◴[] No.45311737[source]
Yes, many of us are surprised at the negativity toward Grok.

Grok is a top contender for me.

I also use 5 LLMs in parallel every day, but my default stack is Grok, DeepSeek, Gemini 2.5 Pro, ChatGPT, Claude - same as OP, except I usually swap Perplexity out for Gemini. (DeepSeek with search has become my Perplexity replacement.)

Most of my questions don't hit topics prone to trigger safety blocks; in those cases I find Gemini surprisingly strong, but for difficult things Grok often wins.

Gemini, Grok and Claude all benefit a lot whenever they supplement their knowledge with on-demand searches rather than just quick reasoning. Ask a deep-insight question of Gemini Pro without making it research and you will discover hallucinations, logical conclusions that contradict known facts, etc. Same with Grok. When Claude Code CLI is going in circles, reminding it to google for more information breaks it out.

Grok one-shotted a replacement algorithm of several hundred lines of code for part of an operational-transform library that had carried a bug through the last 5 revisions. It passed all my tests. The base Grok 4 model wasn't even optimised for code at that time. Color me impressed!

replies(1): >>45311787 #
9. marxisttemp ◴[] No.45311773{3}[source]
You don’t need third-party search managers like Alfred for this. You can just make a Shortcut called “llm” that accepts Spotlight input.
replies(1): >>45311803 #
10. raincole ◴[] No.45311787{3}[source]
It's just anti-Musk. And anti-big-US-tech to a lesser degree.

If it were from the EU or China, 8 out of 10 HN front-page posts would be about how amazing Grok 4 Fast is.

replies(1): >>45312241 #
11. nomilk ◴[] No.45311803{4}[source]
Interesting - I asked the LLMs whether that's possible, and they say there's an additional step of opening the shortcut first, then typing the prompt, whereas Alfred lets you put the prompt inline (i.e. you don't have to wait for the shortcut to open or anything to load). (Glad for any correction to my understanding.)
replies(1): >>45312639 #
12. joshstrange ◴[] No.45311961[source]
> less lobotomy

Aka, trained to parrot whatever Musk believes.

And no, I don’t think we will be grateful.

replies(1): >>45312003 #
13. raincole ◴[] No.45312003{3}[source]
Except that hasn't happened. Musk kept saying he's going to 'fix' the 'liberal bias', but Grok's opinions remain mostly balanced. He said that for meme value.

Try it yourself:

"Have Democrats or Republicans committed more political violence?"

Ask this of Grok 4 Fast, Gemini 2.5 Pro, Claude Sonnet 4, and GPT-5 Chat, with internet search and reasoning disabled. I think their answers are quite similar, with Grok 4's being slightly better.

14. FirmwareBurner ◴[] No.45312241{4}[source]
People can't separate the art from the artist.
replies(1): >>45312751 #
15. marxisttemp ◴[] No.45312639{5}[source]
No - with macOS Tahoe you get an inline input, assuming "Accept input from Spotlight" is enabled for the Shortcut.
16. franze ◴[] No.45312751{5}[source]
If you don't want to support the artist, don't buy the art.

And every kind of use of a technology service is already a buy-in.

replies(1): >>45313642 #
17. FirmwareBurner ◴[] No.45313642{6}[source]
You can admire art without buying it you know.