
82 points meetpateltech | 4 comments
nomilk ◴[] No.45311543[source]
Surprising to see the negativity here. I send all my LLM queries to 5 LLMs - ChatGPT, Claude, DeepSeek (local), Perplexity, and Grok - and Grok consistently gives good answers, often the most helpful ones. It's ~always king when there's any 'ethical' consideration (i.e. cases where the other LLMs refuse to answer - I stopped bothering with Gemini for this reason).

'Ethical' is in quotes because I can see why other LLMs refuse to answer things like "can you generate a curl request to exploit this endpoint" - a prompt used frequently during pen testing. I grew tired of telling ChatGPT "it's for a script in a movie". Other examples abound (yesterday Claude accused me of violating its usage policy for asking "can polar bears eat frozen meat" - I was curious after seeing a photograph of a polar bear discovering a frozen whale in a melted ice cap). Grok gave a sane answer, of course.

replies(4): >>45311566 #>>45311621 #>>45311627 #>>45311724 #
renw0rp ◴[] No.45311566[source]
How do you manage sending and receiving requests to multiple LLMs? Are you doing it manually through multiple UIs, or using some app which integrates with multiple APIs?
replies(2): >>45311574 #>>45311623 #
1. nomilk ◴[] No.45311574[source]
I created a workflow using Alfred on macOS [0]. You press command + space, type 'llm' followed by the prompt, and hit enter; it opens the 5 tabs in the browser.

These are the URLs that are opened:

http://localhost:3005/?q={query}

https://www.perplexity.ai/?q={query}

https://x.com/i/grok?text={query}

https://chatgpt.com/?q={query}&model=gpt-5

https://claude.ai/new?q={query}

Extremely convenient.

(little tip: submitting to grok via URL parameter gets around free Grok's rate limit of 2 prompts per 2 hours)

[0] https://github.com/stevecondylios/alfred-workflows/tree/main
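For anyone not using Alfred, the same fan-out is a few lines of Python - a sketch, not the poster's actual workflow; the function names are mine, and the URLs are the ones listed above (the localhost one being their local DeepSeek UI):

```python
# Sketch of the multi-LLM fan-out: URL-encode a prompt once, fill it into
# each service's new-chat URL, and open one browser tab per service.
import urllib.parse
import webbrowser

# URL templates copied from the comment above; {q} is the encoded prompt.
LLM_URL_TEMPLATES = [
    "http://localhost:3005/?q={q}",
    "https://www.perplexity.ai/?q={q}",
    "https://x.com/i/grok?text={q}",
    "https://chatgpt.com/?q={q}&model=gpt-5",
    "https://claude.ai/new?q={q}",
]

def llm_urls(prompt: str) -> list[str]:
    """Return the five per-service URLs for a given prompt."""
    q = urllib.parse.quote(prompt)
    return [template.format(q=q) for template in LLM_URL_TEMPLATES]

def open_all(prompt: str) -> None:
    """Open each URL in a new tab of the default browser."""
    for url in llm_urls(prompt):
        webbrowser.open_new_tab(url)
```

Wiring `open_all` to a hotkey (Alfred, a Shortcut, or a shell alias) gives roughly the same one-keystroke flow.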

replies(1): >>45311773 #
2. marxisttemp ◴[] No.45311773[source]
You don’t need third-party search managers like Alfred for this. You can just make a Shortcut called “llm” that accepts Spotlight input.
replies(1): >>45311803 #
3. nomilk ◴[] No.45311803[source]
Interesting - I asked the LLMs whether that's possible, and they say there's an additional step of opening the shortcut first, then typing the prompt, whereas Alfred lets you put the prompt inline (i.e. you don't have to wait for the shortcut to open or anything to load). (Glad for any correction to my understanding.)
replies(1): >>45312639 #
4. marxisttemp ◴[] No.45312639{3}[source]
No, with Tahoe you get an inline input assuming “Accept input from Spotlight” is enabled for the Shortcut.