
566 points | PaulHoule | 1 comment
JimDabell No.44490396
Pricing:

US$0.000001 per output token ($1/M tokens)

US$0.00000025 per input token ($0.25/M tokens)

https://platform.inceptionlabs.ai/docs#models
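
For reference, those rates work out like this per request (a minimal sketch; the function name and the example token counts are mine, not from the docs):

```python
# Quoted Mercury rates: $0.25 per million input tokens,
# $1.00 per million output tokens.
INPUT_USD_PER_M = 0.25
OUTPUT_USD_PER_M = 1.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """USD cost of a single request at the quoted rates."""
    return (input_tokens / 1e6) * INPUT_USD_PER_M \
         + (output_tokens / 1e6) * OUTPUT_USD_PER_M

# Hypothetical request: 2,000 input tokens, 500 output tokens.
print(f"${request_cost(2_000, 500):.6f}")  # → $0.001000
```

So a typical short completion costs a fraction of a cent; the rates only start to matter at high volume.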

asaddhamani No.44490656
The pricing is a little on the higher side. Working on a performance-sensitive application, I tried Mercury and Groq (Llama 3.1 8B, Llama 4 Scout); the performance was neck and neck, but the pricing was way better with Groq.

But I'll be following diffusion models closely, and I hope we get some good open source ones soon. Excited about their potential.

tripplyons No.44492609
Good to know. I didn't realize how good the pricing is on Groq!
sexeriy237 No.44497444
You're getting those savings by shifting the datacenter's pollution onto a largely Black community and choking it out.
JimDabell No.44498135
Are you confusing the AI company Groq with xAI, Elon Musk’s AI company that has a model called Grok?