
566 points by PaulHoule | 4 comments
armcat No.44494518
I've been looking at the code on their chat playground, https://chat.inceptionlabs.ai/, and they have a helper function `const convertOpenAIMessages = (convo) => { ... }`, which also contains `models: ['gpt-3.5-turbo']`. I also see `"openai": true` in the API response. Is it actually using OpenAI, or is it actually calling their dLLM? Does anyone know?
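
For reference, a helper with that name is usually just reshaping the UI's internal conversation object into the OpenAI chat-completions message format, and a hard-coded model list fits that pattern too. A minimal sketch of what it might look like (the field names `convo.messages`, `msg.author`, and `msg.text` are assumptions for illustration, not their actual code):

```ts
// Hypothetical reconstruction of a convertOpenAIMessages-style helper.
// The input shape (convo.messages, msg.author, msg.text) is an assumption,
// not taken from the playground's actual source.
type OpenAIMessage = { role: 'system' | 'user' | 'assistant'; content: string };

interface UIMessage {
  author: 'system' | 'user' | 'assistant';
  text: string;
}

interface Conversation {
  messages: UIMessage[];
}

const convertOpenAIMessages = (convo: Conversation): OpenAIMessage[] =>
  // Map the UI's internal message objects onto the OpenAI chat format.
  convo.messages.map((msg) => ({ role: msg.author, content: msg.text }));

// A hard-coded `models: ['gpt-3.5-turbo']` next to a helper like this is
// typically just a default/fallback entry in an OpenAI-shaped request body.
```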

Also: you can turn on "Diffusion Effect" in the top-right corner, but that just seems to be an animation gimmick, right?

1. Alifatisk No.44494556
The speed of the response is waaay too quick for it to be using OpenAI as the backend; it's almost instant!
2. armcat No.44494689
I've been asking bespoke questions and the timing is >2 seconds, slower than what I get for the same questions to ChatGPT (using gpt-4.1-mini). Looking at their call stack, I see `verifyOpenAIConnection()`, `generateOpenAIChatCompletion()`, `getOpenAIModels()`, etc. Maybe that's just so it's compatible with the OpenAI API?
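
Those names match the usual pattern in OpenAI-compatible chat UIs: probe the standard `/models` route of whatever base URL is configured. A rough sketch of what such helpers typically do (the env-var handling and defaults here are illustrative assumptions, not their code):

```ts
// Sketch of what verifyOpenAIConnection()/getOpenAIModels()-style helpers
// usually do: hit the standard OpenAI-compatible GET /models route on
// whatever base URL is configured. Env-var names are assumptions.
const BASE_URL = process.env.OPENAI_API_BASE ?? 'https://api.openai.com/v1';
const API_KEY = process.env.OPENAI_API_KEY ?? '';

async function getOpenAIModels(): Promise<string[]> {
  const res = await fetch(`${BASE_URL}/models`, {
    headers: { Authorization: `Bearer ${API_KEY}` },
  });
  if (!res.ok) throw new Error(`GET /models failed: ${res.status}`);
  // OpenAI-compatible servers respond with { data: [{ id: '<model>' }, ...] }.
  const body = await res.json();
  return body.data.map((m: { id: string }) => m.id);
}

async function verifyOpenAIConnection(): Promise<boolean> {
  try {
    await getOpenAIModels();
    return true;
  } catch {
    return false;
  }
}
```
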
3. martinald No.44495069
Check the bottom; I think it's just some off-the-shelf chat UI that uses an OpenAI-compatible API behind the scenes.
4. armcat No.44495377
Ah, got it. It looks like it supports a whole bunch of backends, so it can also interface with Ollama and other APIs.
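
That's the usual payoff of the OpenAI-compatible convention: Ollama and many other servers expose the same `/v1` routes, so one chat-completion call works against any of them just by swapping the base URL and key. A minimal sketch (model names are illustrative, and Inception's own endpoint is an assumption; Ollama's local `/v1` route is real and takes a placeholder key):

```ts
// One OpenAI-style chat call, reusable against any OpenAI-compatible
// backend by swapping the base URL: api.openai.com, a local Ollama
// server, or (presumably) Inception's own endpoint.
async function chat(
  baseUrl: string,
  apiKey: string,
  model: string,
  prompt: string,
): Promise<string> {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ model, messages: [{ role: 'user', content: prompt }] }),
  });
  const body = await res.json();
  return body.choices[0].message.content;
}

// Same client, different backends (model names are illustrative):
// await chat('https://api.openai.com/v1', process.env.OPENAI_API_KEY!, 'gpt-4.1-mini', 'hi');
// await chat('http://localhost:11434/v1', 'ollama', 'llama3', 'hi'); // Ollama ignores the key value
```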