
16 points | yangikan | 1 comment

I am looking to do simple things like image classification/text classification using APIs without running the LLMs in my local machine. What are some APIs that provide a uniform interface to access different LLMs?
logankeenan ◴[] No.41885701
I used runpod.io prior to buying a pair of 3090s. They make it easy to run vLLM too, so you can experiment with different models.

I also rented a GPU VM from them and ran Hugging Face models on it. That did require a lot more coding and learning.

https://docs.runpod.io/serverless/workers/vllm/get-started

replies(1): >>41888045 #
1. Tostino ◴[] No.41888045
As far as which API to use, please just coalesce around the OpenAI API for your client software. You can start up an OpenAI-compatible endpoint with vLLM, for example. Just stick with that. You can use LiteLLM as a proxy to convert your client-side requests to whatever server-side format is expected by, e.g., Claude.