RunPod
Created by CoverGhoul on 2/2/2025 in #⚡|serverless
openai/v1 and open-webui
Hey Team,
Looking at your docs, and at the question "How to respond to the requests at https://api.runpod.ai/v2/<YOUR ENDPOINT ID>/openai/v1"; I've run into a weird gotcha. When I do a GET ---
it gives me an
Most applications (like open-webui) that use the OpenAI spec expect this to be a GET (see the OpenAI docs: https://platform.openai.com/docs/api-reference/models), and the worker-vllm docs imply that it is: https://github.com/runpod-workers/worker-vllm/tree/main#modifying-your-openai-codebase-to-use-your-deployed-vllm-worker. Am I missing something? How is this supposed to work? Thanks, Paul
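For reference, a minimal sketch of the call an OpenAI-spec client like open-webui makes, using the OpenAI Python client pointed at the RunPod endpoint as described in the worker-vllm README; the endpoint ID and API key below are placeholders:

```python
# Minimal sketch, assuming a deployed worker-vllm serverless endpoint.
# <YOUR_ENDPOINT_ID> and <YOUR_RUNPOD_API_KEY> are placeholders.
from openai import OpenAI

client = OpenAI(
    api_key="<YOUR_RUNPOD_API_KEY>",
    base_url="https://api.runpod.ai/v2/<YOUR_ENDPOINT_ID>/openai/v1",
)

# Per the OpenAI API spec, listing models is a GET to {base_url}/models,
# which is the request open-webui issues when it probes the endpoint.
models = client.models.list()
for model in models.data:
    print(model.id)
```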