that's exactly right, but actually just trying it again, I can't replicate this. I've tried a bunch of different things and FINALLY narrowed it down. The problem is in the messages array: when passing an array to content instead of a string, it doesn't work. Here's my code as an example:

Doesn't work:
const stream = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  stream: true,
  response_format: { type: "json_object" },
  messages: [
    {
      role: "user",
      content: [
        {
          type: "text",
          text: query
        },
        {
          type: "image_url",
          image_url: {
            url: image
          }
        }
      ]
    }
  ],
});
Works:
const stream = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  stream: true,
  response_format: { type: "json_object" },
  messages: [
    {
      role: "user",
      content: query
    }
  ],
});
The query variable is a string in both cases. So I guess Cloudflare would just need to add support for an array in this field
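To make the difference concrete, here's a small sketch contrasting the two message shapes from the examples above. `buildMessages` is a hypothetical helper (not part of any SDK): with an image it returns the array-style content that currently fails through the gateway, otherwise the plain-string content that works.

```javascript
// Hypothetical helper showing the two message shapes from the
// examples above. With an image, content is an array of parts
// (the shape that fails through the gateway); without one, it is
// a plain string (the shape that works).
function buildMessages(query, image) {
  if (image) {
    return [
      {
        role: "user",
        content: [
          { type: "text", text: query },
          { type: "image_url", image_url: { url: image } },
        ],
      },
    ];
  }
  return [{ role: "user", content: query }];
}
```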
15 Replies
Kathy
Kathy•4mo ago
ok so it's not actually a gateway vs gateway problem. it's a "messages content is an array instead of a string" problem
Bash
BashOP•4mo ago
yeahhhh I was so confused for a long time. Sometimes it's easy to miss small things like that 😅
Kathy
Kathy•4mo ago
thanks for surfacing!
ben_makes_stuff
ben_makes_stuff•4mo ago
Is there currently a way to tell which model executed when using the universal endpoint? Some kind of response header I can retrieve? Use case: I log all requests in my database and I want to record the provider and the model used for each request. Don't want to have to call the log endpoint to retrieve logs before writing the entry to my DB
Kathy
Kathy•4mo ago
hey there - exactly as you mentioned, today this info is exposed via API. Can you elaborate on your use case so we can understand why the api method doesn't work for you?
ben_makes_stuff
ben_makes_stuff•4mo ago
interesting, maybe I just missed a response prop - I thought it was only displaying the final output without saying which model executed. Will take another look actually, are there docs for this anywhere? I checked the official REST api spec but it seems to only have list, search, etc endpoints. The universal endpoint response isn't documented from what I can see, so I can't tell if it will work or not
Oblomki
Oblomki•4mo ago
@ben_makes_stuff @Kathy | AI Gateway PM I had (have) the same question. Unless something has changed in the few days since I last looked, I didn't notice anything to tell me which provider/model was used if using the Universal Endpoint with a fallback. Since each provider has a unique JSON structure, I need to know which provider was used in order to parse the output properly. My second question was, following on from the first, since the location of the response is different with each provider, is there no option to normalise output, i.e. have the response at (say) response.content regardless of the underlying provider?
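A rough sketch of why the provider matters for parsing, as described above: each provider puts the text in a different place, so the caller has to branch on which provider actually handled the request. The field paths below are illustrative (OpenAI/Groq-style `choices[0].message.content` vs. an Anthropic-style `content[0].text`); treat the exact shapes as assumptions.

```javascript
// Illustrative only: response shapes differ per provider, so without
// knowing which provider the universal endpoint fell back to, there is
// no single place to read the answer from.
function extractText(provider, body) {
  switch (provider) {
    case "openai":
    case "groq": // Groq follows the OpenAI response spec
      return body.choices[0].message.content;
    case "anthropic": // Anthropic-style shape, assumed here for contrast
      return body.content[0].text;
    default:
      throw new Error(`Unknown provider: ${provider}`);
  }
}
```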
ben_makes_stuff
ben_makes_stuff•4mo ago
Yep, exactly this. I'm going to run a fallback with OpenAI -> Groq, so the part about the response shape doesn't matter to me (it's the same with Groq, they use the OpenAI spec for all requests), but for logging purposes I still want to know which provider was invoked so I can see how the quality of the output degrades, if at all, when using different providers
nicolasmnl
nicolasmnl•4mo ago
Hi, guys! Does AI Gateway support caching requests for audio models like Whisper from OpenAI? Tried it here and it didn't work, so I'm not sure if it does or if I'm doing something wrong :/ Didn't find any docs talking about this either. Thank you!
dave
dave•4mo ago
Hmm, I think these cost calculations might be a littttttttle off lol
cosbgn
cosbgn•4mo ago
Does the gateway support OpenAI Assistants endpoints?
Kathy
Kathy•4mo ago
@Oblomki @ben_makes_stuff we're working on UI updates that would show the fallback model # in our logs UI, but it sounds like you would need it in the response, not just the UI. Would this example work for y'all?
- You've set up a fallback with OpenAI > Groq.
- We pass a custom header, ex: cf-aig-universal-step
- If cf-aig-universal-step = 1, you know it's OpenAI. If = 2, you know it's Groq.

@nicolasmnl hey! thanks for pointing this out. caching currently only works for text and image responses. Updating docs with your feedback to clarify - ty
ben_makes_stuff
ben_makes_stuff•4mo ago
1. The approach of including a custom header seems fine.
2. A step number would be "okay" but not ideal, because I'd have to write custom logic to convert from a number to a name. It's not difficult logic to write, but it is an extra... step 😎

It would be great if you could return the provider name instead of a step number; if not, the step number thing technically works. I'd normally expect output like "openai" or "groq" instead of step1, step2.

Most ideal: provider name (i.e. groq, openai)
Second most ideal: provider API base URL (i.e. api.openai.com or api.groq.com)
Third most ideal: step number (1, 2)
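The "extra step" being described might look like this. It assumes the cf-aig-universal-step header and 1 = OpenAI / 2 = Groq mapping from Kathy's example above; the mapping function itself is hypothetical and has to be kept in sync with the caller's own fallback config.

```javascript
// Hypothetical mapping from the proposed cf-aig-universal-step header
// value to a provider name, for an OpenAI > Groq fallback chain.
// The order here mirrors the gateway's configured fallback order and
// must be maintained by hand.
const FALLBACK_CHAIN = ["openai", "groq"];

function providerFromStep(headerValue) {
  const step = Number(headerValue); // header values arrive as strings
  const provider = FALLBACK_CHAIN[step - 1]; // step 1 = first provider
  if (!provider) throw new Error(`Unexpected step: ${headerValue}`);
  return provider;
}
```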
nicolasmnl
nicolasmnl•4mo ago
Thank you Kathy! Have a nice day!
Kathy
Kathy•4mo ago
so we are discussing, but just passing provider wouldn't fit all cases, as fallbacks can change three things: 1) provider, 2) model, 3) prompt. So for example, if someone were using Workers AI llama > Workers AI mistral, just passing provider wouldn't work, but step would.

also, the info being passed through the API I was referring to earlier is https://developers.cloudflare.com/api/operations/aig-config-list-gateway-logs which passes model and provider in the response. Curious why that wouldn't work?
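For reference, reading provider and model back out of the gateway logs API mentioned above might look roughly like this. The endpoint path, auth header, and exact response fields (`result[].provider`, `result[].model`) are assumptions based on common Cloudflare API conventions, not confirmed from the docs.

```javascript
// Rough sketch: pull provider/model per request from the gateway logs
// API response. Assumes the Cloudflare convention of wrapping payloads
// in a `result` array; field names are assumptions.
function extractProviderModel(apiResponse) {
  return apiResponse.result.map((log) => ({
    provider: log.provider,
    model: log.model,
  }));
}

// Hypothetical usage (accountId, gatewayId, apiToken are placeholders):
// const res = await fetch(
//   `https://api.cloudflare.com/client/v4/accounts/${accountId}/ai-gateway/gateways/${gatewayId}/logs`,
//   { headers: { Authorization: `Bearer ${apiToken}` } }
// );
// const entries = extractProviderModel(await res.json());
```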