Unable to call Streaming Response from FastAPI in production

My FastAPI StreamingResponse, served with Hypercorn, works in development but not in production on Railway. The deploy logs show Prisma debug output but stop midway through the function with no error. On the frontend it fails with a 504 because the request just times out. Is there anything unique I should be aware of with streaming responses on Railway?
Project ID: 272293fe-814d-4a92-9d85-82c242f56daa
The API route I am calling is attached.
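The attached route itself is not reproduced in this thread; as a rough sketch, a FastAPI streaming endpoint of this shape typically looks like the following (the request model and the gen_tokens generator are assumptions, not the actual code):

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from pydantic import BaseModel

app = FastAPI()

class Message(BaseModel):
    role: str
    interest_id: str

class GenQueryRequest(BaseModel):
    messages: list[Message]

async def gen_tokens(interest_id: str):
    # Placeholder generator; the real route presumably streams model output.
    for chunk in ("chunk one ", "chunk two ", interest_id):
        yield chunk

@app.post("/api/parcel/genquery")
async def gen_query(req: GenQueryRequest):
    # StreamingResponse sends each chunk as the generator yields it
    return StreamingResponse(gen_tokens(req.messages[0].interest_id), media_type="text/plain")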
Solution:
I figured it out. Disconnecting from the Prisma Query Engine would just freeze the server. I switched from Hypercorn to Uvicorn and now it works!
26 Replies
Percy (3mo ago)
Project ID: 272293fe-814d-4a92-9d85-82c242f56daa
Brody (3mo ago)
this is just SSE right?
Simon 📐🛠 (3mo ago)
Yes, it's via an API call from a Next.js server
Brody (3mo ago)
no issues with SSE on railway - https://utilities.up.railway.app/sse. are you sending SSEs to a client's browser, or something else? need a little more context here
Simon 📐🛠 (3mo ago)
Yes, sorry, I am sending it to a client's browser. They make an API call from the Next.js backend to Railway for this 'gen_query'.
Brody (3mo ago)
where does fastapi come into play with next and a client's browser?
Simon 📐🛠 (3mo ago)
A call from next/api is sent to FastAPI via:

const fetchResponse = await fetch(`${process.env.NODE_ENV !== 'production' ? 'http://127.0.0.1:8000' : 'https://ideally.up.railway.app'}/api/parcel/genquery`, {
  method: 'POST',
  headers: {
    'Accept': 'application/json',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({ "messages": [{ role: "user", interest_id: lotInterestAccess.interest.id }] })
})
The whole route.ts is as follows:

import { NextResponse, NextRequest } from 'next/server'
import { OpenAIStream, StreamingTextResponse } from 'ai'

export const maxDuration = 300;
export const dynamic = 'force-dynamic'; // always run dynamically

// POST /api/
export async function POST(req: NextRequest) {
  const { lotInterestAccess } = await req.json();

  try {
    // const fetchResponse = await fetch(`${process.env.NODE_ENV !== 'production' ? 'http://127.0.0.1:5000' : 'https://ideally-api.up.railway.app'}/ideal/zoneinfo?lotInterestId=${lotInterestAccess.interest.id}&zoneType=${lotInterestAccess.interest.lot.zoneType}&zoneDescription=${lotInterestAccess.interest.lot.zoneDescription}`)
    const fetchResponse = await fetch(`${process.env.NODE_ENV !== 'production' ? 'http://127.0.0.1:8000' : 'https://ideally.up.railway.app'}/api/parcel/genquery`, {
      method: 'POST',
      headers: {
        'Accept': 'application/json',
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ "messages": [{ role: "user", interest_id: lotInterestAccess.interest.id }] })
    })

    // Pass the upstream body straight through to the client as a stream
    return new StreamingTextResponse(fetchResponse.body!);
  } catch (error) {
    console.error(error)
    return NextResponse.json({ error: 'Request failed' }, { status: 500 })
  }
}
Brody (3mo ago)
for testing, cut out the nextjs app and call the public domain of the fastapi service
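A minimal way to do that from Python, mirroring the fetch call above (the interest_id value is a placeholder):

import requests

# Hit the FastAPI service's public domain directly, bypassing Next.js,
# and read the body as it streams instead of buffering it all.
resp = requests.post(
    "https://ideally.up.railway.app/api/parcel/genquery",
    json={"messages": [{"role": "user", "interest_id": "placeholder-id"}]},
    stream=True,
    timeout=300,
)
resp.raise_for_status()
for chunk in resp.iter_content(chunk_size=None, decode_unicode=True):
    print(chunk, end="", flush=True)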
Simon 📐🛠 (3mo ago)
Okay, will do. I have tested several different ways to make API calls, but it seems once it hits one error or warning it stalls and I can't call it again... I thought it was maybe a Hypercorn thing
Brody (3mo ago)
this is no doubt a code or config issue, it's just a question of where
Simon 📐🛠 (3mo ago)
What is the best way of logging on Railway during API calls?
Brody (3mo ago)
json structured logs would be best
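One stdlib-only sketch of that (structlog or python-json-logger would also work; that Railway's log view picks up the level field from JSON lines is the assumption here):

import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Render each record as a single JSON object per line."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname.lower(),
            "message": record.getMessage(),
            "logger": record.name,
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.DEBUG, handlers=[handler])

logging.getLogger(__name__).info("request received")  # placeholder message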
Simon 📐🛠 (3mo ago)
Okay, I'll try it out, thanks! How come debugging in Deploy Logs is highlighted red with level: "error" and really no other information besides this? I get that this means it's printing to stderr
Brody (3mo ago)
are you doing json logging?
Simon 📐🛠 (3mo ago)
A lot of it is print(). Should I use 'structlog', or is there a preference on Railway?
Brody (3mo ago)
if you are just using print, what other information would you expect to be printed besides your message?
Simon 📐🛠 (3mo ago)
I was just confused as to why it 'errored' when printing to stderr. The main problem is that I am struggling to work out how to debug this issue, because all I get is a FUNCTION_INVOCATION_TIMEOUT when I make calls in production, while in development no errors come up and it works fine. What would be the best way to debug this?
Brody (3mo ago)
adding verbose debug logging. you are finding it hard to debug because you do not have the level of observability into your code that you need
Simon 📐🛠 (3mo ago)
Ok so I added:

import logging
import sys

# basicConfig already attaches a stdout handler; adding a second
# StreamHandler on top of it would duplicate every log line.
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)

Which offers plenty of system info during deploy, although the debug logging stops displaying once Prisma is disconnected. After that, nothing (there should be callbacks logged at this point). If I try to make any further requests, no debug output is displayed at all.
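For context, with Prisma Client Python the connect/disconnect is often wired through the app lifespan; a hedged sketch, since the actual code here may connect and disconnect per request instead:

from contextlib import asynccontextmanager
from fastapi import FastAPI
from prisma import Prisma

prisma = Prisma()

@asynccontextmanager
async def lifespan(app: FastAPI):
    await prisma.connect()
    yield
    # The disconnect is the step after which the logs above went silent
    await prisma.disconnect()

app = FastAPI(lifespan=lifespan)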
Brody (3mo ago)
are you making sure to log unbuffered?
Simon 📐🛠 (3mo ago)
How do I do that?
Brody (3mo ago)
you would need to reference the logging / python docs for that
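In short, a few common options, sketched (the print message is a placeholder):

import sys

# Option 1: run the process unbuffered, e.g. `python -u main.py`
# or set PYTHONUNBUFFERED=1 on the service.

# Option 2: force line buffering on stdout from inside the app.
sys.stdout.reconfigure(line_buffering=True)

# Option 3: flush explicitly on each print.
print("before prisma disconnect", flush=True)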
Solution
Simon 📐🛠 (3mo ago)
I figured it out. Disconnecting from the Prisma Query Engine would just freeze the server. I switched from Hypercorn to Uvicorn and now it works!
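For anyone landing here later, the switch amounts to changing the start command; a minimal sketch, assuming the FastAPI app is exposed as app in main.py:

import os
import uvicorn

if __name__ == "__main__":
    # Replaces a Hypercorn start command such as `hypercorn main:app`.
    # Railway injects PORT; fall back to 8000 for local runs.
    uvicorn.run("main:app", host="0.0.0.0", port=int(os.environ.get("PORT", "8000")))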
Brody (3mo ago)
awesome, glad to hear it
Simon 📐🛠 (3mo ago)
thanks for the support
Brody (3mo ago)
no problem!