Streaming response to client

I have an API hosted on Railway that uses FastAPI and WebSockets to talk to the client. The output is a streaming response like this:

```python
for bot_response in return_message:
    await websocket.send_text(f"{bot_response}")
```

This works fine when hosted locally: the client receives the response as it is generated. However, when hosted on Railway, the response is released all at once. I can't tell if this is just a latency issue or if there are some configurations I can change, because the response does send the individual text segments; it just waits a while and then sends them all very quickly.
23 Replies
Percy
Percy2y ago
Project ID: 751c1a51-c832-4e25-a6d3-fc447398cba7
austinm
austinmOP2y ago
751c1a51-c832-4e25-a6d3-fc447398cba7
Brody
Brody2y ago
send your requirements.txt file please
austinm
austinmOP2y ago
```
aiohttp==3.8.4
aiosignal==1.3.1
anyio==3.7.0
async-timeout==4.0.2
attrs==23.1.0
certifi==2023.5.7
charset-normalizer==3.1.0
click==8.1.3
colorama==0.4.6
dataclasses-json==0.5.7
dnspython==2.3.0
exceptiongroup==1.1.1
fastapi==0.96.0
filelock==3.12.0
frozenlist==1.3.3
fsspec==2023.5.0
greenlet==2.0.2
h11==0.14.0
huggingface-hub==0.15.1
idna==3.4
Jinja2==3.1.2
joblib==1.2.0
langchain==0.0.193
langchainplus-sdk==0.0.4
llama-index==0.6.21.post1
loguru==0.7.0
MarkupSafe==2.1.3
marshmallow==3.19.0
marshmallow-enum==1.5.1
mpmath==1.3.0
multidict==6.0.4
mypy-extensions==1.0.0
networkx==3.1
nltk==3.8.1
numexpr==2.8.4
numpy==1.24.3
openai==0.27.8
openapi-schema-pydantic==1.2.4
packaging==23.1
pandas==2.0.2
Pillow==9.5.0
pinecone-client==2.2.2
pydantic==1.10.9
pymongo==4.3.3
python-dateutil==2.8.2
python-multipart==0.0.6
pytz==2023.3
PyYAML==6.0
regex==2023.6.3
requests==2.31.0
scikit-learn==1.2.2
scipy==1.10.1
sentence-transformers==2.2.2
sentencepiece==0.1.99
six==1.16.0
sniffio==1.3.0
SQLAlchemy==2.0.15
starlette==0.27.0
sympy==1.12
tenacity==8.2.2
threadpoolctl==3.1.0
tiktoken==0.4.0
tokenizers==0.13.3
torch==2.0.1
torchvision==0.15.2
tqdm==4.65.0
transformers==4.29.2
typing-inspect==0.8.0
typing_extensions==4.5.0
tzdata==2023.3
urllib3==1.26.16
uvicorn==0.22.0
websockets==11.0.3
win32-setctime==1.1.0
yarl==1.9.2
```
Brody
Brody2y ago
did you pip freeze your entire system packages into a file?
austinm
austinmOP2y ago
no, I am hosting from GitHub; my requirements.txt is just stored there
Brody
Brody2y ago
that's a lot of packages
austinm
austinmOP2y ago
yup
Brody
Brody2y ago
are you using all of them
austinm
austinmOP2y ago
yeah
Brody
Brody2y ago
are you sure?
austinm
austinmOP2y ago
haha yes, 95% sure. Do you think the streaming issue is connected to this?
Brody
Brody2y ago
I have seen this issue before, and it was solved by deleting a single unused package, but I don't see such a package here
austinm
austinmOP2y ago
Interesting. I cut down requirements as much as possible. Do you have a reference to that past issue? Also, it works when hosted locally; I'm not sure why a package would change behavior when hosted remotely.
Brody
Brody2y ago
because Railway uses uvicorn to run your app, but I doubt you use that locally
austinm
austinmOP2y ago
I did use uvicorn locally
Brody
Brody2y ago
what start command do you use to start with uvicorn locally?
austinm
austinmOP2y ago
`uvicorn main:app --reload`
Brody
Brody2y ago
and what start command have you told Railway to use?
austinm
austinmOP2y ago
`uvicorn main:app --host 0.0.0.0 --port $PORT` — I just used the Railway FastAPI template, which has that in the railway.json
Brody
Brody2y ago
try playing around with the different WebSocket implementations uvicorn offers via the --ws flag: https://www.uvicorn.org/#command-line-options
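Concretely, that suggestion amounts to adjusting the start command like this (a sketch only; whether a particular implementation fixes the buffering on Railway is untested):

```shell
# Pin a specific WebSocket implementation instead of uvicorn's auto-detection.
# "websockets" uses the websockets library already in requirements.txt;
# "wsproto" requires the wsproto package to be installed.
uvicorn main:app --host 0.0.0.0 --port $PORT --ws websockets
# or:
uvicorn main:app --host 0.0.0.0 --port $PORT --ws wsproto
```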
austinm
austinmOP2y ago
alright, I will. Thanks for the help and the quick responses
Brody
Brody2y ago
no problem