Node + fastapi
Hey guys, I'm really struggling with setting up my backend container (FastAPI, uvicorn, Dockerfile) on Railway. Basically it only works when the container is fully rebuilt and deployed again. It looks like no entrypoint/CMD is detected on a reload/push to the component.
Is there anything I am missing?
(The whole project works perfectly fine on my local machine.)
Also, the deploy logs don't appear at all. The build logs always finish without errors (the last message is "pushed $commit_name"), but around the 10-minute mark everything crashes.
Dockerfile
FROM python:3.9-bullseye
WORKDIR /app
RUN apt-get update && apt-get install -y libgl1-mesa-glx libglib2.0-0 libsm6 libxext6 libxrender-dev tesseract-ocr libtesseract-dev freeglut3-dev tesseract-ocr-all wget
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
EXPOSE 80
ENTRYPOINT [ "/bin/bash", "-c","cd src; uvicorn main:app --host 0.0.0.0 --port 80"]
Project ID:
N/A
try that
Thanks, will be back in a min
Also wanted to mention that the experience with Node (Next.js) was exceptional, launched via Nixpacks in a literal minute. It's complex FastAPI apps that are very hard to get right on unknown hardware without terminal access
you can run docker locally
yeah i was thinking about it this whole day, i was hoping my small adjustments would make it work but i don't think so now
well what's the verdict with the new dockerfile
pushing rn
why tf is it 5gb???
idk, it shouldn't be, but my guess is torch + tesseract + detectron models
i will actually try to shrink the container down locally
btw
full logs please
Container failed to start
=========================
/router.Router/StartDeployment UNKNOWN: Error response from daemon: No such container: cca0ba7790eb46eada48a65a7bbd884ffe666b3366c513fe4837b525f9e7ee78
that's the actual error, everything before was alright i swear
i guess it just didn't provision because of its size
yeah you need to work on cutting the size down significantly
thank you i will be back when (if) i manage to do so
probably tomorrow since it's already night for me
sounds good
good luck!
Thanks !
Hi again, so I validated that my container works as expected locally, its size is 2.5 GB, but on Railway it builds to 3.7 GB for whatever reason. Same problem as before, I guess it just doesn't publish it
last logs before it gets "active" and crashes after a while
Yo just changed torch version and it became 1 gig
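(For context on why the torch version matters: the default torch wheels bundle CUDA libraries and can add several gigabytes on their own, while the CPU-only builds are a fraction of that size. A rough sketch of a slimmer Dockerfile along those lines; the slim base image, the exact torch version, the trimmed system packages, and the index URL are illustrative assumptions, not taken from this thread:)
# Sketch: slimmer image via a slim base image and a CPU-only torch wheel
FROM python:3.9-slim-bullseye
WORKDIR /app
# Install only the runtime system deps and clean the apt lists so they don't bloat the layer
RUN apt-get update && apt-get install -y --no-install-recommends \
    libgl1-mesa-glx libglib2.0-0 tesseract-ocr \
    && rm -rf /var/lib/apt/lists/*
# CPU-only torch wheel (assumed version) is far smaller than the default CUDA build;
# --no-cache-dir keeps pip's download cache out of the image
RUN pip install --no-cache-dir torch==1.13.1+cpu --extra-index-url https://download.pytorch.org/whl/cpu
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY . .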
@Brody i think i got the container to work. Now it's a networking issue. I added my port (80) to the variables, exposed it in the Dockerfile and so on, it's just that my api is still inaccessible (from private or public). I tried http and https (to be sure) but it only gives the "Application failed to respond" page. My api should work via
http://myurl{railway stuff}:80/api/decks
but there is nothing there for any type of request method.
(I have CORS on my FastAPI app with any origin allowed)
Solution
You should not be binding to a port you pick yourself. The Dockerfile gets a new layer added to it with all your service environment variables, including a PORT environment variable. That's what uvicorn should be listening on.
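In Dockerfile terms, that means dropping the hard-coded port 80 and letting uvicorn read the injected PORT variable at container start. A minimal sketch of the relevant lines, assuming the rest of the Dockerfile stays as posted (the WORKDIR change and the 8000 fallback are illustrative assumptions, not from this thread):
# Run from the src directory instead of "cd src" inside the entrypoint
WORKDIR /app/src
# Shell-form CMD so ${PORT} is expanded when the container starts;
# the 8000 fallback is only for running the image locally
CMD uvicorn main:app --host 0.0.0.0 --port ${PORT:-8000}
Because this is the shell form of CMD, Docker runs it through /bin/sh, so the ${PORT} expansion happens at runtime rather than at build time.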
Also, refrain from tagging conductors/team members directly #🛂|readme #5
Oh ok thanks
Anyway I think I will just rewrite everything in js
It’s so much simpler
If you are using FastAPI, you can also check out the template that works on Railway: https://railway.app/template/-NvLj4
This will also copy the repo to your GitHub account so you can modify and redeploy as necessary
Or you can just check out how to get things configured