How to deploy ML/NLP applications on Railway?
Hey there! I wanted to ask for some advice regarding a QnA system I've developed. The system performs semantic search: it computes embeddings for both the document and the query, calculates cosine similarity, and retrieves the top results. I've also created a Flask server for it. However, when I try deploying it on Railway, the app keeps restarting after sentence tokenization and never reaches the embedding stage. Does anyone have an idea why this is happening?
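For reference, here is a minimal sketch of that kind of pipeline, assuming sentence-transformers and NLTK for sentence tokenization. The model name `all-MiniLM-L6-v2`, the `document.txt` path, and the `/ask` route are illustrative, not necessarily what the original poster used:

```python
# Minimal semantic-search QnA sketch: embed document sentences and the query,
# rank by cosine similarity, and return the top matches via a Flask endpoint.
# Assumes sentence-transformers, NLTK, and Flask; all names are illustrative.
from flask import Flask, request, jsonify
from sentence_transformers import SentenceTransformer, util
import nltk

nltk.download("punkt", quiet=True)

app = Flask(__name__)
model = SentenceTransformer("all-MiniLM-L6-v2")  # compact model, ~90 MB on disk

with open("document.txt") as f:
    sentences = nltk.sent_tokenize(f.read())          # sentence tokenization step
doc_embeddings = model.encode(sentences, convert_to_tensor=True)  # embedding step

@app.route("/ask")
def ask():
    query = request.args.get("q", "")
    query_embedding = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_embedding, doc_embeddings)[0]   # cosine similarity
    top = scores.topk(k=min(3, len(sentences)))                 # top search results
    results = [{"sentence": sentences[int(i)], "score": float(s)}
               for s, i in zip(top.values, top.indices)]
    return jsonify(results)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```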
7 Replies
Project ID:
82b06707-a36b-4880-854d-849fb6f184ed
You might find these helpful:
- Using pytorch
- Has anyone ever deployed typesense to Railway?
- deploy a repository on Railway
Sounds like you're on the free plan and running out of memory
Upgrade to the dev plan and you should be good
Most ML applications will use much more than 512 MB, which is the limit on the free plan
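For what it's worth, if upgrading isn't an immediate option, the embedding step's peak memory can sometimes be trimmed by choosing a compact model and encoding in small batches. A rough sketch, assuming sentence-transformers (the model name and batch size are illustrative, not tuned recommendations):

```python
# Reduce peak memory during embedding: use a compact model and a small
# batch_size so only a few sentences are embedded at a time.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # ~90 MB, vs. multi-GB larger models

def embed_in_batches(sentences, batch_size=8):
    # encode() batches internally; a small batch_size keeps temporary
    # tensors small on a memory-constrained plan.
    return model.encode(sentences, batch_size=batch_size,
                        convert_to_tensor=True, show_progress_bar=False)
```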
Noted, thank you. Is there any other platform I can use for deployment?
You can use Railway, just upgrade to the dev plan
Free plans on 90% of platforms are not going to allow you to host ML workflows
Okay, noted. Thanks