Webhook Issue - Getting 503 Error
I have been working with the WhatsApp Business API and set up a webhook using a Flask server to receive user message events. Everything was working fine in my local development environment. However, when I deployed my code on Railway, I started encountering a 503 error, and it seems like the requests are not reaching my server.
I have prior experience with Railway, and this is the first time I have encountered this problem.
1. I have confirmed that the webhook URL in my WhatsApp Business API settings points to the correct Railway deployment URL.
2. I have checked for any logs or error messages in the Railway dashboard, but there doesn't seem to be any relevant information.
3. I have also tested the webhook with ngrok locally, and it works without any issues.
main.py
from concurrent.futures import ThreadPoolExecutor
from flask import Flask, jsonify, request

app = Flask(__name__)
executor = ThreadPoolExecutor()

# async_task (the background handler) is defined elsewhere in the project and not shared here

@app.route('/catalyst', methods=['POST', 'GET'])
def process():
    data = request.json
    response_data = {'success': True}
    # print(data)
    executor.submit(async_task, data)
    return jsonify(response_data), 200

if __name__ == '__main__':
    from waitress import serve
    serve(app)
    # app.run(debug=True, port=4000)
Project ID:
3a78f362-9ecd-415e-9f22-d31c21f0ad0e
you are hard-coding port=4000
use the Railway PORT env variable
try this @rs
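The snippet suggested here isn't preserved in the thread; presumably it read Railway's PORT variable and passed it to app.run, roughly like the sketch below (which is what the next reply pushes back on):
import os

if __name__ == '__main__':
    # Railway injects the port to listen on via the PORT environment variable
    port = int(os.environ.get('PORT', 4000))
    app.run(host='0.0.0.0', port=port)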
Morpheus, while that would likely work, app.run starts a development server, and you don't want to run a development server on Railway. Please guide them to using gunicorn
I tried the above solution and I am still getting a 503 error.
And I use the code below in production; since I was debugging the server, I removed these lines
if __name__ == '__main__':
    from waitress import serve
    serve(app)
    # app.run(debug=True, port=4000)
you should not comment that out, please put the code back to how you had it in your original question
do you have a Procfile?
No
Never needed it. I have deployed the same code before
add
gunicorn==21.2.0
to your requirements.txt
add a Procfile to your project with this in it
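The Procfile contents aren't preserved in this thread; for a Flask app whose app object lives in main.py, it would conventionally be a single line along these lines (main:app and the explicit bind are assumptions based on the code shown above, not the file that was actually posted):
web: gunicorn main:app --bind 0.0.0.0:$PORT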
Sure, I am trying this right away, will let you know. Thanks
Still getting 503 error
deployment logs please
also, are you building with a dockerfile?
No, just a simple flask server to receive the payload
looks like your app is locking up?
does it ever pass that unzipping stage
No
what is nltk data
I guess it is a memory consumption issue; I will have a word with my team and cross-check this
what do the memory metrics look like
you have 8 GB, and if you ran into that limit you would see a clear indication that your app crashed
well, you're on the Hobby plan, so using that much memory isn't a problem; to me this looks like the unzipping process errored out and soft-locked your app
does the unzipping process happen without issue when run locally?
NLTK is downloading and installing a data package called "punkt" for common NL processing
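For reference, that step is a one-off download that unpacks a zip into an nltk_data directory. A minimal sketch that makes the location explicit (the project-local download_dir here is just an illustrative choice, not something from this thread):
import nltk

# Download the punkt models into a project-local directory and tell NLTK where to look;
# by default the zip is unpacked under a home-directory nltk_data folder instead.
nltk.download('punkt', download_dir='./nltk_data')
nltk.data.path.append('./nltk_data')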
Yes, the program is working without any issues in the local env
does the unzipping library nltk is using require some kind of system library that may be missing on Railway?
this is a log from the previous month, where I deployed the same code and it worked. Just for reference
okay, that's very good info
add a railway.toml file to your project with this in it
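The railway.toml contents aren't shown in the thread; a minimal config-as-code file for this kind of setup might look something like the sketch below (the values are hypothetical, not the file that was actually suggested):
[deploy]
# Hypothetical contents; the file actually suggested in this thread isn't preserved.
startCommand = "gunicorn main:app"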
as far as I know, NLTK does not require any system libraries for unzipping the data packages it downloads.
sure
how hard would it be for you to make a skeleton project that just uses nltk to download punkt? this would be useful to try to debug this issue further on Railway's side
Oh, I can create a simple project that performs a simple NLTK task. That would be a good idea
yes that would be much appreciated
on it
but do let me know if that railway.toml gets your app working again
Yes, the deployment is in progress
Nope, still getting 503 error
stuck on unzipping?
import nltk
nltk.download('punkt')

import nltk.data
# 'corpora' may not exist when only punkt (a tokenizer) has been downloaded,
# so look up the punkt resource itself
data_dir = nltk.data.find('tokenizers/punkt')
print(f'NLTK punkt data directory: {data_dir}')
import nltk
nltk.download('punkt')
from nltk.tokenize import word_tokenize, sent_tokenize
# Sample text
text = "NLTK is a leading platform for building Python programs to work with human language data. It provides easy-to-use interfaces to over 50 corpora and lexical resources such as WordNet."
# Tokenize the text into words
words = word_tokenize(text)
# Tokenize the text into sentences
sentences = sent_tokenize(text)
# Print the tokenized words and sentences
print("Tokenized Words:")
print(words)
print("\nTokenized Sentences:")
print(sentences)
what do the logs of your app look like? let's come back to the example code later
Nope, it continues processing after the unzipping. Those are two different code bases, by the way.
for the app, I don't think unzipping is the issue
it gets stuck on something
If I use the DEBUG mode, the server does start but doesn't receive any requests.
logs please
you are using a development server, do you have the Procfile in your project?
Yes, I just commented out the serve(app) line in the code to see if NLTK is the issue. This project has all the files mentioned above: railway.toml, Procfile
can you share your repo?
I'm developing this for a client, and my team also has access to the repository. Sharing the entire repository is challenging for me.
okay no worries, can i just see a screenshot of the repo then please
The code I provided earlier is the code for main.py. Unfortunately, I cannot share the code for the async_task function
so don't do that import inside your main block
also, as Brody said, Flask with app.run works, of course
but Flask is synchronous, not asynchronous like FastAPI
and to run it under something other than the built-in app server
you can pick one of gunicorn, uvicorn, or daphne
all will work and are fine
just import one of those and initialize the server through that library in your main block instead of calling app directly
uvicorn^
Morpheus I really do appreciate your help, but you always complicate things
this is too much information to take in at once, there is likely some far simpler issue at hand, like Procfile not having a capital P
can I please have a screenshot of your github repo?
show me the build table at the top of the build logs please
do you have a start command set in your service settings?
if so, remove it
It didn't, I added it a few deployments back
Will remove and try again
haha sorry, it's just how my brain works
can't help it
KISS
Changed the library to asyncio, and I am able to receive the requests now. Thank you so much for the help, appreciate it
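The exact change isn't shown in the thread; one sketch of handing the payload to asyncio from a Flask route (assuming async_task was rewritten as a coroutine; this is an illustration, not the author's actual code) is to run an event loop in a background thread:
import asyncio
import threading

# A long-lived event loop running in a daemon thread, started once at import time
background_loop = asyncio.new_event_loop()
threading.Thread(target=background_loop.run_forever, daemon=True).start()

def submit_async(coro):
    # Schedule a coroutine on the background loop without blocking the Flask request
    return asyncio.run_coroutine_threadsafe(coro, background_loop)

# In the route, instead of executor.submit(async_task, data):
#     submit_async(async_task(data))   # assumes async_task is now `async def`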
for my sanity, can you please send the build table for your latest build please