My Celery worker is not connecting to the Redis broker.
code in settings.py:
from os import getenv

CELERY_RESULT_BACKEND = getenv('REDIS_URL')
CELERY_BROKER_URL = getenv('REDIS_URL')
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
celery.py (sibling of settings.py):
import os

from celery import Celery

# point Celery at the Django settings module before the app is configured
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'core.settings')

app = Celery('core')
# read every setting prefixed with CELERY_ from Django settings
app.config_from_object('django.conf:settings', namespace='CELERY')
# find tasks.py modules in all installed apps
app.autodiscover_tasks()

@app.task(bind=True, ignore_result=True)
def debug_task(self):
    print(f'Request: {self.request!r}')
__init__.py (sibling of settings.py):
from .celery import app as celery_app
__all__ = ('celery_app',)
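For context, app.autodiscover_tasks() picks up a tasks.py module from each app in INSTALLED_APPS, so the worker only has something to run if such a module exists. A minimal hypothetical example (myapp and add are made-up names):

# myapp/tasks.py -- hypothetical; discovered automatically by app.autodiscover_tasks()
from celery import shared_task

@shared_task
def add(x, y):
    # executes on the Celery worker process, not in the web process
    return x + y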
Project ID: 286dccfe-bf85-48fe-ae80-ab24c5a80716
are you using the private network?
I think yes
right but are you actually utilizing it
How do I do that??
I have set the environment variable like this
REDIS_URL=${{Redis.REDIS_URL}}
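${{Redis.REDIS_URL}} is a Railway reference variable, so at deploy time it should resolve to the Redis service's connection URL and show up in the container as the REDIS_URL environment variable that settings.py reads with getenv. A quick sketch for checking that from the app's environment (assumes the redis Python package is installed; just for debugging):

# sketch: confirm REDIS_URL is actually set and the broker is reachable
import os
import redis

url = os.environ.get('REDIS_URL')
print('REDIS_URL =', url)
if url:
    print('ping:', redis.Redis.from_url(url).ping())  # True means the broker answered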
disable private networking, and then redeploy
do I need to re-enable it??
no
I disabled it. But it didn't work
it's the same as before
can you connect to redis with redis-cli?
yes I can connect
can you show me how you have disabled private networking?
like this
can you show me a screenshot of your service variables
sure
I copy-pasted the Redis URL from the Redis connection section
Initially I had added it like this: ${{Redis.REDIS_URL}}
this is what it should be, please put it back to that
but it wasn't working then
I can try again
similar errors
Here is my Dockerfile
FROM python:3.10

ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

WORKDIR /app
COPY . /app/

# system packages needed to build the pdftotext (poppler) bindings, plus supervisord
RUN apt-get update && apt-get install -y \
    build-essential \
    libpoppler-cpp-dev \
    pkg-config \
    python3-dev \
    supervisor \
    && apt-get clean

RUN pip install --no-cache-dir -r requirements.txt

# run the app as an unprivileged user
RUN useradd -ms /bin/bash myuser
RUN chown -R myuser:myuser /app/

COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf

USER myuser

CMD python manage.py collectstatic --noinput && python manage.py migrate && supervisord -c /etc/supervisor/conf.d/supervisord.conf
here is the supervisord.conf
[supervisord]
; run in the foreground so the container doesn't exit
nodaemon=true

[program:celery]
command=celery -A core worker --loglevel=info
; send program output to the container's stdout/stderr so the platform collects the logs
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[program:gunicorn]
command=gunicorn core.wsgi:application
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
do you think I have made any mistake in these files?
okay, my recommendation going forward would be to use nixpacks and two railway services, one for gunicorn and one for celery
so get rid of the dockerfile, add another railway service to your project, in the service settings set the start command to the command needed for celery, and in the current service have a Procfile with the gunicorn start command
so much simpler without a dependency on supervisord
OK
let me try that
I could break it down into discrete steps at a high level if that would be slightly more helpful? But keep in mind I haven't done this exact thing, so I only have a high-level understanding of the general steps you need to take, like the ones I've previously given
Let me try. If I fail I will ask you questions here. Is that okay?
for sure!
might be worth mentioning that both of these services should be setup to deploy from the same repo and same branch
I need to run
apt-get update && apt-get install -y \
build-essential \
libpoppler-cpp-dev \
pkg-config \
python3-dev \
&& apt-get clean
before I install the packages from requirements.txt. How can I achieve that?
here is the Procfile:
web: python manage.py collectstatic --noinput && python manage.py migrate && gunicorn core.wsgi:application
worker: celery -A core worker --loglevel=info
are you building with a dockerfile?
no procfile
like you had suggested
gotcha
railway does not support a worker process in the procfile
do you have two railway services in your project?
yes
and do you have custom start commands set in each service?
no
deployment failed in both
as some of my packages are dependent on
apt-get update && apt-get install -y \
build-essential \
libpoppler-cpp-dev \
pkg-config \
python3-dev \
&& apt-get clean
this
are both of your services deploying from the same repo
I don't have a custom start command in either service
you said that already lol
Yes
do both of these services have the same service variables
They weren't the same. I made them the same, and I'm redeploying.
but it's good that they are the same now
They are the exact same now
remove the worker line in your procfile
in the service that you want to have run celery set a custom start command in the service settings
in the service that you want to run django, dont set a start command, the web command from the procfile will be used as the start command
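so the end state would look roughly like this (a sketch, not your exact files):

Procfile in the Django service:
web: python manage.py collectstatic --noinput && python manage.py migrate && gunicorn core.wsgi:application

Custom start command on the Celery service (set in the Railway service settings, no Procfile entry needed):
celery -A core worker --loglevel=info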
removed the worker line from the procfile
deploying right now
show me your new procfile, and show me what start command you set in the celery service please
I haven't added the start command
I have removed the worker command from the procfile
What should be the start command?
the command you use to start celery
celery -A core worker --loglevel=info
this??
is that the command you use to start celery?
Yes
Yes
then yes
show me what start command you set in the celery service?
for django app
for celery
looks good to me so far
now to install those apt packages
add a nixpacks.toml file to your project with this in it
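something along these lines (a sketch, not necessarily the exact file from the thread; double-check the package names against what the Dockerfile installed):

# nixpacks.toml
[phases.setup]
aptPkgs = ["build-essential", "libpoppler-cpp-dev", "pkg-config", "python3-dev"]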
okay
does it look okay?
yes
Okay deploying
the build failed
remove the build command from the django service
this is the celery service, it did not have the command but still failed
removed
lets focus on one thing at a time, if the django service still fails to build, send me the logs
okay
it failed
are you using pdftotext?
Yes
for this package I need the apt packages
well, as silly as it may sound, I think we should go back to your single service and Dockerfile; nixpacks isn't properly installing those apt packages
Okay.
doing it right away
sorry to ask you to do all that and then just ask you to go back to what you had
No, thank you very much for helping out a noobie like me
here is the Dockerfile and supervisord.conf
Should I deploy it?
your CMD command should only start supervisord; supervisord will then take care of starting celery and django
Yes
so please fix that
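something like this (a sketch of just the CMD line):

# supervisord becomes the only thing CMD starts; it then launches gunicorn and celery
CMD ["supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]

collectstatic and migrate then have to move into the gunicorn program inside supervisord.conf, which is what the shell-wrapped command further down takes care of.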
How is it now??
looks good
is it better than the last one?
you want to run all the gunicorn related commands in the same program
what you had before this was good
okay
deploying
deployment done but can't access the app
were there any errors?
unable to download the logs from Observability using the bookmark file
download the logs from the service
it also doesn't look like supervisord supports && in the command, so...
use the same supervisord.conf and dockerfile you showed here
if I don't sound like I know what I'm doing, it's because I don't
here are the files I just deployed
after asking chatgpt
wrapping it in a shell string should certainly work
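for reference, the shell-wrapped version would look something like this (a sketch of just the gunicorn program):

[program:gunicorn]
; bash -c lets supervisord run an && chain as a single command
command=bash -c "python manage.py collectstatic --noinput && python manage.py migrate && gunicorn core.wsgi:application"
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0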
Yes it worked
I can see the admin page
any errors?
but I don't see the CSS applied to the admin site
when you deploy to railway do you have debug set to false?
debug is false in production on railway
I haven't set any env variables in railway for debug
then other than that there would be a config issue somewhere
Okay, but that's not the main issue. When I call the API endpoint I get a 202 response as expected, but I'm getting this error when the background task runs.
that's more so looking like a code issue
I do have a thought though, how big are these audio files?
100kb or less
okay then yeah this is a code issue
gpt suggested this
I don't know what that would do but it might be worth a try
now it works as expected. I can't thank you enough for your help and patience
did you figure out the missing css issue?
no
it's a common issue, maybe gpt can suggest a fix for that too?
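for what it's worth, the usual cause is that with DEBUG=False Django stops serving static files itself; one common fix (not something tried in this thread, so treat it as a hedged sketch) is WhiteNoise:

# settings.py -- sketch, assumes the whitenoise package is added to requirements.txt
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'whitenoise.middleware.WhiteNoiseMiddleware',  # directly after SecurityMiddleware
    # ... the rest of the existing middleware ...
]
STATIC_ROOT = BASE_DIR / 'staticfiles'  # collectstatic already runs in the start command
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'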
I guess
so I will try it
Thanks a lot
let me know how that goes
sure