help with env config in railway
hi, I'm trying to deploy my app on Railway. I use:
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.getenv('PGDATABASE'),
        'USER': os.getenv('PGUSER'),
        'PASSWORD': os.getenv('PGPASSWORD'),
        'HOST': os.getenv('PGHOST'),
        'PORT': os.getenv('PGPORT', '5432'),
    }
}
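For reference, the settings block above reads these environment variables, so the Railway service would need variables along these lines. This is a sketch: the right-hand side uses Railway's reference-variable syntax (the same `${{Service.VAR}}` form that appears later in this thread) and assumes the database service in the project is named `Postgres`.

```shell
# Hypothetical service variables; assumes a Railway Postgres service
# named "Postgres" in the same project.
PGDATABASE=${{Postgres.PGDATABASE}}
PGUSER=${{Postgres.PGUSER}}
PGPASSWORD=${{Postgres.PGPASSWORD}}
PGHOST=${{Postgres.PGHOST}}
PGPORT=${{Postgres.PGPORT}}
```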
and I created 9 service variables, but it doesn't work
Full build and deploy logs please - https://bookmarklets.up.railway.app/log-downloader/
sorry Brody, I could fix that, but how can I configure Flower?
sorry, I don't know what flower is, I'll need more context
I'm asking about the Flower Python library in my Django project. It uses port 5555 and has this configuration in the Procfile that I use in my project:
web: gunicorn vicentEcommerces.wsgi:application --bind 0.0.0.0:$PORT --workers 6
worker: celery -A vicentEcommerces worker --loglevel=info --concurrency=6
flower: celery -A vicentEcommerces flower --port=5555 --broker=${REDIS_URL} --basic_auth=username:password
In my project, I am using Celery and Redis, which are already configured and working. However, I want to add more workers because one is not enough. To visualize them, I was planning to use Flower, but I'm open to any recommendations you might have.
😫
ah gotcha.
you have 3 processes in that Procfile, railway does it differently than heroku.
for each process in your Procfile you will need a separate railway service.
each of these 3 services will deploy from the same GitHub repo.
each of these 3 services will have the same variables.
the only difference between these 3 railway services would be the start command you set in their service settings.
on one service you will set the web command, another service the worker command, and the last service the flower command.
all of these services including any databases you may have should all be deployed into the same railway project.
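Concretely, stripping the process-name prefixes from the Procfile above, the three start commands would look roughly like this (a sketch; the `username:password` auth values are the placeholders from the Procfile, not real credentials):

```shell
# start command for the "web" service
gunicorn vicentEcommerces.wsgi:application --bind 0.0.0.0:$PORT --workers 6

# start command for the "worker" service
celery -A vicentEcommerces worker --loglevel=info --concurrency=6

# start command for the "flower" service
celery -A vicentEcommerces flower --port=5555 --broker=${REDIS_URL} --basic_auth=username:password
```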
Would I then have to create a new branch for each Procfile?
nope, you don't need a Procfile with that method, please read my message again
sorry, I am from Chile and my English is so-so. I will try that, but you said the only difference between these 3 railway services would be the start command you set in their service settings.
so a Procfile is not needed?
feel free to translate to your language!
yes, a Procfile is not needed with the steps I went over
ok Brody, I will try that now
I did that but the 3 apps crashed
in the worker I put in Custom Start Command: worker: celery -A vicentEcommerces worker --loglevel=info --concurrency=3
in the flower app, in the custom start command I put: flower: celery -A vicentEcommerces flower
you should not include the worker: or flower: parts of the Procfile in the start commands
it works, thx, now all services are running
the last thing: when I try www.myurl:5555, Flower doesn't show anything
for flower the start command should be -
please note I have removed the hardcoded username and password, please define them as service variables now
so the port is 5555?
it needs to listen on the auto-assigned $PORT variable
I created the variables password, user, and port in the environment variables, but an error occurs.
have you updated the start command with the command i gave you?
yes, I did that, it works, and the logs:
Starting Container
[I 240612 01:26:46 command:168] Visit me at http://0.0.0.0:5555
[I 240612 01:26:46 command:176] Broker: redis://default:@viaduct.proxy.rlwy.net:45810//
'allMarketsPlaces.tasks.add',
[I 240612 01:26:46 mixins:228] Connected to redis://default:@viaduct.proxy.rlwy.net:45
please send the start command you currently have
celery -A vicentEcommerces flower --port=${PORT}--broker=${REDIS_URL} --basic_auth=${USERNAME}:${PASSWORD}
and it doesn't have a public network
show your service variables tab
PASSWORD=Robotito12.
PORT=5555
REDIS_PRIVATE_URL=${{Redis.REDIS_PRIVATE_URL}}
REDIS_URL=${{Redis.REDIS_URL}}
USERNAME=admin
not sure why you would share your username and password, please make sure to change them
send a screenshot of the flower service public networking
im sorry but thats not at all what i asked for
please read my message again
it needs to have a domain, generate one
😦
ok, I generated one, and then?
it's ok, and then?
click the domain
ooo thanks, it works finally
the last question: it only has 1 worker, how can I get 3 workers?
you already have
--concurrency=3
in your start command for the worker
yes, I have this in the worker start command: celery -A vicentEcommerces worker --loglevel=info --concurrency=3
then you are good
and the flower
why do you think you need 3 workers when you have
--concurrency=3
?
mm what's the difference between concurrency 3 and 3 workers??
--concurrency=3
just runs 3 workers under the hood for you
It happens that on localhost, when I test the workers in Flower, I see 3 worker entries. I understand that, regardless of showing one worker, there are 3 as you mentioned. But how can I make all 3 worker entries visible, or do I need to create another repo with workers and name them?
it is unnecessary, i do not see a point in running 2 more separate workers
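For what it's worth, if separate named worker entries were wanted in Flower, Celery's `-n`/`--hostname` option is the usual way. A sketch, not what was done in this thread:

```shell
# Three independent worker processes; each registers under its own
# node name (%h expands to the host name) and would show up as a
# separate entry in Flower.
celery -A vicentEcommerces worker --loglevel=info -n worker1@%h
celery -A vicentEcommerces worker --loglevel=info -n worker2@%h
celery -A vicentEcommerces worker --loglevel=info -n worker3@%h
```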
ok, thanks Brody for everything, but one last thing
Aquà tienes la traducción al inglés de tu mensaje:
"So when I use --concurrency=3, the worker will be able to handle up to 3 tasks simultaneously."
correct
okokok Thank u
no problem!
Brody the task worker logs said
[2024-06-12 00:59:34,920: INFO/MainProcess] Events of group {task} enabled by remote.
[2024-06-12 02:02:19,173: INFO/MainProcess] Task allMarketsPlaces.paris.crearStockProducto[720e7060-1a69-4579-8986-73ca3f42c43f] received
django.db.utils.OperationalError: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
have you made sure to set the needed postgres variables on that service?
nope, I put that in now and redeployed
it works, sorry for taking your time
it works, but I have a problem when I redeploy flower
the command is celery -A vicentEcommerces flower --port=${PORT}--broker=${REDIS_URL} --basic_auth=${USERNAME}:${PASSWORD}
and the logs are:
rv = self.invoke(ctx)
return _process_result(sub_ctx.command.invoke(sub_ctx))
return ctx.invoke(self.callback, **ctx.params)
return __callback(*args, **kwargs)
File "/opt/venv/lib/python3.11/site-packages/click/decorators.py", line 33, in new_func
    return f(get_current_context(), *args, **kwargs)
File "/opt/venv/lib/python3.11/site-packages/flower/command.py", line 42, in flower
    apply_options(sys.argv[0], tornado_argv)
File "/opt/venv/lib/python3.11/site-packages/flower/command.py", line 86, in apply_options
    parse_command_line([prog_name] + argv)
File "/opt/venv/lib/python3.11/site-packages/tornado/options.py", line 722, in parse_command_line
    return options.parse_command_line(args, final=final)
File "/opt/venv/lib/python3.11/site-packages/tornado/options.py", line 588, in parse
    self._value = _parse(value)
container event: container died
I wouldn't worry about these logs since they are from the old container
ah ok, but
please use the bookmarklet to send me logs
how do I do that?
follow the 3 steps on the page
?
please read the page and follow the 3 steps listed there
k
please send your start command
celery -A vicentEcommerces flower --port=${PORT}--broker=${REDIS_URL} --basic_auth=${USERNAME}:${PASSWORD}
looks like you have a typo, missing a space
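For clarity, the fix is the missing space before `--broker`; the corrected start command would be:

```shell
celery -A vicentEcommerces flower --port=${PORT} --broker=${REDIS_URL} --basic_auth=${USERNAME}:${PASSWORD}
```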
I can't believe that
it happens
I'll redeploy and check the logs
thx u
no problem!