Need suggestions for better infra CI/CD
alright, so for this project i use docker containers for everything. my compose file has a redis cache for session management, a python container hosting a flask api, and a frontend served as static pages behind an nginx reverse proxy (with the /api route proxied to the flask backend, plus a proxy for a pgadmin subdomain). RN, on PR acceptance to main, i build all the containers in a gh action, publish them to docker hub in a private repo, then on the fly convert the docker compose file into a cloudformation template, generate our env variables, create an ecs context, and re-up
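for context, the compose file is shaped roughly like this (a sketch; service names and ports are placeholders, not the real file):
```yaml
# rough shape of the compose file; service names and ports are placeholders
services:
  redis:
    image: redis:7-alpine            # session cache

  api:
    build: ./api                     # flask api
    environment:
      - REDIS_URL=redis://redis:6379
    depends_on:
      - redis

  nginx:
    build: ./frontend                # static pages + reverse proxy
    ports:
      - "80:80"
    depends_on:
      - api                          # /api and the pgadmin subdomain proxy upstream
```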
here is the justfile script that gh uses to publish
with the GH action
This is pretty dope btw
which part?
how hacky it is lmao
Ive def seen worse lol
But yeah gimme a sec
ok nice i can type normally now
so the hackiest bit is prolly the publish to dockerhub + convert to CF
Correct
That's what I'd like to clean up
if you use fargate + ecs, it can most likely handle most of that, but youll be switching your CF config for a task definition
Specifically the cf part. Pushing to the reg I'm not too concerned about but would still love to clean up
that ecs can then use to spawn your containers/clusters
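the task definition chunk of the CF template ends up looking something like this (a sketch; family/image/role names are made up):
```yaml
# sketch of an ecs task definition in CF; family/image/role names are made up
TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: my-app                             # placeholder
    RequiresCompatibilities: [FARGATE]
    NetworkMode: awsvpc                        # required for fargate
    Cpu: "256"
    Memory: "512"
    ExecutionRoleArn: !Ref TaskExecutionRole   # placeholder iam role resource
    ContainerDefinitions:
      - Name: api
        Image: myorg/my-app:latest             # placeholder image
        PortMappings:
          - ContainerPort: 5000
```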
also nice parallel w/ last chat: CF is basically AWS' equivalent to TF
and ecs is the container orchestrator piece
same w/ eks if you ever wanna hate your life
So would I have to manually define this separately from my compose file
Lol
yep, we had an internal lib that would just update it after every change. then on push, the git action would read it, push the image to ecr, and deploy a blue/green ecs cluster based on it
Then with that, what would the new process of triggering an update be
but yeah its a pain in the balls
unless u have a bit of tooling around it
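the trigger side would look roughly like this (a sketch, not the actual internal workflow; repo/cluster/service names are placeholders):
```yaml
# sketch of the trigger flow; repo/cluster/service names are placeholders
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-region: us-east-1                        # placeholder region
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}  # placeholder secret
      - uses: aws-actions/amazon-ecr-login@v2
        id: ecr
      - name: build and push
        run: |
          docker build -t ${{ steps.ecr.outputs.registry }}/my-app:${{ github.sha }} .
          docker push ${{ steps.ecr.outputs.registry }}/my-app:${{ github.sha }}
      - name: redeploy
        run: |
          aws ecs update-service --cluster my-cluster --service my-service --force-new-deployment
```
(with a mutable image tag, --force-new-deployment is enough; with sha tags you'd register a new task def revision first)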
but yeah, what are the current pain points? we can see whether or not we can optimize those
Mainly downtime, and how long the whole thing takes. The gh action takes ~4 min, which I guess isn't terrible. Time that prod is down is probably ~20 min
Or more
But moving everything over to ecs and cf template may solve part of that I'm suspecting
cool thing with git actions is that you can check which steps take the longest and use that as a hint to optimize
alright, so far its going well. I have a CF template in progress. My first open question is where the env variables should come from. Should we set them as parameters in the AWS cft, pass them via some sort of aws cli command when we re-up, or some other way that im unaware of
holy hell you're fast
ecs/fargate can manage secrets
hmmm, the way i did it was to add them to secrets manager
and refer to them thru iac/the cli
okay, do you have an example of how that would look in the cf file / am i on the right track with this
that way i dont have to reference them in the CLI and can keep them stored in aws
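this is roughly what i've got so far (secret + image names are placeholders):
```yaml
# placeholders throughout; Ref on a secret resolves to its ARN
AppSecret:
  Type: AWS::SecretsManager::Secret
  Properties:
    Name: my-app/database-url            # placeholder secret name

# ...inside the task definition:
    ContainerDefinitions:
      - Name: api
        Image: myorg/my-app:latest       # placeholder
        Secrets:
          - Name: DATABASE_URL           # env var the container sees
            ValueFrom: !Ref AppSecret
        Environment:
          - Name: FLASK_ENV              # non-sensitive stuff stays a plain env var
            Value: production
```
(the task execution role also needs secretsmanager:GetSecretValue on it or the task won't start)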
👀 looks alright to me
huge
making big progress
alrighty
new question
I need to reference a secret in the healthcheck for another container
oof
interesting
oh wait
nvm
we are good, they are in the same container
could prolly have an iam policy in that other container + get the secret
nice
and yeah in ecs most of the time you could have sidecar containers
so i think i can just have it pull it via shell
they might get injected by the task def/ecs
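in which case the healthcheck shell can just read the env var, something like this (a sketch; endpoint, token name, and port are all made up):
```yaml
# healthcheck reading an injected secret; endpoint/token/port are made up
ContainerDefinitions:
  - Name: api
    Image: myorg/my-app:latest            # placeholder
    Secrets:
      - Name: HEALTH_TOKEN
        ValueFrom: !Ref HealthTokenSecret # placeholder secret resource
    HealthCheck:
      Command:
        - CMD-SHELL
        # $HEALTH_TOKEN expands because ecs injected it as an env var
        # (assumes curl exists in the image)
        - 'curl -fs -H "Authorization: Bearer $HEALTH_TOKEN" http://localhost:5000/health || exit 1'
      Interval: 30
      Retries: 3
```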
Using Secrets Manager - Amazon Elastic Container Service
When you inject a secret as an environment variable, you can specify the full contents of a secret, a specific JSON key within a secret, or a specific version of a secret to inject. This helps you control the sensitive data exposed to your container.
or this https://docs.aws.amazon.com/AmazonECS/latest/developerguide/secrets-app-secrets-manager.html
Okay yeah that's what I was looking at
I think I'm doing it right then
great find then, and yeah
usually the docs are pretty solid
Oh gosh
I just realized
I'm gonna have to do a full database migration
👁️
From current prod to this new one
Welp, I guess I needed to figure it out sooner or later
ah, yeah this might be slightly annoying, but if its in ecs, you might be able to blue green
or just manually tie containers to an rds cluster
I actually don't think it will be that bad, hell I might be able to do it from the pgadmin portal I have on prod currently
I have prod tied to a persistent volume currently, but the way I'm doing it now is very black boxy
👁️
stay tuned lol
its gonna take me a bit to finish translating over my docker compose to cft
although so far its pretty 1:1 on parameters
oh nice
yeah wasnt sure how that one was gonna go, cause we use pulumi
vs docker compose
docker compose 🤝
indeed
alrighty
next question: where should i handle SSL? inside the ALB, or my docker image w/ nginx
im guessing the ALB, in which case ill need to terminate TLS at the ALB and forward external 443 to the docker image's :80
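on the CF side im picturing something like this (a sketch; cert + target group refs are placeholders):
```yaml
# alb terminates tls and forwards to the container's :80; refs are placeholders
HttpsListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref LoadBalancer
    Port: 443
    Protocol: HTTPS
    Certificates:
      - CertificateArn: !Ref AcmCertificateArn  # placeholder acm cert param
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref TargetGroup        # nginx container on :80

HttpListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref LoadBalancer
    Port: 80
    Protocol: HTTP
    DefaultActions:
      - Type: redirect                          # bounce plain http up to https
        RedirectConfig:
          Protocol: HTTPS
          Port: "443"
          StatusCode: HTTP_301
```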
well, i think i figured it out. gonna try and deploy it tomorrow with the client and see how it goes.
Up to you, i actually like doing it at the alb level
But you can also add it to your docker image as an nginx proxy
this is what im trying first
Iirc you can set the alb to an ecs target group
Creating an Application Load Balancer - Amazon ECS
Walks through creating an Application Load Balancer in the AWS Management Console (there's also a CLI tutorial).
And thatll come with an alb healthcheck n shit, but would basically allow you to move to blue green deployments by just creating 2 target groups and shifting traffic from the LB
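rough sketch of wiring the service into a target group (every name here is a placeholder):
```yaml
# every name here is a placeholder
TargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    VpcId: !Ref Vpc
    Port: 80
    Protocol: HTTP
    TargetType: ip                       # required for fargate/awsvpc tasks
    HealthCheckPath: /health             # the alb-level healthcheck

Service:
  Type: AWS::ECS::Service
  Properties:
    Cluster: !Ref Cluster
    TaskDefinition: !Ref TaskDefinition
    DesiredCount: 2
    LaunchType: FARGATE
    LoadBalancers:
      - ContainerName: nginx             # placeholder container name
        ContainerPort: 80
        TargetGroupArn: !Ref TargetGroup
    NetworkConfiguration:
      AwsvpcConfiguration:
        Subnets: [!Ref SubnetA, !Ref SubnetB]
```
for blue/green you'd add a second target group and shift the listener's forward action between them (weighted if you want it gradual)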