Add caching to Node Dockerfile
I want to ask for feedback on my Dockerfile. This is the first time I've set up caching.
This Dockerfile works. My first deploy with this file took 143 secs, the second 124 secs. But that's still really slow IMO for that second try. Have I set this up properly for Node with pnpm as the package manager?
Is there anything else I can do to speed these builds up?
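For illustration, here is a minimal sketch of what a pnpm cache mount on Railway can look like. This is not the Dockerfile from this thread: the Node tag, the store path, and `<SERVICE_ID>` are placeholders/assumptions, and the `id=s/<service-id>-...` prefix follows Railway's documented cache-mount format (which is why the id ends up hardcoded, as discussed below).

```dockerfile
# Sketch only: a Node build stage caching the pnpm store between builds.
# <SERVICE_ID> is a placeholder; Railway expects cache ids prefixed with s/<service-id>-.
FROM node:20-slim AS build
RUN corepack enable
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
# The target= path must match what `pnpm store path` reports in this image.
RUN --mount=type=cache,id=s/<SERVICE_ID>-/root/.local/share/pnpm/store,target=/root/.local/share/pnpm/store \
    pnpm install --frozen-lockfile
COPY . .
RUN pnpm run build
```

If the mount target doesn't match pnpm's real store location, the mount still "works" but caches nothing, which would explain a second build that is barely faster than the first.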
I also noticed it's required to hardcode your session-id. I've now bound this to the service-id of my main production environment in Railway. What are the implications of this when a different environment is created in Railway for your project, for example when you create a new PR on GitHub?
Thanks!
Project ID: e87cdb12-ca40-472a-b18b-fc6f19c34f3f
what takes the most time to run?
I believe you'd shared a tool somewhere with which I could extract build logs. Would that be helpful?
Building takes ~80 secs
Here I filtered the logs for `done`:
So it's the building, exporting and pushing of layers, and pushing the manifest which take the most time. Can't get the bookmark to work with the Arc browser, and injecting the JS via the console doesn't work either.
`ctrl / cmd + k` -> logs
Does that work?
since it's not the full logs, nope, doesn't work
okay that one works
Sweet 🙌🏼
do you happen to know how big this image is?
How can I find out?
by building locally
Once I do that, where can I find the image? Does it get created in my project folder?
Sorry I'm new to Docker.
How do I build the Docker file locally using Railway's cache? Do I have to create a new Dockerfile to build locally without the caching statements?
you don't build it with Railway's cache, nor is that something you can do. You simply build the image; you shouldn't need to change anything
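For anyone following along, building locally and checking the size looks something like this (the `myapp` tag is just a placeholder; any recent Docker with BuildKit enabled will accept the cache-mount syntax unchanged):

```shell
# Build the image from the project folder containing the Dockerfile.
docker build -t myapp .

# List the resulting image and its size. Images live in Docker's local
# store, not in the project folder.
docker images myapp --format '{{.Repository}} {{.Size}}'
```

Note the cache mounts only speed up repeated builds; they don't change what ends up inside the image, so the size you measure locally should match Railway's.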
OK I will try.
I had to make a bunch of changes to the Dockerfile to get it working locally, but got the local creation of the image done. The size of the image is 1.16GB. Here's the updated Dockerfile:
What do you think of the new Dockerfile?
And are there any ways you think I can speed up builds?
#readme 5) don't ping team or conductors
1.16GB is still big, but totally doable imo
Oh sorry! I'll remove the tag.
Done 👍
Do you think there are any further optimisations I could make?
🤔 I don't think so; I never had to optimize the size of my images, so I can't really say
you can definitely get it smaller than 1.16GB.
it's also possible your mount targets are incorrect, aka not the actual path pnpm stores the cache at, so nothing actually gets cached
How can I reduce the size of the image?
And how would I know where to find the correct mount targets?
I referenced this, thinking it was the right example for node: https://github.com/railwayapp/nixpacks/blob/main/src/providers/node/mod.rs
There I found the pnpm cache mentioned as: `const PNPM_CACHE_DIR: &str = "/root/.local/share/pnpm/store/v3"`
Not sure if I derived my mounts correctly from that.
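One way to check, rather than deriving the path from the nixpacks source (this is a general approach, not something Railway-specific), is to ask pnpm directly inside the same base image the build uses:

```shell
# Print where pnpm actually keeps its content-addressable store in this image.
# The cache mount's target= must match this path exactly, or nothing is cached.
docker run --rm node:20-slim sh -c "corepack enable && pnpm store path"
```

The exact path varies with the pnpm version and store format (e.g. the `/v3` suffix in the nixpacks constant), which is why checking the image itself is safer than copying a constant from another project.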
These articles (part 1 & 2) were a pretty good walkthrough of how to reduce image sizes: https://lengrand.fr/reducing-dockers-image-size-while-creating-an-offline-version-of-carbon-now-sh/
mine went from ~900MB to ~350MB after optimization
Nice I incorporated some of the tips.
Does the Slim package work with Railway too? Asking since it seems CLI driven so not sure if you could incorporate that into the build file.
yes, no issues with the `-slim` node image variants
How would I incorporate that into the creation of my image? Can I run docker-slim in the Dockerfile somehow?
Any hints or example repos I could take a look at, perhaps?
oh sorry, I thought you wanted to use slim images. No, Railway does not support running docker-slim
I thought so -- thanks for clarifying.
I was able to halve the size of the image, though, by pruning the node and pnpm modules in the build phase and then explicitly copying the `node_modules` in the production step instead of re-installing the dependencies there.
🎉
Any feedback on this? How do I know if I'm using the right mount targets?
I'm sure you aren't the first to do this on the vast internet, and it wouldn't be specific to Railway
Hint taken. I will do more research.
Also I had this remaining question:
I also noticed it's required to hardcode your session-id. I've now bound this to the session-id of my main production environment in Railway. What are the implications of this when a different environment is created in Railway for your project, for example when you create a new PR on GitHub?
you mean service id
and in that case, it will fail
But so far, when I've been testing these builds and put my new code in a new PR, a new Railway environment was started which deployed just fine with the Dockerfile that had my service-id included.
I don't understand the limitations of this properly. Is the idea that I hardcode the service-id of each Railway production environment that I'll be deploying the image in?
again, service id, not session id
Sorry yes, typo.
But my question still stands.
yes, you do need to hardcode that service id for every service you deploy to
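Concretely, the line that has to change per service is the cache mount's `id` (this reflects my understanding of Railway's documented format; `<SERVICE_ID>` and the store path are placeholders):

```dockerfile
# Each Railway service needs its own id prefix, so deploying this Dockerfile to
# a different service (or a PR environment backed by a different service) means
# this line must carry that service's id.
RUN --mount=type=cache,id=s/<SERVICE_ID>-/root/.local/share/pnpm/store,target=/root/.local/share/pnpm/store \
    pnpm install --frozen-lockfile
```

If the id doesn't match the deploying service, the build doesn't necessarily fail outright; the cache mount simply never hits, so those environments rebuild from scratch.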
I see. Understood.