Not releasing memory after a Terraform task completes
Hi, I've noticed high memory usage after starting a workspace or updating the template.
Typically, Coder only takes around 100MB, but memory usage spikes while performing such tasks and then stays high. I would expect the memory usage to decrease after an interval with no further activity; however, Coder does not seem to release this memory, and I often have to restart Coder to get it back.
Besides, it uses ~300MB even after I have restarted Coder. Maybe that is because of code-server or the JetBrains Gateway?
Some context:
- My workspaces run on dedicated AWS EC2 instances. This Coder instance is only for managing my EC2s; I do not run any containers on it.
- I am the only user on this Coder instance. I own only two workspaces and one template. Both workspaces are stopped most of the time (for example, both workspaces were stopped at 20:20, yet Coder still used ~700MB instead of the 54MB baseline).
- I deploy Coder on Zeabur with the Docker image `ghcr.io/coder/coder:latest`.
(Forwarded from https://discord.com/channels/747933592273027093/971231372373033030/1250431665973624864)
For what it's worth, on my VM systemctl shows Coder running 2 workspaces at only ~500MB, with 2 peer-to-peer connections.
@Pan can you check which processes are using that memory?
That's a bit hard. Is there any debug console in Coder?
are you running Coder within a container?
Yeah
do you have access to the Docker CLI?
there isn't really, AFAIK; only the healthcheck page, which won't have the details we want
Nah.
It is started as a Kubernetes Pod that I don't have access to, so I can't check more details.
oh sorry I assumed Docker
yeah, I don't really know if we can debug this further without access to the Coder container's CLI (e.g. using `kubectl exec`) to run tools like htop
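Something like this is what I had in mind, if whoever operates the cluster can run it (the pod name and namespace are placeholders, and I'm not sure the Coder image ships a full ps, hence the /proc fallback):
```sh
# Per-process resident memory inside the Coder pod, largest first
kubectl exec -n <namespace> <coder-pod> -- ps aux --sort=-rss | head -n 10

# Fallback if ps isn't available in the image: read RSS straight from /proc
kubectl exec -n <namespace> <coder-pod> -- sh -c \
  'grep VmRSS /proc/[0-9]*/status | sort -k2 -nr | head -n 10'
```
`kubectl top pod <coder-pod>` would also show the Pod-level number if the cluster has metrics-server installed, but that won't break it down by process.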
unless maybe k8s has an API to expose processes in a Pod, I don't know
However, there is a detail: the memory usage persists even though I have restarted the Pod.
oh alright
then it's probably some kind of in-memory cache used for provisioning
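If you ever do get shell or port access: Coder is a Go binary, so a heap profile would show whether that memory is a live cache or just heap the Go runtime hasn't returned to the OS yet. I think the server has a pprof option (check `coder server --help`; the flag and the 6060 port below are from memory, so treat them as assumptions):
```sh
# Assumption: pprof is enabled on the Coder server and listening on 127.0.0.1:6060
# (the standard net/http/pprof paths). -top prints the largest heap consumers.
go tool pprof -top http://127.0.0.1:6060/debug/pprof/heap
```
Either way, noting in the issue whether pprof data is obtainable would help the maintainers.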
I think you should open an issue on GitHub at coder/coder
Okay. Any details I should add to the issue?
the Coder team is more active there and will likely be able to answer your questions and help diagnose if this is normal or not
all the details you mentioned in your first post, and the fact that it persists after a restart
Coder version too
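If you have the Coder CLI locally, `coder version` prints it; otherwise the deployment should report its build info over the API (the endpoint path below is from memory, so double-check it against your deployment):
```sh
# Version from the local CLI
coder version

# Or from the deployment itself (path from memory; verify before relying on it)
curl -fsSL https://<your-coder-host>/api/v2/buildinfo
```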
Thank you!
I will file an issue ASAP. Thanks a lot for your diagnosis!
please send the issue link here when you open one :-)
GitHub: Coder does not release the memory after the Terraform task complete...
Description: I deploy Coder on Zeabur with the Docker image, ghcr.io/coder/coder:latest. I have noticed high memory usage after starting a workspace or updating the template. Typically, it only take...