High number of Mongo Atlas connections?
Hi,
We are trying to set up a self-hosted Novu deployment in a k8s cluster with the Helm chart. The deployment went fine, and even the web app loaded fine (I have not built a custom image yet; any pointers, or if someone has a GitHub Action handy to do that, it would be great if you could share it). After the installation, I noticed an unusually high number (~350+) of connections being made to Mongo Atlas. I am using a mongodb+srv connection string to establish the Mongo Atlas connection.
Not sure how the default web image worked; based on a note on this doc page https://docs.novu.co/self-hosting-novu/kubernetes#novu-web-container-does-not-run-on-kubernetes, it should not work?
@Pawan Jain any thoughts on this? Anything we might be missing?
Just found this ref: https://github.com/novuhq/novu/pull/3437. We had used the default MONGO_MAX_POOL_SIZE of 50, but the number of connections went upwards of 350.
@Suchit
It hits 500 connections within a few minutes after bringing up the services.
@Kedar
We highly appreciate you reaching out about this issue.
However, Kubernetes and Helm chart based self-hosted deployments are not included in our community self-hosting support.
In a Docker Compose based setup, these connections can be controlled using the env variable MONGO_MAX_POOL_SIZE.
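For reference, a minimal sketch of what that looks like in a Compose file, assuming illustrative service and image names rather than the exact ones from the official docker-compose.yml:

```yaml
# Illustrative docker-compose fragment; service and image names are assumptions,
# not copied from the official Novu compose file.
services:
  api:
    image: ghcr.io/novuhq/novu/api:0.23.0
    environment:
      MONGO_URL: ${MONGO_URL}        # mongodb+srv connection string to Atlas
      MONGO_MAX_POOL_SIZE: "50"      # cap on the driver pool for this container
  worker:
    image: ghcr.io/novuhq/novu/worker:0.23.0
    environment:
      MONGO_URL: ${MONGO_URL}
      MONGO_MAX_POOL_SIZE: "50"
```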
Thank you @Pawan Jain, the env variable MONGO_MAX_POOL_SIZE=50 is already set in all the containers for api, ws, and worker.
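For context, this is roughly how the variables are wired into each Deployment's container spec; a minimal sketch, assuming hypothetical secret and image names rather than the actual templates from the community Helm chart:

```yaml
# Illustrative container spec fragment for the api Deployment
# (the same env block is repeated for the worker and ws Deployments).
containers:
  - name: api
    image: ghcr.io/novuhq/novu/api:0.23.0   # image name/tag are assumptions
    env:
      - name: MONGO_URL
        valueFrom:
          secretKeyRef:
            name: novu-mongo                 # hypothetical secret name
            key: connection-string
      - name: MONGO_MAX_POOL_SIZE
        value: "50"
      - name: LOGGING_LEVEL
        value: "debug"
```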
We tried to enable debug logging using LOGGING_LEVEL=debug; however, this too did not take effect
This is in the container environment.
This behaviour should be the same irrespective of how the service is deployed? @Pawan Jain
That will fail only if the k8s cluster policy enforces a read-only root or read-only file system, and it normally fails the deployment.
This is per service, per container. Essentially each container manages its own pool of that size; it is not a cap on the total system. For example, if api, worker, and ws each keep their own pool, the totals add up across containers rather than being shared.
Got it @Zac Clifton, there is only one pod for each of these services (api, worker, ws), but the config does not seem to take effect. We did an exec into the pods and checked the ENV values; they are as mentioned in my earlier messages above.
@Paweł T. Would you have any idea about this? The Kubernetes piece is unimportant, as this pertains to the application itself.
For some reason, even the debug logging level does not take effect despite the env variable being set in the container environment. @Paweł T.
@Kedar if it goes up to 500 (which is the default), it means the env variables were not picked up by the code, probably not propagated.
@Paweł T. interestingly, the container environment actually has these env variables.
We tried changing LOGGING_LEVEL to a random string, after which we get a valid error.
Another observation: the very high number of Mongo connections happens across services. [UPDATE] All services seem to consume a high number of connections.
One more observation: when we use a non-production env, we have the below ElastiCache cluster connection issue.
@Paweł T. @Zac Clifton
@Paweł T. any thoughts on this one?
Reverting to info shows output which we did not see when we used the debug level; that means the log level was read properly by the api service container. However, we don't see any debug-level logs (level 10). Same is the condition with MONGO_MAX_POOL_SIZE=50. The api service, ws, and worker seem to be using a low number of connections.
Do you mean a dev deployment of Novu or the dev env in Novu?
NODE_ENV @Zac Clifton, the environment of the app.
Correct, I apologize, I was not clear; I meant in regards to this sentence.
Yes, the ElastiCache issue is when we choose NODE_ENV as dev.
The issue of a high number of MongoDB connections is there irrespective of the env we set.
The log level seems to have been set to debug (given that it did not just revert to the default info); however, I do not see any debug-level logs.
Any thoughts?
Could it be a Mongoose client issue? Would it be possible to have the default set a little lower, probably 100? Mongoose has 100 as the default if you don't send any config. @Paweł T.
There is no way to verify what value the service actually picked up, though the env variable MONGO_MAX_POOL_SIZE=50 is present in the container environment.
@Kedar Pawel might have some more input, but without seeing your setup files I do not think I can help.
I apologize.
If it is any help, we plan to release an official Helm chart in the future that should assist with this.
@Zac Clifton we had missed updating to the latest release, 0.23.0; the default in the Helm chart was 0.15.0. After updating the release number, it's working fine. I am assuming it might have been something to do with the underlying Mongoose version compatibility.
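For anyone landing here later, the fix amounts to overriding the image tag in the chart values and upgrading the release; a minimal sketch with hypothetical value keys (the real structure depends on the chart you are using, so check its values.yaml):

```yaml
# values.override.yaml -- illustrative only; key names are assumptions.
# Applied with something like: helm upgrade --install novu <chart> -f values.override.yaml
api:
  image:
    tag: "0.23.0"
worker:
  image:
    tag: "0.23.0"
ws:
  image:
    tag: "0.23.0"
web:
  image:
    tag: "0.23.0"
```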
thanks for your inputs!
Easy fix. Glad I could help; let me know if you have any more issues with the k8s deployment.