Novu10mo ago
Kedar

High number of mongo atlas connections?

Hi, we are trying to set up a self-hosted Novu deployment in a k8s cluster with the helm chart. The deployment went fine, and even the web app loaded fine (I have not built a custom image yet; any pointers on that, or a GitHub Action someone has handy for it, would be great to have shared). After the installation, I noticed an unusually high number (~350+) of connections to Mongo Atlas after the Novu deployment. I am using a mongodb+srv connection string to establish the Mongo Atlas connection.
Kedar
KedarOP10mo ago
Not sure how the default web image worked; based on a note on this doc page https://docs.novu.co/self-hosting-novu/kubernetes#novu-web-container-does-not-run-on-kubernetes it should not work? @Pawan Jain any thoughts on this? Anything we might be missing? Just found this ref: https://github.com/novuhq/novu/pull/3437. We had used the default MONGO_MAX_POOL_SIZE of 50, but the number of connections went upwards of 350. @Suchit
Kedar
KedarOP10mo ago
It hits 500 connections within a few minutes of bringing up the services.
Pawan Jain
Pawan Jain10mo ago
@Kedar we appreciate you reaching out about this issue. However, Kubernetes and helm-chart-based self-hosted deployments are not included in our community self-hosting support. In a docker-compose-based setup, these connections can be controlled using the env variable MONGO_MAX_POOL_SIZE.
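For reference, a minimal sketch (not Novu's actual source) of how such an env variable is typically passed through to the MongoDB driver's pool options via mongoose; MONGO_URL and the fallback defaults here are assumptions:
```typescript
import mongoose from "mongoose";

// Hypothetical wiring: read the pool bounds from the environment and hand
// them to the driver. MONGO_URL is a placeholder connection string.
await mongoose.connect(process.env.MONGO_URL!, {
  maxPoolSize: Number(process.env.MONGO_MAX_POOL_SIZE ?? 50),
  minPoolSize: Number(process.env.MONGO_MIN_POOL_SIZE ?? 10),
});
```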
Kedar
KedarOP10mo ago
Thank you @Pawan Jain. The env variable MONGO_MAX_POOL_SIZE=50 is set in all the containers for api, ws, and worker. We also tried to enable debug logging using LOGGING_LEVEL=debug; however, this did not take effect either. This is the container environment:
MONGO_MAX_POOL_SIZE=50
MONGO_MIN_POOL_SIZE=10
LOGGING_LEVEL=debug
This behaviour should be the same irrespective of how the service is deployed, right? @Pawan Jain
Zac Clifton
Zac Clifton10mo ago
That will fail only if the k8s cluster policy enforces a read-only root or read-only file system, and in that case it normally fails the deployment. The pool size is per service, per container: essentially each container manages its own pool of that size; it is not a cap across the total system (see the back-of-envelope sketch below).
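A rough editorial sketch of that arithmetic, assuming one pool per container and the single-replica-per-service layout described below:
```typescript
// Expected ceiling if MONGO_MAX_POOL_SIZE were honored by every container.
const replicas = { api: 1, worker: 1, ws: 1 }; // one pod per service (assumed)
const maxPoolSize = 50;                        // MONGO_MAX_POOL_SIZE
const ceiling =
  Object.values(replicas).reduce((sum, n) => sum + n, 0) * maxPoolSize;
console.log(ceiling); // 150, so ~350+ observed connections suggests the
                      // setting is not actually being applied.
```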
Kedar
KedarOP10mo ago
Got it @Zac Clifton. There is only one pod for each of these services: api, worker, ws. But the config does not seem to take effect. We did an exec into the pod and checked the env values; they are as mentioned above in my earlier messages.
Zac Clifton
Zac Clifton10mo ago
@Paweł T. would you have any idea about this? The Kubernetes piece is unimportant, as this pertains to the application itself.
Kedar
KedarOP10mo ago
For some reason, even the debug logging level does not take effect, despite the env variable being set in the container environment. @Paweł T.
Paweł T.
Paweł T.10mo ago
@Kedar if it goes up to 500 (which is the default), it means the env variables were not picked up by the code; probably they were not propagated.
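One way to see the distinction being drawn here: the variable can be visible inside the container (e.g. via exec and printenv) while the running code version never reads it. A hypothetical one-line diagnostic at startup would show what the process itself receives:
```typescript
// Hypothetical diagnostic, not part of Novu: log the values the Node process
// sees. Even when these are set, an older code version may never read them.
console.log({
  MONGO_MAX_POOL_SIZE: process.env.MONGO_MAX_POOL_SIZE,
  MONGO_MIN_POOL_SIZE: process.env.MONGO_MIN_POOL_SIZE,
  LOGGING_LEVEL: process.env.LOGGING_LEVEL,
});
```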
Kedar
KedarOP10mo ago
@Paweł T. interestingly, the container env actually has these variables. We tried changing LOGGING_LEVEL to a random string, after which we get a valid "Reverting to info" error, which we did not see when using the debug level; that means the log level was read properly by the api service container. However, we don't see any debug-level (level 10) logs. Same situation with MONGO_MAX_POOL_SIZE=50. Another observation: the very high number of mongo connections happened with the api service; ws and worker seem to be using a low number of connections, though in aggregate the services still consume a high number. [UPDATE] One more observation: when we use a non-production env, we get the ElastiCache cluster connection issue below:
{"level":50,"time":1710256735022,"pid":17,"serviceName":"@novu/api","serviceVersion":"0.15.0","platform":"Docker","tenant":"OS","context":"InMemoryCluster","err":{"type":"ClusterAllFailedError","message":"Failed to refresh slots cache.","stack":"ClusterAllFailedError: Failed to refresh slots cache.\n at
{"level":50,"time":1710256735022,"pid":17,"serviceName":"@novu/api","serviceVersion":"0.15.0","platform":"Docker","tenant":"OS","context":"InMemoryCluster","err":{"type":"ClusterAllFailedError","message":"Failed to refresh slots cache.","stack":"ClusterAllFailedError: Failed to refresh slots cache.\n at
@Paweł T. @Zac Clifton any thoughts on this one?
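For context on that stack trace: "ClusterAllFailedError: Failed to refresh slots cache." is what ioredis raises when a Redis.Cluster client cannot complete its initial slots refresh, commonly because the endpoint is not actually running in cluster mode (e.g. ElastiCache with cluster mode disabled) or is unreachable. An illustrative sketch, not Novu's configuration code; the host env variable name is an assumption:
```typescript
import Redis from "ioredis";

// A Cluster client pointed at a non-cluster (or unreachable) endpoint fails
// the initial CLUSTER SLOTS refresh and surfaces ClusterAllFailedError.
const cluster = new Redis.Cluster([
  { host: process.env.REDIS_CLUSTER_HOST ?? "localhost", port: 6379 },
]);

cluster.on("error", (err) => {
  console.error(err); // "ClusterAllFailedError: Failed to refresh slots cache."
});
```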
Zac Clifton
Zac Clifton10mo ago
Do you mean a dev deployment of Novu, or the dev env in Novu?
Kedar
KedarOP10mo ago
NODE_ENV @Zac Clifton, the environment of the app.
Zac Clifton
Zac Clifton10mo ago
Correct. I apologize, I was not clear; I meant in regard to this sentence:
One more observation: when we use a non-production env, we get the ElastiCache cluster connection issue below
Kedar
KedarOP10mo ago
Yes, the ElastiCache issue appears when we choose NODE_ENV as dev. The issue of a high number of MongoDB connections is there irrespective of the env we set. The log level seems to have been set to debug (since it did not revert to the default info); however, I do not see any debug-level logs. Any thoughts? Could it be a mongoose client issue? Would it be possible to have the default set a little lower, probably 100? mongoose has 100 as the default if you don't send any config. @Paweł T. there is no way to verify what value the service actually picked up, though the env variable MONGO_MAX_POOL_SIZE=50 is present in the container environment (one way to observe the real count from the server side is sketched below).
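One hedged way to check from the server side, using the MongoDB Node driver directly (a sketch; serverStatus may be restricted on shared Atlas tiers, and MONGO_URL is a placeholder for the mongodb+srv connection string):
```typescript
import { MongoClient } from "mongodb";

async function checkConnections(): Promise<void> {
  const client = new MongoClient(process.env.MONGO_URL!);
  await client.connect();
  // serverStatus reports { current, available, totalCreated, ... } for the
  // server's connection counts, independent of what any client configured.
  const status = await client.db("admin").command({ serverStatus: 1 });
  console.log(status.connections);
  await client.close();
}

checkConnections().catch(console.error);
```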
Zac Clifton
Zac Clifton10mo ago
@Kedar Pawel might have some more input, but without seeing your setup files I do not think I can help; I apologize. If it is any help, we plan to release an official helm chart in the future that should assist with this.
Kedar
KedarOP10mo ago
@Zac Clifton we had missed updating to the latest release, 0.23.0; the default in the helm chart was 0.15.0. After updating the release number, it's working fine. I am assuming it might have been something to do with underlying mongoose version compatibility. Thanks for your inputs!
Zac Clifton
Zac Clifton10mo ago
Easy fix. Glad I could help; let me know if you have any more issues with the k8s deployment.