103 Killed
I thought that the limit on container memory was 8G, yet my processes are getting killed when reaching 4G. Could it be that the last data point is missing from the usage chart?
10 Replies
I don't think so. This is most likely an issue with your code.
Hmm, ok, I have some idea, will investigate
well the usage chart is peaking at 7GB on some of the attempts, and adding the potentially missing lib did not change anything, so I am pretty sure I am running out of memory
but if you could check to confirm
project/76e242e9-db08-4279-a677-6c0063fdb9ce/service/e5852326-cf78-4686-b1bf-bd7534119d29
that would be awesome
Sure thing Honza!
This is my last case for today, but for you, anything!
Oh this is interesting
so you are saying it gets killed when it hits these memory spikes?
yes
(sorry for late reply, it was late)
@Angelo just pinging if you had a chance to look into this
No, not since I went to sleep.
However, I don't seem to have any leads here. Is this some sort of cron that is running and is ingesting some data?
Also noticing it hasn't happened since? It's leading me to believe that it's an application issue :\
as I said, I am pretty sure this is being killed because it's trying to use too much RAM. I can fire off the job that's causing this at any time, but it has not run since
Yeah, but that's what is very confusing: the RAM-spiking behavior shouldn't differ between our platform and your local machine
I assume it takes 8GB for you? Locally?
Good question, I have not run this exact workload locally.
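For anyone hitting a similar discrepancy between the documented limit and where processes actually die, one way to confirm the limit the kernel is actually enforcing is to read it from inside the container. This is a sketch, not platform-specific tooling; the paths are the standard Linux cgroup v2 and v1 locations, and anything else here is an assumption:

```python
from pathlib import Path

# Standard cgroup files holding the container's memory limit:
# cgroup v2 first, then the older v1 path as a fallback.
CGROUP_PATHS = [
    Path("/sys/fs/cgroup/memory.max"),                     # cgroup v2
    Path("/sys/fs/cgroup/memory/memory.limit_in_bytes"),   # cgroup v1
]


def parse_limit(raw: str) -> str:
    """Turn a raw cgroup limit value into a human-readable string."""
    raw = raw.strip()
    if raw == "max":  # cgroup v2 spelling for "no limit"
        return "unlimited"
    return f"{int(raw) / 1024**3:.1f} GiB"


def effective_memory_limit() -> str:
    """Report the first cgroup memory limit found, if any."""
    for path in CGROUP_PATHS:
        if path.exists():
            return parse_limit(path.read_text())
    return "no cgroup limit file found"


if __name__ == "__main__":
    print(effective_memory_limit())
```

If this prints something closer to 4 GiB than 8 GiB, the kill behavior above would be consistent with the cgroup limit rather than an application bug.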