Potential memory leak
https://mclo.gs/TlXADoC
https://spark.lucko.me/C8KMfDCp3i
Server OOMs after a while of people playing on it, trying to narrow down the issue. I suspect MythicMobs and MythicDungeons but can't confirm
Spark Profile Analysis
✅ Your server isn't lagging
Your server is running fine with an average TPS of 20.
Oh interesting the spark profiler stopped when the server OOMed
It only shows 7 min
the spark profiler you linked is… active
your GC timings aren’t great, also an alloc profiler would be nice
There's not a crash in these logs
Use Aikar flags
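(For reference, Aikar's flags are the standard G1GC tuning arguments documented by PaperMC; a startup line using them looks roughly like the sketch below. The 4G heap and paper.jar name are placeholders, size the heap below your container limit:)
java -Xms4G -Xmx4G -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPauseMillis=200 \
  -XX:+UnlockExperimentalVMOptions -XX:+DisableExplicitGC -XX:+AlwaysPreTouch \
  -XX:G1NewSizePercent=30 -XX:G1MaxNewSizePercent=40 -XX:G1HeapRegionSize=8M \
  -XX:G1ReservePercent=20 -XX:G1HeapWastePercent=5 -XX:G1MixedGCCountTarget=4 \
  -XX:InitiatingHeapOccupancyPercent=15 -XX:G1MixedGCLiveThresholdPercent=90 \
  -XX:G1RSetUpdatingPauseTimePercent=5 -XX:SurvivorRatio=32 -XX:+PerfDisableSharedMem \
  -XX:MaxTenuringThreshold=1 -Dusing.aikars.flags=https://mcflags.emc.gs -Daikars.new.flags=true \
  -jar paper.jar --nogui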
well if it’s an OOM there won’t be a crash log
depends on the oom!
if it's a ptero etc. kill there won't be anything in the log, but if it's an actual OOM there will be
Speaking of
I've noticed I never have a crash log
I just can't find any even though ptero itself says crash, and the crash is due to OOM
again, OOMs happen in two different places
Though I suppose perhaps they don't get created because the crash reason is painfully obvious
OOM kills by ptero/docker are caused by allocation going over the memory limit placed on the server; if it's caused by Java just not having any more memory to work with, it will generate errors in the logs
(allocation, not actual usage)
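(As an illustration: a genuine heap exhaustion usually ends with something like java.lang.OutOfMemoryError: Java heap space in the server log, and can be made to write a heap dump with -XX:+HeapDumpOnOutOfMemoryError, whereas a kernel/container OOM kill leaves nothing in the server log, only an entry on the host, e.g. in dmesg.)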
Ah alright
oom kills are actually by the linux kernel, not docker 🤓
https://docs.docker.com/config/containers/resource_constraints/ does this not kill?
it's the linux kernel detecting the container is out of memory
On Linux hosts, if the kernel detects that there isn't enough memory to perform important system functions, it throws an OOME, or Out Of Memory Exception, and starts killing processes to free up memory.
isn't that the host machine, not containers?
it applies to the container's portion of the system's kernel afaik
either way, for your actual issue @Deathpacito, when ptero shows that you're about to run out of memory, can you run
spark profiler start --timeout 60 --thread *
can also try changing -XX:MaxRAMPercentage=80 to -Xmx3g, but they're basically the same anyways
Worth doing an allocation profiler instead. Include --alloc before timeout
from what they said, the container is running out of memory. Memory usage on the heap is likely irrelevant, as it would create a crash log or print out a big java.lang.OutOfMemoryError: Java heap space
threads on the other hand allocate memory outside of the heap, which causes the kernel to oom kill the process
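Rough numbers to illustrate, assuming the ~4 GB container mentioned below: -XX:MaxRAMPercentage=80 caps the heap at about 3.2 GB and -Xmx3g at 3 GB, so either way only ~0.8-1 GB is left for everything off-heap (thread stacks, metaspace, GC structures, direct buffers). If heap plus off-heap allocation grows past the container limit, the kernel kills the process with nothing in the server log.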
They're having GC olds
Could be from not enough memory or something allocating too much inside.
their container only has ~4gb (or 5) of memory, so that's expected
4Gb "should" be enough to not have GC olds
It's 3 players on the server, not 30.
I also don't have the original link's info, so I can't see what it looked like when it crashed
¯\_(ツ)_/¯
oh right, they don't have aikar's flags, so that's why g1 old is running
Very heavy plugins such as MMOcore, items, full Mythic suite
If my allocation is quite low I can increase it
Mythicmobs will go it iirc, send an allocation profiler and it'll prove it or not
Go it?
So spark profiler start --alloc --timeout 60 --thread * ?
i'd do --alloc and --thread * separately
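i.e. something like this, as two separate runs (same 60-second timeout as suggested above):
spark profiler start --alloc --timeout 60
spark profiler start --thread * --timeout 60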