How to resolve long GC Old intervals
Hey, currently running a 1.19.2 fabric server with ~250 mods.
One player and we have a GC Old interval of 144 minutes and an average time of 1600ms.
Spark profiler incoming once it runs for long enough.
Thanks for asking your question!
Make sure to provide as much helpful information as possible such as logs/what you tried and what your exact issue is
Make sure to mark solved when issue is solved!!!
@Jared | InfraCharm
Going to add the flags you sent when I get to it
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:gc.log
Add these
Currently a little busy as I'm on mobile
yeah will do in a bit, sorry
PrintGCDateStamps doesn't exist apparently .-.
What version you running?
GraalVM CE Java 17
-Xlog:gc*:file=gc.log:time,tags:filecount=5,filesize=10M
That's all you need then if you're on GraalVM
to replace these?
Mhm
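For reference, PrintGCDateStamps was removed when the JDK switched to unified logging (JDK 9+), so on Java 17 the -Xlog form above replaces the older GC-logging flags. A minimal sketch of a launch command with it in place, where the heap size and jar name are only placeholders and not this server's exact setup:
# placeholders: adjust -Xms/-Xmx and the jar name to your own server
java -Xms10240M -Xmx10240M -Xlog:gc*:file=gc.log:time,tags:filecount=5,filesize=10M -jar server.jar --nogui
This keeps up to 5 rotated gc.log files of roughly 10MB each and prefixes every line with a wall-clock timestamp and its log tags.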
on it
I'm assuming you'll want the gc.log?
mhm
when?
do you want me to wait until it's reached 10mb?
Sure
may be a while though
you dont have to
okay :ThumbsUp:
already 500 lines long :KEKW:
https://spark.lucko.me/wBq52aP21h spark profiler if you wish
Spark Profile Analysis
❌ Processing Error
The bot cannot process this Spark profile. It appears that the platform is not supported for analysis. Platform: Fabric
this is running Chunky, however I get similar GC times with no pregen
Did you stop the server before sending that?
no, that's a stopped profiler
why?
I need the gc.log
We have uploaded your file to a paste service for better readability
Paste services are more mobile friendly and easier to read than just posting a file
gc.log
You're on G1
no support for NUMA or LPS (large pages)
java -Xms11264M -Xmx11264M --add-modules=jdk.incubator.vector -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPauseMillis=300 -XX:+UnlockExperimentalVMOptions -XX:+DisableExplicitGC -XX:G1HeapWastePercent=10 -XX:G1MixedGCCountTarget=8 -XX:InitiatingHeapOccupancyPercent=30 -XX:G1MixedGCLiveThresholdPercent=85 -XX:G1RSetUpdatingPauseTimePercent=5 -XX:SurvivorRatio=32 -XX:+PerfDisableSharedMem -XX:MaxTenuringThreshold=5 -Dusing.aikars.flags=https://mcflags.emc.gs -Daikars.new.flags=true -XX:G1NewSizePercent=35 -XX:G1MaxNewSizePercent=50 -XX:G1HeapRegionSize=4M -XX:G1ReservePercent=20 -Xlog:gc*:file=gc.log:time,tags:filecount=5,filesize=10M -jar server.jar --nogui
Run with these, we will start tuning your GC to be more efficient. Send the log after an hour @Skullians
Is 11GB too much? The container only has 12
Am I okay to set Xms / Xmx to 10GB?
Sure
java -Xms10240M -Xmx10240M --add-modules=jdk.incubator.vector -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPauseMillis=300 -XX:+UnlockExperimentalVMOptions -XX:+DisableExplicitGC -XX:G1HeapWastePercent=10 -XX:G1MixedGCCountTarget=8 -XX:InitiatingHeapOccupancyPercent=30 -XX:G1MixedGCLiveThresholdPercent=85 -XX:G1RSetUpdatingPauseTimePercent=5 -XX:SurvivorRatio=32 -XX:+PerfDisableSharedMem -XX:MaxTenuringThreshold=5 -Dusing.aikars.flags=https://mcflags.emc.gs -Daikars.new.flags=true -XX:G1NewSizePercent=35 -XX:G1MaxNewSizePercent=50 -XX:G1HeapRegionSize=4M -XX:G1ReservePercent=20 -jar server.jar --nogui
(that doesn't have the gc log lol)
dw I just changed the flags before
ah okay
20 minutes more until 1 hour
server kept crashing ._.
Why crash
nothing related to RAM or GC
one of my mods causing a ConcurrentModificationException during chunk gen
server was on the verge of OOMing
Thanks i’ll take a look in a second
no rush
So this time
you are running out of memory quickly and your GC is evacuating
Try this, it will streamline everything under fewer GC threads and pause for less time, meaning more cleanup.
@Skullians
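The exact flag set sent at this step isn't quoted in the thread. As a rough illustration only, the usual G1 knobs for capping GC thread counts are ParallelGCThreads (the stop-the-world workers) and ConcGCThreads (the concurrent marking workers), e.g.:
# illustrative values only, not the flags actually sent here
-XX:ParallelGCThreads=2 -XX:ConcGCThreads=1
The trade-off is between how fast each GC cycle finishes its work and how much CPU is left over for the server and mod threads.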
there's two instances of gc log flags in there
should I remove it?
did I paste it twice
oof
Yes remove it
okay :ThumbsUp:
want me to run it for another hour?
Yep
is geyser against the rules here?
no, why?
just wanted to make sure haha
why ask in here
xD
Hey hey hey don't trash this thread
make your own question
ikik mb
aight i made it. help this guy out and then if you dont mind can u help me too?
thanks so much though. will get back to you after an hour :)
ping me in your #questions thread.
if you need help
sure
RAM usage is already tons better.
9.86GB / 12GB, compared to 11.5.
How
java GC startup flag tuning :Shruge:
nearing 10gb now
hm
gc.log
at 10.4gb / 12gb which is much better
So now your stuff is running smoothly, it's just that your heap is slowly increasing. To allow the server to stay online longer between restarts, use the above.
This is lowering your memory to 10GB and reserving a little more for GC
@Skullians
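Those flags aren't quoted in the thread either. Relative to the command posted earlier, the pieces that would change are the heap size and the G1 reserve, along the lines of the fragment below, where the values are an assumption for illustration rather than the exact ones sent:
# illustrative only: 10GB heap with a slightly larger G1 reserve, rest of the flag set unchanged
-Xms10240M -Xmx10240M -XX:G1ReservePercent=25
G1ReservePercent keeps a slice of the heap free as evacuation headroom, so raising it trades a little usable heap for fewer evacuation failures when usage runs close to the ceiling.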
:ThumbsUp: I’ll send you another log in an hour :D
thank you so much!
Do you think that by any chance you can fix my problem too?
@Jared | InfraCharm if I ever need to increase my RAM (in container, not just in flags), what should the flags look like? Should anything change or should I just change the Xms and Xmx to container max - 2gb?
If you're increasing the container's memory too, keep it the same ratio
:ThumbsUp:
So if I had a 14gb container, flags would be 12GB?
yep
got it, thanks.
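As a quick worked example of that ratio, using only the numbers already mentioned in this thread: a 12GB container gets a 10GB heap (-Xms10240M -Xmx10240M), and a 14GB container would get a 12GB heap (-Xms12288M -Xmx12288M), keeping roughly 2GB outside the heap for off-heap JVM overhead (GC data structures, metaspace, thread stacks, direct buffers) and the OS.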
gc.log
Yeah we’ll take one more look
okay :D
How many cores do you have?
4 threads (400%)
So your GC is running efficiently, we can open it up to use more cores but that will take away from your server and any mods that may be multithreaded. Up to you
I’d say keep it the same for now.
It seems to be doing well and 10GB usage is realistically decent for doing 2 pregens at the same time.
Don’t have any mods with multithreading and the server needs all the perf it can get
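If you ever want to confirm how many workers the collector is actually using, the gc,init lines that -Xlog:gc* normally writes at startup report the parallel and concurrent worker counts, so something like the command below pulls them out of the log (assuming the gc.log file name used earlier in this thread):
grep -i "workers" gc.log
On a 4-core box with the defaults you'd expect the parallel worker count to match the core count and the concurrent worker count to be lower.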
Sounds good - happy this is resolved 🙂
Thank you so much again!
!solved
The post/thread has been closed!