Admincraft • 4w ago
Nikki

Anyone know what's causing this lag?

https://spark.lucko.me/sV9ZaSbzTR Seems to happen in intervals; I have no idea what's wrong with it (or why our non-heap memory and G1 Survivor Space keep maxing out), can anyone help? We aren't even maxing out on allocated CPU/RAM, it's super confusing and has been detrimental to most of our players. Plugins don't even seem to be using much CPU
spark
spark is a performance profiler for Minecraft clients, servers, and proxies.
17 Replies
Admincraft Meta
Admincraft Meta • 4w ago
Thanks for asking your question!
Make sure to provide as much helpful information as possible, such as logs, what you tried, and what your exact issue is.
Make sure to mark the post as solved when your issue is solved!
/close !close !solved !answered
Requested by nikkilectric#0
Admincraft Meta
Admincraft Meta • 4w ago
An error has occurred!
ReferenceError: version is not defined
at analyzeProfile (/home/ubuntu/admincraft/admincraft-meta/functions/analyzeProfile.js:174:3)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Object.execute (/home/ubuntu/admincraft/admincraft-meta/commands/profile.js:16:26)
This was most likely an error on our end. Please report this on GitHub.
Buddy
Buddy • 4w ago
Is this your own dedi, or? An Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz doesn't sound that fast
Nikki
Nikki (OP) • 4w ago
Hetzner dedi, it's only used by me
Buddy
Buddy • 4w ago
looks overkill
[image attachment]
Buddy
Buddy • 4w ago
For the rest, I don't see anything sus
Nikki
Nikki (OP) • 4w ago
Hmmm okay, is it normal for non-heap memory to be almost maxed out pretty much all the time? (also thank you-)
๐๐ข๐ช๐ฎ๐š๐ญ๐ž๐ซ๐ง๐ข๐จ๐ง๐ฌ
It's actually good that non-heap memory is maxed out, but you do have too much RAM allocated, which will make garbage collections take longer
Nikki
Nikki (OP) • 4w ago
Oh okay! Ty!
๐๐ข๐ช๐ฎ๐š๐ญ๐ž๐ซ๐ง๐ข๐จ๐ง๐ฌ
So far it seems like the most time-consuming task is chunk loading, which is expected. If you want more insight, run the spark profiler with the flag --only-ticks-over 150 when you feel it's starting to lag. Also, regarding the RAM, you might want to reduce it or change the GC in the startup flags
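For reference, the profiling run being suggested would look roughly like this from the server console (prefix with / in-game); this is a sketch assuming a current spark build, and the exact subcommands can differ slightly between spark versions:

spark profiler start --only-ticks-over 150
(let it run while the lag spikes are happening, then)
spark profiler stop

The --only-ticks-over 150 flag records only ticks that took longer than 150 ms, so the uploaded report highlights the spikes instead of averaging them into the quiet ticks.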
Nikki
Nikki (OP) • 4w ago
Alrighty! Ty! Change the GC? What would I change it to?
Carl-bot
Carl-bot • 4w ago
Garbage Collection is an important aspect of Java
Usually you'll want to fine-tune your server's GC to maximize efficiency while minimizing the negative effects (e.g. stuttering). The baseline recommendation is G1GC; you should follow Aikar's (a Paper core dev) recommendations on how to tweak it. On contemporary Java versions there's also ShenandoahGC, with even shorter pauses and smaller CPU overhead; hilltty's flagset is the usual recommendation there. If you feel adventurous, you can also try the latest of the three, ZGC. It isn't widely adopted yet, and digging up a good reference was somewhat hard. If you want a general overview of Java GCs, there's one in the making.
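To make the G1GC suggestion concrete, here is roughly what an Aikar-flags startup line looks like for a heap under about 12 GB; the heap size and jar name are placeholders to adapt, and the authoritative list lives in Paper's documentation:

java -Xms8G -Xmx8G \
  -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPauseMillis=200 \
  -XX:+UnlockExperimentalVMOptions -XX:+DisableExplicitGC -XX:+AlwaysPreTouch \
  -XX:G1NewSizePercent=30 -XX:G1MaxNewSizePercent=40 -XX:G1HeapRegionSize=8M \
  -XX:G1ReservePercent=20 -XX:G1HeapWastePercent=5 -XX:G1MixedGCCountTarget=4 \
  -XX:InitiatingHeapOccupancyPercent=15 -XX:G1MixedGCLiveThresholdPercent=90 \
  -XX:G1RSetUpdatingPauseTimePercent=5 -XX:SurvivorRatio=32 \
  -XX:+PerfDisableSharedMem -XX:MaxTenuringThreshold=1 \
  -jar paper.jar nogui

Note that -Xms is deliberately set equal to -Xmx, and that heaps above roughly 12 GB use slightly different new-gen and reserve percentages in Aikar's write-up.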
Admincraft Canned Responses
Nikki
Nikki (OP) • 4w ago
Thanks!
๐๐ข๐ช๐ฎ๐š๐ญ๐ž๐ซ๐ง๐ข๐จ๐ง๐ฌ
Shenandoah and ZGC perform better with large amounts of RAM, though I've learned that ZGC is mostly experimental. Final note: LightCleaner and BKCommonLib taking 10% of the main thread for just 3 players is too much. They're not likely to be the culprit, but you might want to keep an eye on those
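If you do end up trying one of the newer collectors, the minimal change is just swapping the GC selection flag in the startup line; heap size and jar name below are placeholders, and the full recommended flagsets (e.g. hilltty's for Shenandoah) add more tuning on top:

java -Xms16G -Xmx16G -XX:+UseZGC -jar paper.jar nogui
java -Xms16G -Xmx16G -XX:+UseShenandoahGC -jar paper.jar nogui

Both lines assume Java 17 or newer, where neither collector needs -XX:+UnlockExperimentalVMOptions anymore; Shenandoah availability also depends on which vendor's JDK build you run.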
Nikki
Nikki (OP) • 4w ago
Ofc! Tysm!!
Admincraft Meta
Admincraft Meta • 4w ago
Closed post!
Your post has been marked as solved!
Requested by nikkilectric#0
