Cryptic Neptune Gremlin Error Rate Creeping Up - What Would You Recommend?

This is more of a Neptune usage question, but it also concerns the Gremlin query error rate that shows up on the Monitoring page and, in our case, triggers a CloudWatch alarm.

Situation: We've noticed a gradual increase in unexplained Gremlin error counts, with a few popping up every several hours.

Actions Taken: We tried to pinpoint the cause by checking for Gremlin error exceptions in our internal logs. However, no related errors were found in the time frames when CloudWatch reported issues in the Neptune Gremlin audit logs. While we acknowledge occasional concurrency problems in our system, their timing doesn't align with the reported Gremlin error counts.

Concern: Our primary concern is the absence of a traditional event log on the Neptune Serverless instance we use, which makes it hard to correlate these errors with a potential cause. We're left wondering whether the discrepancy is due to some oversight on our end.

Request for Guidance: Is reaching out to AWS Support the best course of action here, or should we treat this as a minor hiccup and proceed as usual? Any insights or suggestions would be greatly appreciated.

P.S.: Last night I rebooted the entire cluster and the issue appeared to subside. We're running the same level of production traffic today, and it has just logged another one-error increase.
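For anyone trying the same kind of correlation, below is a minimal sketch (not a definitive approach) of one way to line up the GremlinErrors CloudWatch metric with the exported audit log. The cluster identifier and audit log group name are placeholders, and it assumes audit logs are being exported to CloudWatch Logs.

```python
# Sketch: find non-zero GremlinErrors datapoints for a Neptune cluster, then print
# audit-log events from the minutes around each one so the two can be compared.
# Assumptions: boto3 credentials/region are configured, "my-neptune-cluster" is a
# placeholder cluster id, and audit logs are exported to the log group below.
from datetime import datetime, timedelta, timezone

import boto3

CLUSTER_ID = "my-neptune-cluster"                       # placeholder
AUDIT_LOG_GROUP = f"/aws/neptune/{CLUSTER_ID}/audit"    # assumed export log group name

cloudwatch = boto3.client("cloudwatch")
logs = boto3.client("logs")

end = datetime.now(timezone.utc)
start = end - timedelta(hours=12)

# GremlinErrors is the per-cluster error count metric in the AWS/Neptune namespace.
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/Neptune",
    MetricName="GremlinErrors",
    Dimensions=[{"Name": "DBClusterIdentifier", "Value": CLUSTER_ID}],
    StartTime=start,
    EndTime=end,
    Period=300,              # 5-minute buckets
    Statistics=["Sum"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    if point["Sum"] == 0:
        continue
    print(f"{point['Timestamp']}  GremlinErrors={int(point['Sum'])}")

    # Pull audit-log events from a window around the errored bucket.
    window_start = int((point["Timestamp"] - timedelta(minutes=5)).timestamp() * 1000)
    window_end = int((point["Timestamp"] + timedelta(minutes=10)).timestamp() * 1000)
    events = logs.filter_log_events(
        logGroupName=AUDIT_LOG_GROUP,
        startTime=window_start,
        endTime=window_end,
    )
    for event in events.get("events", []):
        print("  ", event["message"][:200])
```

If the audit-log entries around each spike are only routine queries, that points back at something server-side rather than the application, which is where a support case helps.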
1 Reply
triggan
12mo ago
@ManaBububu - Yep, just open a support case and they can take a look to see what might be causing the errors.