Issues in SE region causing a massive number of jobs to be retried

The issues in the screenshot are causing 10% of my jobs to be retried in the SE region. Please fix this; it's not happening in the CA region.
digigoblin
digigoblinOP8mo ago
Obviously I am referring to the "Connection timeout" errors, which cause the job results to fail to be returned, and not the single exception among them.
Madiator2011
Madiator20118mo ago
@digigoblin DO YOU MIND SUBMITTING A TICKET ON THE WEBSITE? EASIER TO ESCALATE
digigoblin
digigoblinOP8mo ago
No need to shout but sure 😁
Madiator2011
Madiator20118mo ago
oops, sorry for the caps
digigoblin
digigoblinOP8mo ago
Ticket number is 4208
Madiator2011
Madiator20118mo ago
done
digigoblin
digigoblinOP8mo ago
Thank you
nerdylive
nerdylive8mo ago
hahaha wait, SE? My jobs work well btw
digigoblin
digigoblinOP8mo ago
You probably didn't try sending 1000 jobs today
nerdylive
nerdylive8mo ago
Yes yes
digigoblin
digigoblinOP8mo ago
I said 10% are retried, NOT ALL 🤦‍♂️
nerdylive
nerdylive8mo ago
I'm using dev on SE. Ooh, so 10% are expected to fail?
digigoblin
digigoblinOP8mo ago
They are retried; they don't fail
nerdylive
nerdylive8mo ago
Well, good luck with your problem
digigoblin
digigoblinOP8mo ago
RunPod needs to check it out. I switched to CA in the meantime and it works fine without any issues.
nerdylive
nerdylive8mo ago
yeah great to hear
digigoblin
digigoblinOP8mo ago
I was using CA but switched to SE because my jobs were failing. It turned out that was actually my own Redis server hitting OOM (out of memory), not a RunPod issue. I upgraded my ElastiCache instance on AWS from cache.t3.medium to cache.m4.large and now it's fine.
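For anyone hitting the same thing, here's a minimal sketch of how you could check Redis memory pressure and scale the ElastiCache node type, assuming redis-py and boto3; the endpoint hostname and replication group ID below are placeholders, not values from this thread:
```python
# Minimal sketch: inspect Redis memory usage, then scale the ElastiCache node type.
# Assumes redis-py and boto3 are installed and AWS credentials are configured.
import boto3
import redis

# Placeholder endpoint for the ElastiCache Redis cluster.
r = redis.Redis(host="my-cache.xxxxxx.euw1.cache.amazonaws.com", port=6379)

mem = r.info("memory")
print("used_memory_human:", mem["used_memory_human"])
print("maxmemory_human:", mem.get("maxmemory_human"))
print("maxmemory_policy:", mem.get("maxmemory_policy"))

# If used_memory is close to maxmemory, resize the node type
# (this kicks off an in-place scaling operation on the replication group).
client = boto3.client("elasticache")
client.modify_replication_group(
    ReplicationGroupId="my-redis-cluster",  # placeholder ID
    CacheNodeType="cache.m4.large",
    ApplyImmediately=True,
)
```
Checking `maxmemory_policy` is worthwhile too: with the default `volatile-lru` and no TTLs set, writes can fail outright once the limit is hit instead of evicting old keys.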
nerdylive
nerdylive8mo ago
Wow, you use ElastiCache? Why not self-hosted Redis?
digigoblin
digigoblinOP8mo ago
Because it's a cluster, not a single instance
nerdylive
nerdylive8mo ago
oh ic