Redis Timeout
Using StackExchange.Redis results in a timeout without a proper exception message. I tried increasing timeouts, distributing load to other nodes, and increasing the thread count. It hits the timeout during peak hours; the largest payload is around 7 MB and it's fetched frequently. I'm going to compress it, but I'd like some opinions on how to check the real error, since I can't access Redis directly to view the SLOWLOG.
28 Replies
7 MB payload? In Redis? That seems really large.
Yes we're trying to compress it
But we want to make sure that is the real problem
Good idea
How do I find the real error?
Accessing the logs would be the ideal way
but you say that's not an option
I can't access Redis right now because it's behind a weird security group setup in AWS
hm, no good ideas then. the client can't tell why the server timed out, that's just the nature of a server-side error
but you have the right idea, payload size is huge and potentially the source of your error, so verifying that would be a great start
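one client-side way to verify that, since the server's SLOWLOG isn't reachable: STRLEN reports a stored value's size in bytes without transferring it. a rough sketch (the endpoint and key name here are placeholders, not from your setup):

using System;
using StackExchange.Redis;

class PayloadSizeCheck
{
    static void Main()
    {
        // Placeholder endpoint; substitute your own cluster address.
        var mux = ConnectionMultiplexer.Connect("localhost:6379");
        IDatabase db = mux.GetDatabase();

        // STRLEN returns the length of the stored string value in bytes,
        // which is a cheap way to confirm which keys hold multi-megabyte payloads.
        long bytes = db.StringLength("some-large-key");
        Console.WriteLine($"Stored size: {bytes / (1024.0 * 1024.0):F2} MB");
    }
}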
Sure, will compress and update in case the error occurs again
BTW thanks, you really gave me the validation I needed
np
we use redis a lot at my work, also via the StackEx library
and we've never had any problems with it, but the largest individual value we store is probably ~100 KB
That's the minimum size of ours
Also, can you kindly look at our Redis connection class? We had doubts about a configuration issue as well
var options = new ConfigurationOptions
{
    EndPoints = { endpoint },
    ConnectRetry = 1,                  // Retry connection only once in case of failure
    ConnectTimeout = 3000,             // Set connection timeout to 3000 ms
    Ssl = true,                        // Enable SSL if required
    IncludeDetailInExceptions = true,
};

// Establish the connection using the configured options
ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(options);
var cache = ReadRedisConnectorHelper.Connection.GetDatabase();

try
{
    // Try to read from the replica
    data = cache.StringGet(ParamName, CommandFlags.PreferReplica);
}
catch (Exception ex)
{
    Console.WriteLine("Replica access failed, trying primary node: " + ex.Message);
    // Fall back to the primary node if the replica fails
    data = cache.StringGet(ParamName, CommandFlags.DemandMaster);
}
seems fine at a glance. We use it to back
IDistributedCache
and use it all via DI
Will study on that
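for reference, a minimal sketch of what that Redis-backed IDistributedCache registration via DI typically looks like, assuming the Microsoft.Extensions.Caching.StackExchangeRedis package (connection string and prefix are placeholders):

using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

// Registers a Redis-backed IDistributedCache implementation.
services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = "localhost:6379"; // placeholder connection string
    options.InstanceName = "demo:";           // optional key prefix
});

// Consumers then inject IDistributedCache instead of using
// ConnectionMultiplexer directly.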
worth noting: ConnectTimeout isn't the actual operation timeout
it's just for establishing the connection
so in ELI5 terms, if the connection can't be made within 3000 ms, it'll throw an error
but it's likely that the connection works and you are instead getting a response (sync) timeout
try setting
syncTimeout
to a higher value
(asyncTimeout defaults to syncTimeout, so that should set both)
default is 5000
So for now our API responds in about 3 seconds; increasing this will add delay, but will that fix the problem temporarily until we compress the data?
I don't know, but it's worth testing
it will turn errors into potential responses, albeit slower
it will not affect your current response times for working calls
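a minimal sketch of what that looks like on your existing options (10000 ms is just a test value, not a recommended permanent setting; endpoint is the same variable as in your snippet):

var options = new ConfigurationOptions
{
    EndPoints = { endpoint },
    ConnectTimeout = 3000,    // time allowed to establish the connection
    SyncTimeout = 10000,      // per-operation timeout for synchronous calls (default 5000 ms)
    AsyncTimeout = 10000,     // async calls; defaults to SyncTimeout when not set explicitly
    Ssl = true,
    IncludeDetailInExceptions = true,
};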
sure
Replica access failed, trying primary node: Timeout performing GET (5000ms), next: GET MNDAPIRECORDSAffliate, inst: 0, qu: 0, qs: 0, aw: False, bw: Inactive, rs: ReadAsync, ws: Idle, in: 0, last-in: 50441, cur-in: 647420, sync-ops: 4, async-ops: 1, serverEndpoint: dev-demo-0001-002.dev-demo.lketx2.euw2.cache.amazonaws.com:6379, conn-sec: 17.42, aoc: 1, mc: 1/1/0, mgr: 10 of 10 available, clientName: 169(SE.Redis-v2.6.111.64013), PerfCounterHelperkeyHashSlot: 1992, IOCP: (Busy=0,Free=1000,Min=2,Max=1000), WORKER: (Busy=1,Free=32766,Min=2,Max=32767), POOL: (Threads=6,QueuedItems=0,CompletedItems=3022), v: 2.6.111.64013 (Please take a look at this article for some common client-side issues that can cause timeouts: https://stackexchange.github.io/StackExchange.Redis/Timeouts)
This is our error
yeah so you are hitting the
AsyncTimeout
limit of 5000 ms
So I will increase the timeout. Until the fix, the delay won't be a problem.
exactly
a slow response is probably better than an error
so for now, bump that up a few seconds and see if it works
then you know for sure what the problem was and can work on reducing payload size etc
This is a valid point to help my argument with my senior
You have no idea how much embarrassment you saved me from
a 100 ms response will still be a 100 ms response.
but a >5000 ms response was previously an error, now it might be a response, and you can troubleshoot from there
I'm not suggesting you keep a really high timeout permanently, but verifying your ideas is important
For debugging, since I can't currently access Redis directly, it'll be nice to increase the timeout temporarily. Once I've compressed the data I can revert it, investigate the issue in that new state, and deduce more from there.
exactly
will check and update
Absolutely right. As per your last advice I compressed the data using GZIP; it takes an additional couple of seconds to decompress before we can work on it, but no issues on Redis so far.
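a rough sketch of that GZIP approach around the Redis value, in case it helps compare; the helper name is made up for illustration:

using System.IO;
using System.IO.Compression;

static class GzipHelper // hypothetical helper, not from the thread
{
    public static byte[] Compress(byte[] raw)
    {
        using var output = new MemoryStream();
        using (var gzip = new GZipStream(output, CompressionLevel.Fastest))
        {
            gzip.Write(raw, 0, raw.Length);
        }
        // MemoryStream.ToArray is valid even after the stream is closed.
        return output.ToArray();
    }

    public static byte[] Decompress(byte[] compressed)
    {
        using var input = new MemoryStream(compressed);
        using var gzip = new GZipStream(input, CompressionMode.Decompress);
        using var output = new MemoryStream();
        gzip.CopyTo(output);
        return output.ToArray();
    }
}

// Usage against the earlier snippet's cache/ParamName:
//   cache.StringSet(ParamName, GzipHelper.Compress(rawBytes));
//   byte[] rawBytes = GzipHelper.Decompress((byte[])cache.StringGet(ParamName));

CompressionLevel.Fastest trades some compression ratio for lower CPU cost; it's worth measuring both the size reduction and the extra seconds on your 7 MB payloads.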
I don't really have knowledge about MessagePack contractless or protobuf yet; will study them and let you know after implementing one.
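for reference, a minimal sketch of MessagePack contractless serialization, assuming the MessagePack-CSharp package (the Record type is a placeholder for your payload):

using MessagePack;
using MessagePack.Resolvers;

public class Record // placeholder type for illustration
{
    public int Id { get; set; }
    public string Name { get; set; }
}

class MessagePackDemo
{
    static void Main()
    {
        // Contractless resolver: no [MessagePackObject] attributes required on the type.
        var options = ContractlessStandardResolver.Options;
        var record = new Record { Id = 1, Name = "example" };

        // Serialize to a compact binary payload suitable for a Redis value.
        byte[] payload = MessagePackSerializer.Serialize(record, options);

        // Deserialize back to the original type.
        Record roundTripped = MessagePackSerializer.Deserialize<Record>(payload, options);
    }
}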