@DiscordJS/WS cache

By default, does @discordjs/ws cache anything? For context, I have the GuildPresences, Guilds, and GuildMembers intents enabled.
19 Replies
d.js toolkit · 5w ago
- What's your exact discord.js (npm list discord.js) and node (node -v) version?
- Not a discord.js issue? Check out #other-js-ts.
- Consider reading #how-to-get-help to improve your question!
- Explain what exactly your issue is.
- Post the full error stack trace, not just the top part!
- Show your code!
- Issue solved? Press the button!
- Marked as resolved by OP
Jaymart · 5w ago
I would assume not, seeing as it's barebones, but I'm somehow running at like 20 GB or more.
Jaymart · 5w ago
20k guilds; last I checked, somewhere around 20 million users. I'm currently rewriting it from Python lol, so the bot itself is very minuscule, so I'm not sure why it's eating up so much RAM unless there is stuff being cached by default.
Jaymart · 5w ago
like.
(image attached, no description)
Jaymart · 5w ago
All it's doing is receiving PresenceUpdate and pushing it to RabbitMQ, that's it. Unless somehow the data I'm pushing is staying within this instance? For more context:
import { Broker } from '@vanityroles/broker';
import { WebSocketManager } from '@discordjs/ws';
import { GatewayDispatchEvents, GatewayIntentBits, Client } from '@discordjs/core';
import { REST } from '@discordjs/rest';
import { createClient } from 'redis';
import dotenv from 'dotenv';

dotenv.config();

// Set up your REST client
const rest = new REST({ version: '10' }).setToken(`${process.env.DISCORD_TOKEN}`);

// Initialize your broker
const broker = new Broker();

// Create Redis client
const redisClient = createClient({ url: process.env.REDIS_PRIVATE_URL });

const connectRedis = async () => {
  try {
    if (!redisClient.isOpen) {
      await redisClient.connect();
    }
  } catch (error) {
    console.error('Failed to connect to Redis:', error);
  }
};

const getClusterId = async (): Promise<number> => {
  await connectRedis();
  const clusterId = await redisClient.incr('cluster_id_counter');
  return clusterId - 1; // Adjusting for zero-based indexing
};

const getShardIds = (totalShards: number, clusterId: number, clusters: number): number[] => {
  const shardsPerCluster = Math.ceil(totalShards / clusters);
  const shardIds = [];

  for (let i = 0; i < shardsPerCluster; i++) {
    const shardId = clusterId * shardsPerCluster + i;
    if (shardId < totalShards) {
      shardIds.push(shardId);
    }
  }

  return shardIds;
};

const delay = (ms: number) => new Promise(resolve => setTimeout(resolve, ms));

const main = async () => {
  try {
    await connectRedis();

    const totalShards = Number(process.env.TOTAL_SHARDS) || 15;
    const totalClusters = Number(process.env.TOTAL_CLUSTERS) || 3;

    // Get cluster ID from Redis
    const clusterId = await getClusterId();

    const shardIds = getShardIds(totalShards, clusterId, totalClusters);
    const shardCount = totalShards;

    console.log(`Cluster ID: ${clusterId}. Shard IDs: ${shardIds}. Shard Count: ${shardCount}`);

    // Calculate the delay based on the clusterId
    const delayInterval = Number(process.env.DELAY_INTERVAL_MS) || 5000; // Default interval of 5 seconds
    const delayMs = clusterId * delayInterval;
    console.log(`Cluster ID: ${clusterId}. Waiting for ${delayMs}ms before starting the gateway...`);
    await delay(delayMs);

    // Configure the gateway with sharding information
    const gateway = new WebSocketManager({
      token: `${process.env.DISCORD_TOKEN}`,
      intents: GatewayIntentBits.GuildPresences,
      rest,
      shardCount,
      shardIds,
    });

    // Initialize the client
    const client = new Client({ rest, gateway });

    // Event handler for presence updates
    client.on(GatewayDispatchEvents.PresenceUpdate, async ({ data: interaction }) => {
      try {
        await broker.sendToQueue('presence', { data: interaction });
      } catch (error) {
        console.error("Failed to handle message: ", error);
      }
    });

    // Event handler for when the bot is ready
    client.once(GatewayDispatchEvents.Ready, () => console.log("Ready!"));

    // Connect to the gateway
    await gateway.connect();
    console.log("Gateway connected");
  } catch (error) {
    console.error('Failed to start bot:', error);
  } finally {
    // Close Redis client only after all operations are done
    await redisClient.quit();
  }
};

void main();
DD · 5w ago
Nah, this is on you most likely. You're not acking the RMQ messages or smth, idk. By default, messages stay in queues until consumed and acked, yes. Also, side note, but the way you're handling delays could be better: use buildIdentifyThrottler and write a custom one.
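A custom throttler along those lines could replace the fixed per-cluster startup delay. The sketch below is self-contained: the interface here only mirrors the shape of IIdentifyThrottler from @discordjs/ws as I understand it (the real waitForIdentify also takes an AbortSignal, and the installed version's typings should be checked), and SpacedIdentifyThrottler is a hypothetical name.

```typescript
// Minimal sketch of a custom identify throttler. The interface mirrors the
// assumed shape of IIdentifyThrottler from @discordjs/ws; verify against the
// installed version before relying on it.
interface IIdentifyThrottler {
  waitForIdentify(shardId: number): Promise<void>;
}

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Spaces out identify calls per rate-limit bucket (shardId % maxConcurrency)
// instead of sleeping clusterId * interval before starting the whole cluster.
class SpacedIdentifyThrottler implements IIdentifyThrottler {
  private readonly nextAllowed = new Map<number, number>();

  public constructor(
    private readonly maxConcurrency: number,
    private readonly intervalMs = 5_000,
  ) {}

  public async waitForIdentify(shardId: number): Promise<void> {
    const bucket = shardId % this.maxConcurrency;
    const now = Date.now();
    const allowedAt = this.nextAllowed.get(bucket) ?? now;
    // Reserve the next slot for this bucket, then wait our turn if needed.
    this.nextAllowed.set(bucket, Math.max(allowedAt, now) + this.intervalMs);
    if (allowedAt > now) await sleep(allowedAt - now);
  }
}
```

It would then be handed to the manager via the buildIdentifyThrottler option, e.g. `new WebSocketManager({ ..., buildIdentifyThrottler: async () => new SpacedIdentifyThrottler(1) })`, so each shard waits exactly as long as the rate limit requires rather than a hard-coded multiple.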
Jaymart · 5w ago
They are tho, according to the UI.
(image attached, no description)
DD · 5w ago
What is that mem chart you showed? Is it server-wide usage?
Jaymart · 5w ago
Railway, and no, container only.
DD · 5w ago
Can't really tell then. There's an off chance /ws has some sort of memory leak, but I doubt it. It does not cache anything.
Jaymart · 5w ago
Yeah, that's what I figured. Ima disable the event and see what happens.
DD · 5w ago
There are bots bigger than yours using it that run fine.
Jaymart · 5w ago
Yeee haha, was ab to say I hope I'm not doing that lmao. I mean, the UI shows them being consumed fast enough, but is it possible that in real time they aren't actually? And that's causing the build-up of mem?
Jaymart · 5w ago
I say that because of the massive dips in memory.
(image attached, no description)
Jaymart · 5w ago
At one point in there it went from 8 GB and jumped to 25, and it does that until it just hits the mem limit.
Jaymart · 5w ago
So I was logging as well to see if it matched up.
(image attached, no description)
Jaymart · 5w ago
It's total used between the 3 instances. Okay, I disabled the broker/queue and it is indeed the issue. I don't think there are enough consumers, even though the UI is reporting so.
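That symptom, publisher memory climbing until a cliff, is consistent with write-buffer backpressure. Assuming the broker wraps amqplib (the @vanityroles/broker internals aren't shown here, so that's a guess), amqplib's sendToQueue returns false when the connection's write buffer is full and the channel emits 'drain' once it empties; ignoring that return value lets unsent payloads pile up in process memory. A minimal sketch of honoring it, with a stand-in channel so the example is self-contained:

```typescript
import { EventEmitter, once } from 'node:events';

// Stand-in for an amqplib Channel: sendToQueue buffers the payload and
// returns false when the buffer is "full", emitting 'drain' once it clears.
// (amqplib's real Channel has the same boolean-return / 'drain' contract.)
class FakeChannel extends EventEmitter {
  public buffered: Buffer[] = [];

  public sendToQueue(_queue: string, content: Buffer): boolean {
    this.buffered.push(content);
    const full = this.buffered.length >= 2;
    if (full) {
      // Simulate the broker catching up shortly afterwards.
      setTimeout(() => { this.buffered = []; this.emit('drain'); }, 10);
    }
    return !full;
  }
}

// Publish with backpressure: when sendToQueue returns false, pause until the
// channel drains instead of queueing more payloads in process memory.
async function publish(channel: FakeChannel, queue: string, payload: unknown): Promise<void> {
  const ok = channel.sendToQueue(queue, Buffer.from(JSON.stringify(payload)));
  if (!ok) await once(channel, 'drain');
}
```

Without something like this, every PresenceUpdate the gateway delivers faster than RabbitMQ accepts sits inside the Node process, which would match the sawtooth memory chart.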
Jaymart · 5w ago
(image attached, no description)