Issue with application commands after logging in after client#destroy()

If I log in again after calling client.destroy(), the bot logs in fine, but the application commands aren't registered again. Am I doing something wrong? Some example code:
import { SapphireClient, ApplicationCommandRegistries, RegisterBehavior } from '@sapphire/framework';
// Constants is my own config module (intents/partials/presence).

let client = new SapphireClient({
  intents: Constants.INTENTS,
  partials: Constants.PARTIALS,
  presence: Constants.PRESENCE,
});
ApplicationCommandRegistries.setDefaultBehaviorWhenNotIdentical(
  RegisterBehavior.Overwrite,
);

await client.login(process.env.TOKEN);
await client.destroy();

client = new SapphireClient({
  intents: Constants.INTENTS,
  partials: Constants.PARTIALS,
  presence: Constants.PRESENCE,
});
await client.login(process.env.TOKEN);
On the first login I get the following (I log "Bot has connected" on the ready event):
ApplicationCommandRegistries: Initializing...
Bot has connected.
ApplicationCommandRegistries: Took 6ms to initialize.
But on the second login I only get:
Bot has connected.
So it seems like ApplicationCommandRegistries isn't being initialised again. Should I be recreating the SapphireClient, or is there another way I should be doing this? Thanks!
19 Replies
Favna
Favna•4mo ago
I'd first like to know what your use case is for this because I can't think of any
Fozzie
FozzieOP•4mo ago
Admittedly it is a strange one, and a concept I haven't gotten working yet, so it's still all in theory! I want to set up 3 replicas in Kubernetes, but obviously don't want to run 3 instances at the same time. Sharding won't work for me because it's a bot in a single guild. I am trying to set up an election concept where a pod is designated as a "leader". This pod may be demoted and re-promoted many times, hence the destroy and the login. I could just process.exit when it's demoted and wait for Kubernetes to restart the pod, but that doesn't seem ideal if the health check still passes and the pod is otherwise still functioning normally.
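Roughly, the promote/demote cycle I have in mind looks like this (just a sketch: onPromoted/onDemoted are hypothetical callbacks that my election logic would fire, and Constants is the config module from the snippet above):
import { SapphireClient } from '@sapphire/framework';

let client = null;

// Hypothetical callback fired when this pod wins the election.
async function onPromoted() {
  client = new SapphireClient({
    intents: Constants.INTENTS,
    partials: Constants.PARTIALS,
    presence: Constants.PRESENCE,
  });
  await client.login(process.env.TOKEN);
}

// Hypothetical callback fired when this pod is demoted.
async function onDemoted() {
  // Disconnect from the gateway so the new leader can take over.
  await client?.destroy();
  client = null;
}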
Favna
Favna•4mo ago
So I know this doesn't exactly answer your question but rather is a diversion of the topic, but ultimately I want to give you the best advice I can. I've always been a fervent proponent of saying that discord.js (and by extension Sapphire) is not the right library to use for such complex landscapes, because of how Discord gateways work. You're far better off having a landscape where you:
- Have 1 pod of a message queueing system (i.e. KubeMQ (k8s native), RabbitMQ, or Kafka)
- Have 1 pod of a Discord gateway/API connector (sketched below) that has the exclusive task of dumping messages on the incoming queue. This can use @discordjs/core as a minimal discord.js package without the discord.js overhead. This also needs 0 caching, which afaik is not part of /core.
  - Note that if you're using interaction-based commands you probably do want this pod to set the interaction deferral, because you're adding a bit of networking overhead here and you cannot guarantee replying within 3 seconds.
  - Note that this should be only 1 instance, because otherwise you will get race conditions, maybe unless you do a bunch of k8s magic, but I wouldn't know about that.
- Have N pods of workers that pick messages from the incoming queue, process them, and put them on the outgoing queue
- Have N pods of processors that pick messages from the outgoing queue and send out the replies to Discord. These could use @discordjs/core or maybe even @discordjs/rest (in particular if you only use interaction-based commands)
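For illustration, the gateway pod could look roughly like this. This is only a sketch: the 'incoming' queue name and AMQP_URL are placeholder assumptions, and it uses RabbitMQ via amqplib, but any of the queueing systems above would do.
// Gateway/API connector pod: its only job is to dump gateway payloads
// on the incoming queue for the workers to pick up.
import { REST } from '@discordjs/rest';
import { WebSocketManager } from '@discordjs/ws';
import { Client, GatewayDispatchEvents, GatewayIntentBits } from '@discordjs/core';
import amqp from 'amqplib';

const token = process.env.TOKEN;
const rest = new REST({ version: '10' }).setToken(token);
const gateway = new WebSocketManager({
  token,
  rest,
  intents: GatewayIntentBits.Guilds | GatewayIntentBits.GuildMessages,
});
const client = new Client({ rest, gateway });

// 'incoming' and AMQP_URL are placeholders for your own setup.
const connection = await amqp.connect(process.env.AMQP_URL);
const channel = await connection.createChannel();
await channel.assertQueue('incoming');

client.on(GatewayDispatchEvents.MessageCreate, ({ data }) => {
  // Forward the raw payload untouched; the workers do the real processing.
  channel.sendToQueue('incoming', Buffer.from(JSON.stringify(data)));
});

await gateway.connect();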
Fozzie
FozzieOP•4mo ago
I appreciate you going through that, that's certainly a much better design. I do wonder how it would handle Kubernetes node failures though, if you're limited to the single pod with the API connector. Especially if this is a StatefulSet, which I imagine it would be to avoid concurrency issues, Kubernetes wouldn't automatically reschedule those in the event of node failures. Edit: I guess this might be fine actually, if you just have 3 replicas and load balance messages across them all. I 100% understand that my solution is a hack (I think all mutexes / locks / elections like these are), but at ~20 lines it kinda does work (aside from the application commands issue), and with my limited experience of the Discord API, I think it would handle Kubernetes node failures fine. Obviously yours would definitely be better, but I don't have the experience to even attempt that right now. Is it by design that Sapphire doesn't support re-logging in with the application commands? If not, I'm happy to take a look at the repo when I have time to see if it can be solved.
Favna
Favna•4mo ago
It's not intentional; it's just never been brought up before.
Fozzie
FozzieOP•4mo ago
Fair enough, I did see message events going through, so it does seem to just be that ApplicationCommandRegistries initialisation.
Favna
Favna•4mo ago
My best guess is that while you construct a new Sapphire client, you assign it to the same client variable, which already holds a pointer to an instance of the class, and something funky happens with the construction of the class. But I cannot guarantee at all that if you create a new variable it will just magically start working. I also think the container doesn't get fully reinitialised with your setup, because of its dependency-injection nature, which might be related. You could try to add a line container = {} after the destroy to test that.
Fozzie
FozzieOP•4mo ago
Yeah I'll have a play around tomorrow. I appreciate your pointers, sounds like a fun weekend project for me 😂
Favna
Favna•4mo ago
and you may also want to add
container.stores.forEach((store) => {
  store.clear();
});
even before that line, considering that Maps definitely persist like a cache in ES6
Fozzie
FozzieOP•4mo ago
Yeah, I was just looking through the code; I imagine the issue is somewhere like container.stores.loadPiece, where the piece is already loaded
Favna
Favna•4mo ago
oh yeah there is also store.unloadAll ofc
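Putting those together, the teardown before logging back in might look something like this (an untested sketch; whether it actually makes the registries initialise again is exactly what's in question):
// Unload every piece from every store before disconnecting,
// so the next login starts from a clean slate.
await Promise.all(
  container.stores.map((store) => store.unloadAll()),
);
await client.destroy();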
Fozzie
FozzieOP•4mo ago
This could also be avoided if the Discord API just cut off any previous clients when a new one logged in ¯\_(ツ)_/¯ Unless there's an API method to do that which I don't know about?
Fozzie
FozzieOP•4mo ago
I did come across this serenity issue which suggests that there is not https://github.com/serenity-rs/serenity/issues/1054
Fozzie
FozzieOP•4mo ago
I guess you could regenerate your token every time you started a new instance, although not sure if there's an API method for that!
Favna
Favna•4mo ago
there isn't. @Fozzie if / when you figure out what it was, can you post it so we can mark it as the answer?
Fozzie
FozzieOP•4mo ago
Will do. I had a brief try yesterday with the unloadAll / clear but it didn't work
Favna
Favna•4mo ago
@Fozzie any updates?
Fozzie
FozzieOP•4mo ago
Sorry, pretty busy atm so I won't be able to spend time on it for another week or so
Favna
Favna•4mo ago
Ping me if you have an update so I can unfollow this thread