slow performance and 500 errors with large data
I've looked over older questions and just want to clarify if this is something to be optimized in v4?
In my load test I have 20,000 users, and the page returns a 500 error. If I scale down to around 15,000 users, the page will load but runs extremely slowly.
Is there anything available to optimize that I may be missing?
35 Replies
Example of my table
What is this doing?
canManageAnyOrganisations
I'd also look at:
Because you are calling
createActionGroup
and we can't see what that is.
But I have 1,000,000 users and my performance is good.
provided, of course, you are loading only 25-50 at a time
Ensure you have indexed the users correctly too.
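If it helps, here's a minimal sketch of what that might look like in a migration; the column names are just assumptions, so index whatever your table actually searches, filters, and sorts on:

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        Schema::table('users', function (Blueprint $table) {
            // Hypothetical columns -- index the ones you actually filter/sort on.
            $table->index('name');
            $table->index('created_at');
        });
    }

    public function down(): void
    {
        Schema::table('users', function (Blueprint $table) {
            $table->dropIndex(['name']);
            $table->dropIndex(['created_at']);
        });
    }
};
```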
I scale down to around 15,000 users, the page will load but runs extremely slowly.
Just to make sure: do you have 15k users in the DB and issues, or are you trying to load 15k users at once?
So default pagination on the page is set to 25, so it's not loading them all at once.
How am I checking this?
The function is this:
Also, the canManageAnyOrganisations function is just checking that data exists.
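For what it's worth, a "does any related row exist" check is usually cheapest as an exists() query rather than loading the relation; this is only a guess at the shape, since the actual function isn't shown here:

```php
// Hypothetical sketch -- the real canManageAnyOrganisations() isn't shown above.
// exists() avoids hydrating the related models at all, but if the table calls
// this per row it is still one extra query for every row on the page.
public function canManageAnyOrganisations(): bool
{
    return $this->managedOrganisations()->exists();
}
```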
So yeah, I don't believe this is lazy loaded, could that be the reason?
So what you need to do is install Telescope, then check the request to that page and debug the queries. You'll likely be looping somewhere, or have a really poorly indexed relation, etc. Nothing should take more than a second. I've just seeded my DB with 10,000,000 records and the page still loads in 2 seconds. Technically it shouldn't really slow it down if an index is there.
Yeah, mine's like 10 seconds with 18k users, so something must be wrong with my code.
I'll install Telescope now
Yeah. I think it's slow 🤣
Just a little
Maybe it's to do with this then, I think
Yep...
So click into it; you will be loading a user in again on each query, I suspect.
First remove the actions to see if that solves it, then add them back one at a time if it does.
failing that, remove the modify query and see if that helps
I'm down to 700ms (still bad, I know) when I take out this
So, I think it's the relation to roles...
Good work so far though!
Whenever I try to load the relationship, that's what causes the delay.
Was this a fix along the lines of indexing like you mentioned?
eager load the relationship 😉
But also look at the relationship, are you using a good package with it?
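For illustration, eager loading in the table query would look something like this, assuming this is a Filament table and the "modify query" mentioned earlier is modifyQueryUsing():

```php
use Illuminate\Database\Eloquent\Builder;

// Sketch only: load roles once alongside the page of users,
// instead of letting each row lazy load them separately.
$table->modifyQueryUsing(fn (Builder $query) => $query->with('roles'));
```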
I've tried with this..
This jumps back up to 3 seconds
Using the althinect spatie roles package
So that should be well indexed as it's spatie, I use it too.
You loaded with roles, so that's good. The user managedOrganisations is likely the issue...
you are plucking constantly
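As a rough sketch of the difference (the relationship name is taken from the discussion above, the rest is assumed):

```php
// Per-row version: a pluck like this fires one extra query for every row shown.
$canManage = $user->managedOrganisations()->pluck('id')->isNotEmpty();

// Eager-loaded version: fetch the relation once with the page of users...
$users = User::query()->with(['roles', 'managedOrganisations'])->paginate(25);

// ...then per-row checks run against the already-loaded collection, no extra queries.
$canManage = $user->managedOrganisations->isNotEmpty();
```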
even if I just return $query->with('roles'); none of the extra stuff, it still jumps to 3 seconds
Then it has a lot of UUIDs. This is the main delay
That's an additional query for each. You want to with() the managedOrganisations on the user. I would put that outside, so
The roles one makes sense, double check the index on there, but it looks normal to me
So tried this, still 3 seconds.
The select roles query specifically is 160.00ms
So 3 seconds isn't terrible. But feel free to look and optimise each bit as much as you can 🙂
❤️
ChatGPT/Claude is actually quite good if you pass it your whole query and ask for optimisation ideas; if you are not a DB expert it can make suggestions etc.
Sometimes takes a bit of probing with ChatGPT, but I'm already on it 🤣
So managed to get it to 800ms, still a little slow on page load for what I'd like, but hey ho.
These are the changes
Nice
Just a quick question with this. I'm still only on 170k users (as seeding is taking ages). But do you use default pagination or simple?
I've noticed default takes longer due to the count query
Seeding is taking ages? Are you on a mac?
I use default, but default does a count so it is slower, as it needs to work out the pages. Simple doesn't 😉
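In plain Eloquent terms the difference looks like this:

```php
// Default (length-aware) pagination: runs a SELECT COUNT(*) over the whole
// result set on every page load so it can render numbered page links.
$users = User::query()->paginate(25);

// Simple pagination: no count query; it fetches one extra row (26 here) just to
// know whether a next page exists, so it only shows previous/next links.
$users = User::query()->simplePaginate(25);
```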
Yeah, that's understandable.
Nah, on Linux.
I keep getting hit by the memory limit when trying to import a large batch, so I'm currently on a smaller chunk size and it's just taking ages to seed
Linux should be relatively fast. What is your user seeding looking like? Inserting, right?
So currently using this
I've tried using make() instead of create to then use $users->chunk, but that was complaining about timestamp format.
This is my factory
Just keep getting hit with memory limit randomly during it all
Ahh that's why your factories are slow in comparison with big data. I would tend to run a DB insert
Ah, so typically avoid factories for large inserts. I'll change and give that a go
Exactly, factories are great for complex relational data. But they use the model and model events which means it slows everything down with big data.
I'll change that over and give it a run.
Would you still run it in a for loop typically?
Not one insert at a time, no, a bulk insert
so loop through the code to build the inserts, say 1,000 at a time, and insert the 1,000 in one go
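A rough sketch of that kind of seeder, bypassing the model layer entirely; the column list is an assumption, so adjust it to your actual users schema:

```php
<?php

namespace Database\Seeders;

use Illuminate\Database\Seeder;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Hash;

class UserSeeder extends Seeder
{
    public function run(): void
    {
        // Hash once outside the loop; bcrypt per row is a big chunk of seeding time.
        $password = Hash::make('password');
        $now = now()->toDateTimeString();

        $rows = [];

        for ($i = 1; $i <= 1_000_000; $i++) {
            $rows[] = [
                'name'       => 'User ' . $i,
                'email'      => 'user' . $i . '@example.test',
                'password'   => $password,
                'created_at' => $now,
                'updated_at' => $now,
            ];

            // Flush in batches of 1,000 so memory stays flat.
            if (count($rows) === 1000) {
                DB::table('users')->insert($rows);
                $rows = [];
            }
        }

        // Insert any remainder.
        if ($rows !== []) {
            DB::table('users')->insert($rows);
        }
    }
}
```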
Got to 21k inserts, then memory limit exhausted again 🤣
I'll just re-run it several times still
Oh wait, I've read it wrong still...
However, the next thing I've found is actions, specifically edit and delete, which have some model condition checks. These seem to add 200ms onto the call. Is there any way to optimise these, or is this the best way?