How can I speed up multiple asynchronous requests to a single fetch endpoint?
The code I will post works properly, but the one problem I've encountered is that it isn't optimized for fetching a lot of chunks of data; it tends to overload/lag. What kind of fetching should I use that's optimized for fetching a lot of data? I'm clueless now. Thanks for the help, guys! 🙂
Note:
Backend: PHP 5.6
Database: MongoDB 5.6
This is the table where I want to append all the data that will be fetched from the server
Fetching of Data - Pastebin.com
This is how I fetch data from the database
This is the data fetched from the database.
https://pastebin.com/7VQRAyzq This is the Customers class, with the function and query I am trying to fetch with
hello @jochemm, can you take a look at it? I will try to explain it further
where does it hang? I don't really have the time to look at it in detail, but I don't necessarily see anything wrong?
yes, there's nothing wrong with it as of now
but the problem is it tends to lag the server
it's not efficient yet, and I don't know what question I should google
the proper question, I mean
If you take a peek at my JS, I did a do-while loop for fetching the same link multiple times
I don't know if it's well optimized or not
you're awaiting the fetch, so it won't run more than one at a time. If it's lagging the server, it might be because of the backend and not the frontend
Do you know how I can put a delay on it?
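The snippet being referred to in the next message isn't in the transcript; a minimal sketch of the usual await-able delay pattern (the function name `sleep` is my own, not from the original code):

```javascript
// Wrap setTimeout in a Promise so it can be awaited inside a loop.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function demo() {
  console.log("before");
  await sleep(2000); // pauses here for 2 seconds
  console.log("after");
}

demo();
```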
that will wait for 2 seconds (2000ms) and then continue to the next line
that's probably the quickest, easiest way to just introduce a delay
but again, if your server is getting overloaded, this code running on one browser isn't going to be the issue. If there's a lot of rows in that customer table, fetching rows further and further down is going to take longer and longer, depending on how the table's been indexed, how many joins there are in the query...
There a lot of joins and query I did in backend ( Not me ) but I am just continuing it because the first programmer doesnt want me to altered it
So doing that kind of fetch doesnt affect the code right?
not sure what you mean by that?
as you can see on how I did fetch it
I did a do while loop
take a look at this
okay, but what do you mean by "affect the code"?
I did it because I want to to get the customer length
I mean the running time
to load the customers
I am iterating on a single link only
trying to get the customer length at first
ah, the while loop doesn't change how long each fetch will take, no
as for getting the total customer count, this is a really inefficient way of getting that. The server can do that much more quickly if it has an endpoint for that, or if it just includes that data in each returned request
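A sketch of the "server provides the count" idea from the frontend side; the endpoint URL and response shape are assumptions (not from the original backend), and the network call is mocked so the example is self-contained:

```javascript
// Stand-in for fetch(url).then(r => r.json()); the endpoint name
// "/api/customers/count" and the { total } shape are hypothetical.
async function fetchJson(url) {
  const mockResponses = {
    "/api/customers/count": { total: 1400 },
  };
  return mockResponses[url];
}

// One request instead of paging through the whole collection.
async function getCustomerCount() {
  const { total } = await fetchJson("/api/customers/count");
  return total;
}
```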
each fetch only gets 20 customers, then the query skips what was already fetched; for example, at first it's skip(20), then skip(40), and so forth
the counter I made is the while loop
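A hedged reconstruction of the do-while paging being described (the real code is behind the Pastebin links); the page size matches the 20-per-fetch mentioned above, but `fetchPage` and the dataset are mocks I added to make it runnable:

```javascript
const PAGE_SIZE = 20;

// Mock data source standing in for the PHP/MongoDB backend.
const allCustomers = Array.from({ length: 47 }, (_, i) => ({ id: i }));
async function fetchPage(skip) {
  return allCustomers.slice(skip, skip + PAGE_SIZE);
}

// Walk every page sequentially, as in the do-while loop above.
async function countBySequentialPaging() {
  let total = 0;
  let page;
  do {
    page = await fetchPage(total);     // skip(0), skip(20), skip(40), ...
    total += page.length;
  } while (page.length === PAGE_SIZE); // stop on the first short page
  return total;
}
```

This is the linear approach: the number of fetches grows with the number of customers, which is why it lags for large datasets.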
if you can't change the backend, then you're stuck doing it this way, but it's a problem that's much better solved on the server
I can change some of the backend, but not totally alter the criteria for getting the data
so what can you suggest for getting the customer count?
mongodb is my database
php as backend
doing it on the server is the correct solution.
If you want to optimize this, and you're discarding the fetched customer data anyway, you could increase the skip a lot, and then backtrack in large steps until you find the correct last page...
Say you have 1400 records, and you need to count.
First fetch for display data gives you 20 records, so you know there's at least 20.
The second fetch sets skip to 1000. You still get 20, so you know there's at least 1000 records.
Third, you set it to 2000 (or 10,000; how big these steps are should be determined by the size of your dataset), but you get 0 results, so you know there's less than 2000
You can then take half the difference, and check at 1500, still 0, so then half that difference with the last time you got results, and you get 20 back at skip 1250.
Then you go up again, to halfway between 1250 and 1500, so at 1375 you still get 20 results.
1500-1375 is 125, so you check at 1437 and get 0
You can keep halving the distance you check, until you get a result. If you tune it correctly, you could go from 70 fetches to find 1400 rows, to maybe half a dozen?
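The steps above amount to a gallop-then-bisect search on the skip offset. A sketch under assumptions: `fetchPage` and the dataset are mocks, and at least one record is assumed to exist, but the search logic is the technique being proposed:

```javascript
const PAGE_SIZE = 20;
const TOTAL = 1400; // hidden from the algorithm; it must discover it

async function fetchPage(skip) {
  // Returns up to PAGE_SIZE records starting at `skip`.
  const remaining = Math.max(0, TOTAL - skip);
  return new Array(Math.min(PAGE_SIZE, remaining)).fill({});
}

async function hasRecordAt(skip) {
  return (await fetchPage(skip)).length > 0;
}

async function countByGalloping() {
  // 1) Gallop: keep doubling the skip until a fetch comes back empty.
  let lo = 0;          // highest offset known to hold a record
  let hi = PAGE_SIZE;  // candidate upper bound
  while (await hasRecordAt(hi)) {
    lo = hi;
    hi *= 2;
  }
  // 2) Bisect between lo (has a record) and hi (empty).
  while (hi - lo > 1) {
    const mid = Math.floor((lo + hi) / 2);
    if (await hasRecordAt(mid)) lo = mid;
    else hi = mid;
  }
  return lo + 1; // lo is the last offset with a record, so count = lo + 1
}
```

For 1400 rows this needs roughly 20 fetches instead of 70, and the gap widens as the dataset grows, since the cost is logarithmic rather than linear.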
but that's definitely a hack, and the answer should really be to ask the backend dev to provide you with that count, either in each query, or from a separate endpoint
So you're basically telling me not to count the customers directly, but rather to guess it using an algorithm, or what?
guess is a big word, you just use a different search algorithm to get to the correct answer
you'll have to make a sane guess for the average number of rows returned.
you pretty much double the difference between your first guess and the next every time you find rows still, and halve the difference every time you don't, and you can zero in on the exact number pretty quickly.
you still end up with the exact correct number in the end, but instead of downloading all your customers every time, doing lots of calls, you try to limit the number of calls you do and only take samples
don't get me wrong though, it's still a bad solution to the problem, it'll just be a quicker bad solution than the naive linear approach
Yeah thanks for the idea
hello jochem, do you know if async can tend to lag the DOM when appending data to the table?
it shouldn't