Job Hanging/Not Working when processing a large loop

I'm running a job on my database that currently involves iterating over 800 elements and making an API call for each one, but it never gets past the first iteration, and nothing logs out. Does it look like I'm doing anything wrong? I've cut out some parts of the code to stay within message length restrictions (mainly irrelevant variables).
```ts
export const updatePlatformStats = async (_args, context) => {
  const unverifiedUsers = await context.entities.UnverifiedUser.findMany({
    where: {
      platformUsername: {
        not: null,
      },
    },
  });

  for (let user of unverifiedUsers) {
    const fullResponse = await fetch(
      `https://api.someapi.com/v1/profile-videos?handle=${user.platformUsername}&amount=10`
    );

    if (!fullResponse.ok) {
      continue;
    }

    const videos = await fullResponse.json();
    const formattedVideos = Array.from(videos).map((video: any) => {
      return {
        hashtags: video.challenges.map((challenge) => challenge.title),
        stats: video.stats || null,
        description: video.desc,
        createdAt: video.createTime || null,
      };
    });

    // ...calculation of parsedEngagementRate, averageViews, hashtagsUsed and
    // lastVideoDate from formattedVideos cut out for message length...

    try {
      const res = await context.entities.UnverifiedUserStats.upsert({
        where: {
          userId: user.id,
        },
        update: {
          totalEngagementRate: parsedEngagementRate,
          averageViews,
          labelsUsed: [],
          hashtagsUsed,
          lastVideoDate,
        },
        create: {
          totalEngagementRate: parsedEngagementRate,
          averageViews,
          labelsUsed: [],
          hashtagsUsed,
          lastVideoDate,
          user: { connect: { id: user.id } },
        },
      });
    } catch (error) {
      console.error('Database upsert failed:', error);
    }
  }
};
```
kapa.ai (2w ago)
Hi @CamBlackwood. I'm kapa.ai, your friendly support bot. I will try to answer your question by searching through the knowledge base. :wave:

Based on the provided code, you're doing a lot of work within a single job, including making API calls and interacting with the database, and any issue with the API or the database operations could cause the job to hang. A few things to consider:

1. Error Handling: Make sure you have proper error handling in place. If the fetch call fails for any reason, it could cause the job to hang. You might want to wrap your fetch call and other operations in a try/catch block (a sketch follows this message).
2. API Rate Limiting: If you make many API calls in a short time, you might hit the API's rate limit, causing it to reject your requests and your job to hang. Check whether the API has rate limits and make sure you stay within them.
3. Database Operations: Many database operations in a single job can also cause hangs, especially if any of them are slow or there are issues with the database. Consider breaking the job into smaller jobs that each do less work.
4. Job Persistence and Retries: Wasp supports job persistence and retries, so a job that fails can be retried. This is useful for temporary failures.
5. Logging: If you're not seeing any logs from your job, add more logging to help debug the issue, e.g. log the job's progress at various points to see how far it gets before hanging.

Remember, Wasp is still in the early stages of supporting background jobs, and there may be limitations that need to be worked out. If you continue to have issues, consider reaching out to the Wasp community for help.

Sources:
- Wasp Docs: Advanced > Recurring Jobs (https://wasp-lang.dev/docs/advanced/jobs)
- GitHub Issue: Improve access to Entities in Jobs (https://github.com/wasp-lang/wasp/issues/607)
- GitHub Issue: Support for background jobs - Phase 1 (https://github.com/wasp-lang/wasp/issues/530)
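To make point 1 concrete, one way to guard the fetch might look like the sketch below; the helper name fetchProfileVideos and the 10-second timeout are illustrative, not from the thread:

```ts
// A minimal sketch: wrap the fetch in try/catch and abort slow requests,
// so one hung call can't stall the whole loop.
// AbortSignal.timeout requires Node 17.3+.
async function fetchProfileVideos(handle: string): Promise<any[] | null> {
  try {
    const res = await fetch(
      `https://api.someapi.com/v1/profile-videos?handle=${handle}&amount=10`,
      { signal: AbortSignal.timeout(10_000) } // 10s is an arbitrary example value
    );
    if (!res.ok) {
      console.warn(`API returned ${res.status} for ${handle}`);
      return null;
    }
    return await res.json();
  } catch (error) {
    // Covers network failures and the timeout abort alike.
    console.error(`Fetch failed for ${handle}:`, error);
    return null;
  }
}
```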
CamBlackwood (2w ago)
@kapa.ai how can I break it up into smaller jobs?
kapa.ai (2w ago)
Let me check the knowledge base for you. One moment! :books:

I'm sorry, but the provided knowledge sources don't contain specific information on how to break a large job into smaller jobs in Wasp. Generally, though, you can consider the following strategies:

1. Batch Processing: Instead of processing all the elements at once, process them in smaller batches and create a separate job for each batch.
2. Queue: Create a queue of jobs, where each job processes a single element or a small batch. Once a job completes, the next job in the queue is processed.
3. Parallel Processing: If the processing of each element is independent of the others, create a separate job per element and process them in parallel.

Remember to handle errors and edge cases properly in each job so the overall process stays robust. For more specific guidance, see the Wasp documentation (https://wasp-lang.dev/docs) or ask in the Wasp community.
CamBlackwood (2w ago)
@kapa.ai could you provide a code example of each of those?
kapa.ai (2w ago)
Let me check the knowledge base for you. One moment! :books:

I'm sorry, but the provided knowledge sources don't contain specific code examples for breaking a large job into smaller jobs in Wasp. However, here is a general idea of how you might approach each strategy:

1. Batch Processing: Process the elements in smaller batches, creating a separate job for each batch. Here's a pseudo-code example:
```ts
const batchSize = 100; // Process 100 elements at a time
for (let i = 0; i < unverifiedUsers.length; i += batchSize) {
  const batch = unverifiedUsers.slice(i, i + batchSize);
  // Create a new job for each batch
  createJob(batch);
}
```
2. Queue: Create a queue of jobs, where each job processes a single element or a small batch. Once a job completes, the next one in the queue runs. Here's a pseudo-code example:
```ts
const queue = new Queue();
unverifiedUsers.forEach(user => {
  // Add each user to the queue
  queue.add(() => processUser(user));
});
```
3. Parallel Processing: If the processing of each element is independent of the others, create a separate job per element and process them in parallel. Here's a pseudo-code example:
```ts
unverifiedUsers.forEach(user => {
  // Create a new job for each user
  createJob(user);
});
```
Remember to handle errors and edge cases properly in each job so the overall process stays robust. For more specific guidance, see the Wasp documentation (https://wasp-lang.dev/docs) or ask in the Wasp community.
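For reference, wiring the createJob(...) pseudo-code above up to a real Wasp job would roughly mean submitting batches to a job declared in main.wasp. A sketch, assuming a PgBoss-backed job named processUserBatch exists and the Wasp 0.12+ import path (both are assumptions; check your version's docs):

```ts
// Sketch only: processUserBatch is a hypothetical job declared in main.wasp.
import { processUserBatch } from 'wasp/server/jobs';

const batchSize = 100;
for (let i = 0; i < unverifiedUsers.length; i += batchSize) {
  const batch = unverifiedUsers.slice(i, i + batchSize);
  // Pass only ids; the job can re-fetch the users via its own entities context.
  await processUserBatch.submit({ userIds: batch.map((u) => u.id) });
}
```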
CamBlackwood (2w ago)
@kapa.ai for parallel processing, it would still run in the same job though, wouldn't it? Or are you saying that I should pull the API call out into its own function and then call that in a forEach - isn't that essentially what I'm doing?
kapa.ai (2w ago)
Let me check the knowledge base for you. One moment! :books:

You're correct that in the parallel processing example, each iteration of the loop would still be part of the same job. The idea is to start processing each user as soon as possible rather than waiting for the previous one to finish, but this would all still happen within the context of a single job.

If you want to truly break the work up into separate jobs, you would need to create a new job per user or per batch of users. However, the provided knowledge sources don't contain specific information on how to create multiple jobs in Wasp.

In your current code, you process each user sequentially within a single job, so if one user fails or takes a long time, it can block the processing of all the remaining users. Breaking the work into separate jobs, or processing users in parallel, avoids that.

Again, for more specific guidance, see the Wasp documentation (https://wasp-lang.dev/docs) or ask in the Wasp community.
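Even within a single job, parallelism with error isolation is possible without any job framework. A sketch using Promise.allSettled, where processUser stands in for the fetch-and-upsert body of the original loop and the concurrency limit is an arbitrary example value:

```ts
// Process users in chunks of CONCURRENCY at a time within the same job.
// allSettled (unlike Promise.all) never rejects, so one failing user
// can't abort the rest of its chunk.
const CONCURRENCY = 10; // illustrative value

for (let i = 0; i < unverifiedUsers.length; i += CONCURRENCY) {
  const chunk = unverifiedUsers.slice(i, i + CONCURRENCY);
  const results = await Promise.allSettled(
    chunk.map((user) => processUser(user, context))
  );
  results.forEach((result, idx) => {
    if (result.status === 'rejected') {
      console.error(`Processing failed for user ${chunk[idx].id}:`, result.reason);
    }
  });
}
```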
CamBlackwood (2w ago)
Just in case anyone is following this - I'm gonna re-architect it, I think, so that it doesn't fetch them all in one go, as the list is only going to get larger.
miho (2w ago)
@CamBlackwood what did you end up doing? In general, how I've approached this before is to have a 1-minute job that checks whether there's work to do and, if so, takes a chunk (e.g. 50 items) to process. When the job triggers the next time, it takes another 50, and so on. This requires you to keep "state" info in the DB for each item (pending, processing, finished), plus timestamps for when those transitions happened, so you know whether to process an item now or later. Some of this can be seen in the Mage codebase: https://github.com/wasp-lang/wasp/tree/main/mage/src/server/jobs
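A rough sketch of the pattern miho describes, assuming hypothetical status and statsUpdatedAt fields were added to the UnverifiedUser entity (neither is in the thread's schema):

```ts
// Cron-style job body: each run claims up to 50 pending users, processes
// them, and marks them done, so the backlog drains one chunk per tick.
// status and statsUpdatedAt are hypothetical schema additions.
export const updatePlatformStatsChunk = async (_args, context) => {
  const batch = await context.entities.UnverifiedUser.findMany({
    where: { platformUsername: { not: null }, status: 'PENDING' },
    take: 50,
  });

  for (const user of batch) {
    await context.entities.UnverifiedUser.update({
      where: { id: user.id },
      data: { status: 'PROCESSING' },
    });

    await processUser(user, context); // the fetch + upsert from the original job

    await context.entities.UnverifiedUser.update({
      where: { id: user.id },
      data: { status: 'DONE', statsUpdatedAt: new Date() },
    });
  }
};
```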
CamBlackwood (2w ago)
@miho In the end, I decided it was actually more efficient to only call the API when necessary, i.e. when a user's profile is visited. Then, if the stats haven't been updated in the past 24h, an API call is made again. I realised it would cost me a significant amount of money to do it the way I was trying (1000+ profiles, every day). Thanks for sending that through though, it makes sense as an approach, and I think as my app grows I'll need to do something like that. So if I understand correctly, you essentially built a queueing system? It's a clever approach, I never considered that.
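That on-demand refresh could look roughly like the sketch below, in whatever query serves the profile page; the 24-hour window comes from the message above, while getProfile and statsUpdatedAt are made-up names:

```ts
// Lazily refresh stats when a profile is viewed and the cached data is
// older than 24 hours. statsUpdatedAt is a hypothetical field.
const ONE_DAY_MS = 24 * 60 * 60 * 1000;

export const getProfile = async ({ userId }, context) => {
  const user = await context.entities.UnverifiedUser.findUnique({
    where: { id: userId },
  });
  if (!user) return null;

  const stale =
    !user.statsUpdatedAt ||
    Date.now() - user.statsUpdatedAt.getTime() > ONE_DAY_MS;

  if (stale) {
    await processUser(user, context); // re-fetch from the API and upsert stats
  }
  return user;
};
```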
miho (2w ago)
> you essentially built a queueing system

Yes, in a way I did. This prompts me to think that maybe Wasp needs a built-in queueing system of some kind. Hmmm, but it's hard to know when something is a good Wasp feature vs. good material for a tutorial 😄 Would you find that useful? Either the feature or the tutorial.
CamBlackwood (2w ago)
Both would be really useful!