Currently, in my Node.js backend, I send a request to my serverless endpoint like this:
```js
console.log('Starting initial processing with RunPod API...');
const initialProcessingResponse = await retryWithBackoff(async () => {
  return await axios.post(
    `${process.env.RUNPOD_RUNSYNC_ENDPOINT}`,
    {
      input: {
        api: {
          method: 'POST',
          endpoint: '/bop',
        },
        payload: {
          image: originalImageBase64,
          scratch: true,
          hr: true,
          face_res: true,
          cpu: false,
        },
      },
    },
    {
      headers: { Authorization: `Bearer ${process.env.RUNPOD_API_TOKEN}` },
    }
  );
}, MAX_RETRIES);
```
However, it's very inconsistent: sometimes it works without errors, but without any change to the backend or the Docker image (Stable Diffusion), it randomly fails with:

```
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1.
```
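(`retryWithBackoff` and `MAX_RETRIES` aren't shown in the snippet; a minimal sketch of what such a helper might look like, assuming exponential backoff — the `baseDelayMs` parameter and the `2 ** attempt` scaling are illustrative, not from the original code:)

```javascript
// Hypothetical sketch of the retry helper used above.
// Calls `fn` up to `maxRetries` times, doubling the delay between attempts.
async function retryWithBackoff(fn, maxRetries, baseDelayMs = 1000) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === maxRetries - 1) throw err; // out of retries: rethrow
      const delay = baseDelayMs * 2 ** attempt;  // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}

const MAX_RETRIES = 3;
```

Note that retries won't help with the CUDA error below — they just mask it on the attempts that happen to land on a single device.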
Is there a way to force it to run on one GPU, or what do you guys suggest?
What do you mean? Doesn't it always run on the GPU you choose in the endpoint?
It's not the backend code shown here; I think it's a problem with your worker code.
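On the worker side, a common fix for this error is to hide all but one GPU from the process before any CUDA library is imported, via `CUDA_VISIBLE_DEVICES`; a sketch in Python (the handler structure and variable names are assumptions, not from the thread):

```python
import os

# Must be set before torch (or any other CUDA library) is imported,
# so the worker process only ever sees a single GPU.
os.environ.setdefault("CUDA_VISIBLE_DEVICES", "0")

# Later, inside the handler (assuming PyTorch), keep the model and its
# inputs on one explicit device instead of relying on defaults:
#   import torch
#   device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
#   model = model.to(device)
#   batch = batch.to(device)
```

The same effect can be had without code changes by setting `CUDA_VISIBLE_DEVICES=0` in the container's environment (e.g. in the Dockerfile or the endpoint's environment variables), which is often the least invasive fix for a multi-GPU worker.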