fireice
RunPod
Created by fireice on 11/14/2024 in #⚡|serverless
How to Get the Progress of the Processing Job in Serverless?
with self.progress_bar(total=num_inference_steps) as progress_bar:
    for i, t in enumerate(timesteps):
        ...
        runpod.serverless.progress_update(job, f"Finished step {i + 1} / {len(timesteps)}")
Maybe I should put it here, at the end of each step of execution? (See the sketch after this message.)
10 replies
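For reference, a self-contained sketch of that placement, with a time.sleep stand-in instead of a real denoising step; the input key num_inference_steps is illustrative, but runpod.serverless.progress_update(job, ...) is the documented call:

import time
import runpod

def handler(job):
    num_steps = job["input"].get("num_inference_steps", 30)
    for i in range(num_steps):
        time.sleep(0.1)  # stand-in for one denoising step
        # send one progress update at the end of each completed step
        runpod.serverless.progress_update(job, f"Finished step {i + 1} / {num_steps}")
    return {"steps_completed": num_steps}

runpod.serverless.start({"handler": handler})

A client polling the endpoint's /status route then sees one updated progress message per completed step.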
RunPod
Created by fireice on 11/14/2024 in #⚡|serverless
How to Get the Progress of the Processing Job in Serverless?
My requirement is to generate only one photo each time. For the progress updates, I need the system to send a progress update after each step during the generation of a single photo. If generating one photo takes 30 steps, I expect an update after each step so that the client can display the progress as N / 30.
10 replies
RunPod
Created by fireice on 11/14/2024 in #⚡|serverless
How to Get the Progress of the Processing Job in Serverless?
Can you supply real project code? The documentation is too simple; I can't follow it. In a real project, I don't know how long each step will take, so code like "for update_number in range(0, 3): runpod.serverless.progress_update(job, f"Update {update_number}/3")" will not work. (See the handler sketch after this message.)
10 replies
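A sketch of what a fuller handler could look like, assuming a diffusers pipeline recent enough to support the callback_on_step_end argument; the model id, pipeline setup, and input keys are only assumptions for illustration, not RunPod's official example:

import base64
import io

import runpod
import torch
from diffusers import AutoPipelineForText2Image

# load the pipeline once at module import, outside the handler
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

def handler(job):
    job_input = job["input"]
    num_steps = job_input.get("num_inference_steps", 30)

    # fires after every denoising step, so the client can display N / num_steps
    def on_step_end(pipe, step, timestep, callback_kwargs):
        runpod.serverless.progress_update(job, f"{step + 1} / {num_steps}")
        return callback_kwargs

    image = pipeline(
        prompt=job_input["prompt"],
        num_inference_steps=num_steps,
        callback_on_step_end=on_step_end,
    ).images[0]

    # return the single generated photo as base64
    buf = io.BytesIO()
    image.save(buf, format="PNG")
    return {"image_base64": base64.b64encode(buf.getvalue()).decode()}

runpod.serverless.start({"handler": handler})

With this shape there is no need to guess how long each step takes: the pipeline itself invokes the callback once per step.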
RunPod
Created by fireice on 11/14/2024 in #⚡|serverless
How to Get the Progress of the Processing Job in Serverless?
Thank you
10 replies
RunPod
Created by fireice on 7/23/2024 in #⚡|serverless
Why "CUDA out of memory" today? Same image to generate a portrait: yesterday it was OK, today it is not.
OK, I see, I will test.
47 replies
RunPod
Created by fireice on 7/23/2024 in #⚡|serverless
Why "CUDA out of memory" today? Same image to generate a portrait: yesterday it was OK, today it is not.
I am the developer. When I use my AI app, I get "CUDA out of memory". I did not change anything in the app.
47 replies
RunPod
Created by fireice on 7/4/2024 in #⚡|serverless
Can I select the GPU type based on the base model in a Python script?
I see, thanks
9 replies
RunPod
Created by fireice on 7/4/2024 in #⚡|serverless
Can I select the GPU type based on the base model in a Python script?
No description
9 replies
RunPod
Created by fireice on 6/5/2024 in #⛅|pods
Can I use torch 2.3.0 + CUDA 11.8 on RunPod?
No other scripts; just runpod.serverless.start({"handler": handler}) in handler.py (see the minimal sketch below).
23 replies
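For reference, a minimal handler.py of that shape; the body of the handler is only a placeholder:

import runpod

def handler(job):
    job_input = job["input"]
    # ... real work goes here ...
    return {"echo": job_input}

runpod.serverless.start({"handler": handler})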
RunPod
Created by fireice on 6/5/2024 in #⛅|pods
Can I use torch 2.3.0 + CUDA 11.8 on RunPod?
Should I install requirements.txt in my prod env myself? Or, when I use 'runpodctl project deploy', will the dependencies be installed automatically in the prod env?
23 replies
RunPod
Created by fireice on 6/5/2024 in #⛅|pods
Can I use torch 2.3.0 + CUDA 11.8 on RunPod?
The latest xformers is 0.0.26 and it needs torch 2.3.0, but now I hit a problem when I use torch 2.3.0. I am checking now.
23 replies
RunPod
Created by fireice on 6/5/2024 in #⛅|pods
Can I use torch 2.3.0 + CUDA 11.8 on RunPod?
OK, I will check whether the code needs xformers or not.
23 replies
RunPod
Created by fireice on 6/5/2024 in #⛅|pods
Can I use torch 2.3.0 + CUDA 11.8 on RunPod?
Thanks, I will check whether to delete xformers or not.
23 replies
RunPod
Created by fireice on 6/5/2024 in #⛅|pods
Can I use torch 2.3.0 + CUDA 11.8 on RunPod?
The dependencies are listed in the requirements.txt file. When running runpodctl project dev, don't these dependencies get downloaded and installed automatically?
23 replies
RunPod
Created by fireice on 6/5/2024 in #⛅|pods
Can I use torch 2.3.0 + CUDA 11.8 on RunPod?
OK, I will check.
23 replies
RunPod
Created by fireice on 6/5/2024 in #⛅|pods
Can I use torch 2.3.0 + CUDA 11.8 on RunPod?
Traceback (most recent call last):
  File "/runpod-volume/55fd91b5/prod/instantid/src/handler.py", line 14, in <module>
    import diffusers
ModuleNotFoundError: No module named 'diffusers'
I confirm that diffusers 0.27.0 is already installed. Why does this issue always occur in serverless? (A quick way to check which environment the worker is using is sketched below.)
23 replies
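One way to check that kind of mismatch, as a sketch: log which interpreter and module search path the worker process is actually using right before the failing import; nothing here is RunPod-specific:

import importlib.util
import sys

# show which interpreter and search path this worker process uses
print("python executable:", sys.executable)
print("sys.path:", sys.path)

# check whether diffusers is visible to this interpreter at all
spec = importlib.util.find_spec("diffusers")
print("diffusers found at:", spec.origin if spec else None)

If find_spec returns None, the package was installed into a different environment than the one the serverless worker runs in.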
RunPod
Created by fireice on 6/5/2024 in #⛅|pods
Can I use torch 2.3.0 + CUDA 11.8 on RunPod?
Because xformers 0.0.26 (RECOMMENDED, linux & win), the latest stable installed with pip, requires PyTorch 2.3.0. I am testing now.
23 replies
RunPod
Created by fireice on 5/22/2024 in #⚡|serverless
Timeout in the JavaScript SDK does not work
I see: in index.ts there is import { curry, clamp, isNil } from "ramda", but I had not installed ramda before, so run = curry(...) did not work. Now it works.
16 replies
RunPod
Created by fireice on 5/22/2024 in #⚡|serverless
Timeout in the JavaScript SDK does not work
16 replies
RunPod
Created by fireice on 5/22/2024 in #⚡|serverless
Timeout in the JavaScript SDK does not work
Just now, I changed the default number from 3000 to 300000; it did not work, it still times out at 3000.
16 replies