InnerSun
RunPod
Created by InnerSun on 8/17/2024 in #⛅|pods
Official Template not running correct version of CUDA
Hello! I'm trying to run a pod using the official templates:
- runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04
- runpod/pytorch:2.0.1-py3.10-cuda11.8.0-devel-ubuntu22.04
Unless I've completely misunderstood the notation, these images should run with CUDA 11.8.0, right? I've tried Secure Cloud RTX 4090 and Secure Cloud RTX 6000 Ada pods. All of them start with:
2024-08-17T12:23:11.943448391Z ==========
2024-08-17T12:23:11.943453191Z == CUDA ==
2024-08-17T12:23:11.943456021Z ==========
2024-08-17T12:23:11.959357698Z
2024-08-17T12:23:11.959372989Z CUDA Version 11.8.0
However, I noticed when running nvidia-smi that the reported CUDA version doesn't match:
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03 Driver Version: 535.129.03 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 4090 On | 00000000:01:00.0 Off | Off |
| 0% 29C P8 17W / 450W | 3MiB / 24564MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
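(For reference, the "CUDA Version" field in the nvidia-smi header reports the maximum CUDA version the installed *driver* supports, not the CUDA toolkit installed inside the container — `nvcc --version` or `torch.version.cuda` inside the pod report the latter. A minimal sketch, using a hypothetical helper, that pulls the driver-side value out of a header line like the one above:)

```python
import re

def parse_driver_cuda(smi_header: str) -> str:
    """Extract the driver-supported CUDA version from an nvidia-smi header line.

    Note: this is the driver's *maximum supported* CUDA version, which can be
    newer than the toolkit bundled in the container image.
    """
    m = re.search(r"CUDA Version:\s*([\d.]+)", smi_header)
    return m.group(1) if m else ""

header = "| NVIDIA-SMI 535.129.03  Driver Version: 535.129.03  CUDA Version: 12.2 |"
print(parse_driver_cuda(header))  # → "12.2"
```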
Any idea what's wrong, and is there a workaround?
SolidJS
Created by InnerSun on 2/21/2024 in #support
Export Vercel Functions config on @solidjs/start@0.5.9
Hello! I've managed to update my 0.3.0 project to 0.5.9 by scooping info here and there from the GitHub commits & issues. However, I'm having an issue with API routes when they are deployed as Vercel Functions.

I've got API routes that trigger a long call that builds a PDF on the Node server:
- test-export: bare-bones export that creates a 256px image and exports it as a PDF using jsPDF
- high-res-export: complex export that assembles large pages and exports them as a PDF using jsPDF

Local dev: everything works fine; both test-export and high-res-export succeed.
Vercel deployment: test-export works, but high-res-export times out at 15 seconds (FUNCTION_INVOCATION_TIMEOUT), the default timeout for a Pro subscription.

The thing is, I can't find a way to bump the maxDuration setting of my API routes on Vercel (https://vercel.com/docs/functions/configuring-functions/duration):
- Exporting a config object or a maxDuration variable doesn't work (it is ignored by Vercel as far as I can see).
- Configuring a vercel.json file doesn't work because the API routes are not defined in an api/ folder at the root of the project. The Vercel build fails with:
The pattern "api/**/*" defined in `functions`
doesn't match any Serverless Functions inside the `api` directory.
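(For context, the vercel.json shape from the linked docs looks like the sketch below — the glob pattern is an assumption taken from the build error above. Since SolidStart emits its server output outside a root-level api/ directory, this pattern matches no functions, which is what triggers the error.)

```json
{
  "functions": {
    "api/**/*": {
      "maxDuration": 60
    }
  }
}
```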