obi1

google colab image

I used the Colab image available at us-docker.pkg.dev/colab-images/public/runtime:latest. The image works and gives the following logs. I added port 9000 to the HTTP ports to expose in the pod settings, but the dialog that appears after clicking Connect says the HTTP service is not ready yet.
[screenshot of the pod logs]
4 Replies
Madiator2011 · 2mo ago
It’s because the Docker image from Colab is made to listen only on localhost.
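(If you still want to try it, one possible workaround is overriding the container start command in the pod template so the notebook server binds to 0.0.0.0 instead of 127.0.0.1. This is only a sketch: the Colab runtime image's real entrypoint isn't shown in this thread, and the flags below assume a standard Jupyter server.)

```bash
# Hypothetical start-command override so the notebook server listens on all
# interfaces instead of only localhost. Flag names assume a standard Jupyter
# server; the Colab runtime image may use a different entrypoint.
# --ip=0.0.0.0 : listen on all interfaces so RunPod's HTTP proxy can reach it
# --port=9000  : must match the HTTP port exposed in the pod settings
jupyter notebook --ip=0.0.0.0 --port=9000 --no-browser --allow-root
```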
obi1 · 2mo ago
@Papa Madiator so there is no way of using the Colab image on RunPod?
Madiator2011 · 2mo ago
Why would you want to use it? It's just running JupyterLab, nothing else.
obi1 · 2mo ago
It's just that it comes packed with most of the packages I need. Or do you suggest that I just write a bash script and install them that way, using the standard image provided by RunPod?
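(If the goal is mainly the preinstalled packages, a small one-time setup script on a standard RunPod template is probably the simpler route. A sketch, with an illustrative package list only; swap in whatever the notebooks actually need.)

```bash
#!/usr/bin/env bash
# Example one-time setup for a standard RunPod image (e.g. the PyTorch template).
# The package list is illustrative, not the actual Colab package set.
set -euo pipefail

pip install --upgrade pip
pip install numpy pandas matplotlib scikit-learn opencv-python-headless pillow tqdm
```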