RunPod · 3mo ago
suhail

issue with websocket (wss) port on runpods

hi there! i'm working on a real-time SDXL example and have tried several times, but for some reason the WebSocket (wss) port never works on RunPod. it works fine over HTTP, but not over wss. any help would be appreciated!
15 Replies
Madiator2011 (Work)
Move your port from HTTP to TCP, as the Cloudflare proxy does not support WebSockets
suhail (OP) · 3mo ago
can you please elaborate?
Madiator2011 (Work)
move your app's port from the HTTP ports to the TCP ports in your template
suhail (OP) · 3mo ago
yep, i did: the Flask server is running, but i still can't connect to the wss URL. here is my code for a simple wss server:
from flask import Flask
from flask_socketio import SocketIO, emit
from diffusers import AutoPipelineForText2Image
import torch
from io import BytesIO
import base64
import time

app = Flask(__name__)
socketio = SocketIO(app)

print("Loading the SDXL Turbo model... This might take a moment.")
model = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
)
model.to("cuda" if torch.cuda.is_available() else "cpu")
print("Model loaded! Ready to generate images.")

def image_to_base64(image):
    buffered = BytesIO()
    image.save(buffered, format="PNG")
    return base64.b64encode(buffered.getvalue()).decode('utf-8')

@socketio.on('generate_image')
def handle_generate_image(data):
    prompt = data.get('prompt', '')
    if not prompt:
        emit('error', {'message': 'No prompt provided'})
        return
    start_time = time.time()
    image = model(prompt=prompt, num_inference_steps=1).images[0]
    time_taken = time.time() - start_time
    image_base64 = image_to_base64(image)
    emit('image_generated', {'image_base64': image_base64, 'time_taken': f"{time_taken:.2f} seconds"})

if __name__ == '__main__':
    socketio.run(app, host='0.0.0.0', port=5000, debug=True)
Madiator2011 (Work)
you need to go to the Connect button, look at the TCP mapping, and connect to ip:port
suhail (OP) · 3mo ago
thanks, found the IP and port, but it still doesn't connect via ws:
wscat -c ws://141.193.30.26:40274

error: Unexpected server response: 400
and i don't see any error from the Flask server btw. the app works fine on a Google Cloud server, so i'm not sure what the issue could be with the RunPod GPU, or i might be missing something
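[editor's note] One likely explanation for the 400, sketched below: Flask-SocketIO speaks the Socket.IO protocol (Engine.IO underneath), so the WebSocket handshake must target the `/socket.io/` path with its protocol query parameters rather than the server root. A bare `wscat -c ws://host:port` connects to `/`, which the server correctly rejects. The `EIO=4` parameter assumes a recent Flask-SocketIO (Engine.IO v4); older releases use `EIO=3`.

```python
# The TCP mapping from the messages above.
HOST = "141.193.30.26"
PORT = 40274

def socketio_ws_url(host: str, port: int) -> str:
    # Engine.IO v4 handshake path and query string; a plain WebSocket
    # connection to "/" never reaches the Socket.IO handler, hence the 400.
    return f"ws://{host}:{port}/socket.io/?EIO=4&transport=websocket"

print(socketio_ws_url(HOST, PORT))
```

Pointing wscat at this URL (or using a Socket.IO client library instead of a raw WebSocket client) is the quickest way to check whether the server itself is reachable.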
Madiator2011 (Work)
websocket over tcp should work fine
Encyrption · 3mo ago
I found that for WebSocket, when using the RunPod proxy HTTP port, you have to construct your wss URL like this:
WS_URL = f'wss://{POD_ID}-{WS_PORT}.proxy.runpod.net/ws'
PLEASE note the /ws on the end. Without it, this will not work. If you are using a TCP port, then you will have to provide SSL and DNS for your worker to get the benefits of the underlying HTTPS required for wss.
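[editor's note] A minimal sketch of that URL rule; the pod ID and port below are hypothetical placeholders, not values from this thread:

```python
# Hypothetical placeholders for illustration only.
POD_ID = "abc123xyz"
WS_PORT = 5000

def build_proxy_ws_url(pod_id: str, port: int, path: str = "/ws") -> str:
    # The RunPod HTTP proxy exposes the pod as {pod_id}-{port}.proxy.runpod.net
    # over HTTPS, so WebSocket clients use the wss:// scheme; per the tip
    # above, the trailing "/ws" path is the part that is easy to miss.
    return f"wss://{pod_id}-{port}.proxy.runpod.net{path}"

print(build_proxy_ws_url(POD_ID, WS_PORT))
```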
Madiator2011 (Work)
@Encyrption hah I did not know about this one nice find
Encyrption · 3mo ago
consider it my going away gift... LOL
Madiator2011 · 3mo ago
going away?
Encyrption · 3mo ago
Yeah, it seems RunPod cannot support the application I have been building: an AI marketplace where users can come and run many models. It seems with RunPod I can either have a small number of fast-responding endpoints or a large number of slow-responding endpoints. I don't see a path to success with those limitations. 😦
Madiator2011 · 3mo ago
Hmm, how so? Feel free to send me some details
Encyrption · 3mo ago
Sure, should I send via DM?
Madiator2011 · 3mo ago
ok