Trouble with SSH via PuTTY
I want to use PuTTY for X11 forwarding. I asked ChatGPT for help figuring this out, showing it the SSH connection popup.
1. I took the ed25519 private key I had generated earlier (the public key of which I already put on my profile)
2. Used PuTTYgen to convert it to a .ppk, then went back to PuTTY and stuck that .ppk in SSH -> Auth -> Credentials
3. Entered 194.26.196.142 for the IP address and port 22
Then I opened up a terminal and it asked for 'login as:'. I wasn't sure, so I just wrote in root, and it said 'Server refused our key' and then asked for "[email protected]'s password:", so I typed in the passphrase for my private key, but that didn't work either.
Anyone got any clue where I'm going wrong?
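For reference, here's what I think the same connection looks like with the plain OpenSSH client (just a sketch; the username and key path are placeholders, since I don't actually know what login name the server expects):

```shell
# Equivalent of the PuTTY setup above with stock OpenSSH:
#   -X  enable X11 forwarding
#   -i  path to the ed25519 private key (placeholder path)
#   -p  port 22 (the default, shown here for clarity)
# USERNAME is a placeholder -- the correct login name depends on the server.
ssh -X -i ~/.ssh/id_ed25519 -p 22 [email protected]
```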
4 replies
Running LLaMA remotely from a Python script
Hi, new here, relatively new to LLMs as well. VERY new to remote computing.
I've got a llama-cpp python script that I've been tinkering with on my laptop, and I'd like to rent a remote GPU to speed up text generation.
I haven't found a tutorial on how to do what I really want: to simply hit 'run' from VS Code and have the LLM set up on a remote GPU, so that I can send my prompts over and receive the generated text back.
I'm testing a system with an unconventional prompting scheme, so I can't just use an existing web UI, and I'd prefer to develop from my IDE rather than in Jupyter. Anyone got any tips, or could point me in the right direction?
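In case it helps clarify what I'm after, here's the kind of pattern I have in mind (a sketch, assuming the model is exposed on the remote GPU box via llama-cpp-python's OpenAI-compatible server, `python -m llama_cpp.server`; the host and port below are placeholders for the rented machine):

```python
import json
import urllib.request


def build_payload(prompt, max_tokens=128):
    """Request body for an OpenAI-compatible /v1/completions endpoint."""
    return {"prompt": prompt, "max_tokens": max_tokens}


def generate(prompt, host="203.0.113.5", port=8000):
    """Send a prompt to a remote llama.cpp server and return the generated text.

    host/port are placeholders for the rented GPU box; the server is assumed
    to be llama-cpp-python's OpenAI-compatible HTTP server.
    """
    req = urllib.request.Request(
        f"http://{host}:{port}/v1/completions",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]
```

Then the local script I run from VS Code would just call `generate("my prompt")` and the custom prompting logic stays entirely on my laptop.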
24 replies