RunPod · 9mo ago
Adam?

Is there any method to deploy BERT-architecture models serverlessly?
Solution:
@Adam? https://www.runpod.io/console/explore then select this...
9 Replies
nerdylive · 8mo ago
Hi! One way is using Hugging Face's transformers: cache the model to some path first, then load it from that relative/absolute path. Put the model in the image, or use a network volume.
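A minimal sketch of that cache-then-load pattern. The checkpoint name (`bert-base-uncased`) and cache path (`/models/bert`) are placeholders, not anything from the thread; `save_pretrained`/`from_pretrained` are the standard transformers APIs for this.

```python
# Cache-then-load pattern: download the model once (e.g. at image build time),
# then load it offline from a local path at runtime (image or network volume).

MODEL_NAME = "bert-base-uncased"  # placeholder checkpoint
MODEL_DIR = "/models/bert"        # placeholder path baked into the image / volume

def cache_model():
    """Run once, e.g. from the Dockerfile, to save the model into MODEL_DIR."""
    # Imported inside the function so this module can be inspected without
    # transformers installed.
    from transformers import AutoModel, AutoTokenizer
    AutoTokenizer.from_pretrained(MODEL_NAME).save_pretrained(MODEL_DIR)
    AutoModel.from_pretrained(MODEL_NAME).save_pretrained(MODEL_DIR)

def load_model():
    """Run at container start: loads from the local path, no network needed."""
    from transformers import AutoModel, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
    model = AutoModel.from_pretrained(MODEL_DIR)
    return tokenizer, model
```

Caching at build time keeps serverless cold starts from paying the download cost on every worker spin-up.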
Adam? (OP) · 8mo ago
Hi, thanks for the help. Is there a template for this, or should I write the handler myself?
nerdylive · 8mo ago
Yes there is, use the vLLM template.
Solution
nerdylive · 8mo ago
If you need help configuring that template, there's a setup menu after you click on it.
Adam? (OP) · 8mo ago
Yeah, I have tried this.
nerdylive · 8mo ago
Oh wait, BERT isn't compatible with that, is it?
Adam? (OP) · 8mo ago
But it doesn't support BERT models, exactly.
nerdylive · 8mo ago
Hmm, then you'll have to write a custom handler for BERT first. Using the transformers pipeline works too.
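A sketch of what that custom handler could look like. `runpod.serverless.start({"handler": ...})` is the actual RunPod SDK entry point and `pipeline(...)` is the real transformers API; the task (`text-classification`), the model path (`/models/bert`), and the `parse_input` helper are assumptions for illustration.

```python
# Custom RunPod serverless handler sketch for a BERT model, using the
# transformers pipeline API instead of the vLLM template.

_PIPELINE = None  # cached across invocations of the same warm worker

def get_pipeline():
    """Lazily build the pipeline once per worker from a local model path."""
    global _PIPELINE
    if _PIPELINE is None:
        from transformers import pipeline
        _PIPELINE = pipeline("text-classification", model="/models/bert")  # placeholder path
    return _PIPELINE

def parse_input(event):
    """Pull the text payload out of a RunPod job event: {"input": {...}}."""
    return event.get("input", {}).get("text", "")

def handler(event):
    """RunPod calls this once per job; return value becomes the job output."""
    return get_pipeline()(parse_input(event))

# To run as a worker, register the handler with the RunPod SDK:
#   import runpod
#   runpod.serverless.start({"handler": handler})
```

Caching the pipeline in a module-level variable means the model loads once per warm worker rather than once per request.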