Want to split model files out of my Docker image, but inference slows down significantly when loading them from storage
I want to split the model files out of my Docker image. The image is getting bloated, so I tried moving the model files to external storage and loading them at runtime, but inference time increased a lot. Is there a way around this?
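For context, a minimal sketch of the kind of setup I mean (the paths, image name, and flags here are hypothetical, just to illustrate the pattern): the model files live on external storage and are mounted into the container at runtime instead of being baked into the image.

```shell
# Hypothetical sketch: model files kept on external storage and
# bind-mounted read-only at runtime (all names/paths are assumptions).
docker run --rm \
  -v /mnt/storage/models:/models:ro \
  my-inference-image \
  --model-path /models/model.bin
```

With a layout like this, every model load goes through the mounted storage rather than the image's local filesystem, which is where the slowdown shows up.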