How can we harness GPU parallelism using Mojo?
I was wondering how I could use Mojo natively, or the MAX Engine, to accelerate training of models built with TensorFlow or PyTorch. What are some good resources I can reference if I'm interested in this?
Good question! As of today (v24.2), MAX supports serving PyTorch, TensorFlow, and ONNX models. Please check out https://docs.staging.modular.com/engine/get-started/. GPU support is on the roadmap.
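To make that concrete, here is a minimal sketch of the serving workflow: export a trained PyTorch model to ONNX (one of the formats MAX Engine accepts) and run inference through MAX. The `torch.onnx.export` part is standard PyTorch; the `max.engine` names (`InferenceSession`, `load`, `execute`) are assumptions based on the get-started guide linked above, so check the docs for the exact API in your MAX version.

```python
# Sketch: export a tiny PyTorch model to ONNX, then serve it with MAX Engine.
import torch
import torch.nn as nn

# A toy model standing in for whatever you trained in PyTorch.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyNet().eval()
dummy_input = torch.randn(1, 4)

# Export to ONNX, one of the formats MAX Engine can load.
torch.onnx.export(model, dummy_input, "tiny_net.onnx",
                  input_names=["input"], output_names=["output"])

# Load and execute with MAX Engine.
# NOTE: these API names are assumptions; consult the get-started guide above.
from max import engine

session = engine.InferenceSession()
max_model = session.load("tiny_net.onnx")
outputs = max_model.execute(input=dummy_input.numpy())
print(outputs)
```

Note that this covers inference/serving only; since GPU support and training acceleration are still on the roadmap, training itself stays in PyTorch or TensorFlow for now.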