Google TPUs with Mojo language?
Sup,
I'm building a resource-intensive AI (at least 3-4x more resource-intensive than LLMs), so I'm looking for a stack that gives me fast computation cheaply (while keeping development relatively easy).
Would using JAX and TPUs work with Mojo? As far as I can tell, TPUs are the absolute fastest option for deep learning. (Or are they? Would the MAX inference engine be faster?)
Thanks everyone!
7 Replies
TPUs are rent-only, so I doubt any DL framework other than Google's supports them. Can't trust Google.
@MRiabov,
That is a super good question, and it probably has a positive answer;
see https://www.modular.com/blog/max-is-here-what-does-that-mean-for-mojo
Wait for an answer from an official Modular person;
I'm pretty sure they'll help you achieve your goal in a very performant manner.
(ping to @Jack Clayton)
Mojo was created to solve these deployment challenges;
MAX is probably state of the art when it comes to ease of use and performance.
Hey, yeah, TPU support will come at some point after NVIDIA GPUs. MLIR, the foundation of Mojo, was built at Google by various Modular staff and others to support more exotic hardware like TPUs. From what I understand, JAX is the best option for TPUs right now.
And why would it even matter, if Mojo is a standalone compiler that can call Python, which calls JAX, which is C++?
I mean, if JAX is a Python wrapper over C++ code, and we call it from Mojo through the Python interop, why would we care that Mojo itself lacks TPU support when JAX already has it through OpenXLA?
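Something like this, I imagine: a minimal sketch of calling JAX from Mojo through its Python interop. It assumes a Python environment with jax installed (jax[tpu] on a Cloud TPU VM), and it's an illustration, not an officially supported path:

```mojo
from python import Python

fn main() raises:
    # Import JAX through Mojo's Python interop; this assumes the
    # active Python environment has jax installed.
    var jax = Python.import_module("jax")
    var jnp = Python.import_module("jax.numpy")

    # On a Cloud TPU VM this lists TpuDevice entries; otherwise it
    # falls back to whatever CPU/GPU devices JAX can see.
    print(jax.devices())

    # jax.numpy ops are compiled and dispatched through XLA (OpenXLA)
    # to the available backend, so the Mojo caller doesn't need its
    # own TPU support.
    var x = jnp.arange(1024)
    print(jnp.dot(x, x))
```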
That's what I'm saying: use JAX at the moment if you're targeting TPUs and need to get something done now. TPU support for Mojo will come later.