MAX ⚡ with quantization and macOS support thread

Launch blog: https://www.modular.com/blog/max-24-4-introducing-quantization-apis-and-max-on-macos. Post any issues you're having with the new release or with macOS support here. You can get started by installing MAX here: https://modul.ar/install-max and by running the new llama3 quantization example here: https://modul.ar/llama3
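For reference, the 24.4-era install boiled down to the Modular CLI. This is a minimal sketch of the steps from the docs at the time; the installer has changed since, so treat modul.ar/install-max as the source of truth:

```sh
# Install the Modular CLI (per the 24.4-era install guide).
curl -s https://get.modular.com | sh -

# Install the MAX SDK (this release bundles the Mojo toolchain).
modular install max
```

Depending on your CLI version you may also need to authenticate and add the installed binaries to your PATH; the install guide linked above walks through both.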
Martin Dudek · 4mo ago
Great, MAX finally arrived on Mac :mojo: Installation went without issues on a MacBook Pro M2, Sonoma 14.5, following the guide (conda, zsh).
Jack Clayton · 4mo ago
Great, thanks for the report @Martin Dudek!
MRiabov · 4mo ago
If you're making changes to MAX, I figure you might also want to convince Google Cloud to support your framework on TPUs once it's a little more polished. Since you're some 5x faster, it's a good deal for both sides: you get great marketing, and they get a big speed boost for their hardware.
Jack Clayton · 4mo ago
The highest priority right now is NVIDIA GPUs. I don't know what will be prioritized next; it could be TPUs, AMD GPUs, Metal for Apple silicon, etc. There are a lot of options!
NL · 4mo ago
MAX installation on a Mac M2 and running llama3 (q6_k and q4_k) was a breeze! Thank you, Modular team! (P.S.: can't wait for MAX inference to extend to all cores on Apple silicon, not just the performance cores. 🏎️)
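For anyone else trying the same thing, the invocation looks roughly like this. It is a sketch based on the modularml/max repo's graph-api examples at the time; flag names such as --quantization-encoding and --prompt may have changed, so check modul.ar/llama3 for the exact command:

```sh
# Clone the examples repo and move into the Graph API pipelines.
git clone https://github.com/modularml/max.git
cd max/examples/graph-api

# Run the llama3 pipeline with a k-quant encoding (q4_k shown; q6_k works the same way).
mojo run_pipeline.🔥 llama3 \
    --quantization-encoding q4_k \
    --prompt "why is the sky blue?"
```

Lower-bit encodings like q4_k trade a little accuracy for a much smaller memory footprint, which is what makes the model comfortable to run on a laptop.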