MAX tutorials community feedback and questions
Leave any feedback, questions, or problems you're running into here for the new MAX tutorials page: https://docs.modular.com/max/tutorials
The first one seems to be the most involved. I'm looking for a "Hello World!" and instead I get Kubernetes, Helm, and "these four utilities" installed. I know that the ADD tutorial requires nightly/max, but in the long term it feels like the list should be rearranged to:
- Getting started with MAX Graph
- Run an ONNX model with Python
- Create a custom op for an ONNX model
- Deploy a model with AWS CloudFormation
- Deploy a model with Kubernetes and Helm.
It's not just about complexity. It's that the first three are all Modular: "you can do it all with our stuff alone." But if you want to deploy to AWS, we've got that too. For reference, a rough sketch of the ONNX-with-Python step is below.
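For what it's worth, part of why I'd put "Run an ONNX model with Python" so early is that the happy path is tiny. Something roughly like this (just a sketch from memory; the model file, input name, and shape are placeholders, and the exact max.engine calls may differ from the current release):

```python
# Rough sketch: load an ONNX model with the MAX Engine Python API and run it once.
# "resnet50.onnx" and the "input" tensor name/shape are placeholders for whatever
# model you actually have -- check the MAX docs for the exact API in your release.
import numpy as np
from max import engine

session = engine.InferenceSession()
model = session.load("resnet50.onnx")  # compile and load the ONNX model

dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)
outputs = model.execute(input=dummy)   # keyword must match the model's input name
print(outputs)
```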
I second @Darin Simmons's feedback; the order within the tutorials seems mixed up.
One thing I feel is missing in the MAX docs, for people coming in with some grasp of what is possible with Mojo, is an initial explanation of what MAX offers that can't be achieved equally well with pure Mojo. After all, there is Basalt in pure Mojo, and there is llama2.mojo. It of course becomes clearer once you learn about MAX, but I think a few words about this would help motivate many people to learn it. I assume many come to MAX after having dived into Mojo for a while. Of course, I might just have missed this part in the docs...
Hey @Darin Simmons and @Martin Dudek! Thanks for sharing your thoughts! I'm one of the tech writers who helped write some of those tutorials. I agree--I think we could do more to help organize the tutorials better. (I know I was probably a little more focused on writing the content than organizing it!) I'll have a chat with folks and see what we can do!
Please keep the feedback coming! We really appreciate it!
I’d want to start with “get a functioning docker container for a production deployment” then move on to k8s. We should be encouraging people to deploy in docker/podman anyway even if they aren’t using k8s.
I agree with part of your statement regarding a functioning Docker container. There's a working Mojo Docker container, even if it lives in /examples/docker. There is nothing similar in modular/max.
@aikidave It's waaaaay slick to have, but to DarkMatter's point: what if the reader wants to deploy to Docker but not K8s (or AWS's version of K8s)? I'm primarily thinking about the university types that typically have on-prem campus compute and fixed grant budgets, or simply the researcher who doesn't have the time to faff about and just wants to download a container on their local computer and try it out. Feels like a more intermediate tutorial, possibly.
Lots of companies also can’t be bothered with k8s and either deploy directly on a host or use something like AWS ECS.
My research cluster actually has OpenStack on it, which is more common because it’s better than normal k8s at tolerating the various abstraction levels various department members want to work at (cloud native containers vs handing out whole systems).
However, making an OpenStack tutorial is a fool’s errand since everyone who deploys it customizes it heavily.
Again, I can't tell you how much I appreciate this feedback. I'm curious: I've been working on a tutorial that I think might address what you're talking about. At least, it'll be a step in the right direction!
FWIW, I'm pretty new to all things AI, so I appreciate the insights you're sharing!