Brad Larson
Modular
Created by staycia930 on 7/2/2024 in #questions
I do not know why the output is like this!
In the first, you've provided a type for the `name` argument (`String`), whereas in the second you've left `name` untyped. In the latter case, Mojo will default to `object` for the argument in a `def` function. This causes the slightly different printing behavior between the two.
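For illustration, a minimal sketch of the two cases (the function names here are made up):

```mojo
def greet_typed(name: String):
    # `name` is a concrete String, so it prints as plain text.
    print(name)


def greet_untyped(name):
    # With no annotation in a `def`, `name` defaults to `object`,
    # which is what produces the different-looking output.
    print(name)


def main():
    greet_typed("Mojo")    # e.g. prints: Mojo
    greet_untyped("Mojo")  # e.g. prints with object formatting: 'Mojo'
```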
Modular
Created by sa-code on 6/11/2024 in #questions
Importing package in test
There is an open feature request to at least address the need for the `-I .` import: https://github.com/modularml/mojo/issues/2916, although the LSP issues wouldn't be covered by that.
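In the meantime, a sketch of the workaround, assuming a hypothetical package `mypkg` at the repo root and a `tests/` directory:

```mojo
# tests/test_ops.mojo
# Run from the project root so the local package resolves:
#   mojo test -I . tests/
from testing import assert_equal

from mypkg.ops import double  # hypothetical function under test


def test_double():
    assert_equal(double(2), 4)
```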
Modular
Created by noahlt on 6/8/2024 in #questions
list of pre-implemented models?
There are a few different ways to define a model for inference via MAX: in TorchScript, in ONNX, or constructed in Mojo via the Graph API. We show several examples of TorchScript and ONNX models here: https://github.com/modularml/max/tree/main/examples/inference, which currently include BERT, Mistral 7B, ResNet-50, Stable Diffusion, and YOLOv8.
New in 24.4 are end-to-end pipelines that we've defined in Mojo and that use the MAX Graph API to construct the computational graph: https://github.com/modularml/max/tree/main/examples/graph-api/pipelines. We're referring to them as pipelines because the idea is that you can define all pre- and post-processing in Mojo as well (such as the tokenizer used in Llama 3) and easily incorporate them into a larger Mojo application. We've seeded this group with a few representative pipelines, and Llama 3 is the lead example among those.
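To give a feel for the Graph API side, here is a rough sketch of building and running a tiny graph, loosely based on the 24.4-era quickstart; names and signatures are illustrative and may shift between releases:

```mojo
from max.engine import InferenceSession
from max.graph import Graph, TensorType, ops
from tensor import Tensor, TensorShape


def main():
    # Build a small graph: matmul -> relu -> softmax.
    var graph = Graph(TensorType(DType.float32, 2, 6))
    var weights = graph.constant(
        Tensor[DType.float32](TensorShape(6, 1), 0.15)
    )
    var relu = ops.relu(graph[0] @ weights)
    graph.output(ops.softmax(relu))
    graph.verify()

    # Compile the graph and execute it with the MAX engine.
    var session = InferenceSession()
    var model = session.load(graph)
    var input = Tensor[DType.float32](TensorShape(2, 6), 0.5)
    var results = model.execute("input0", input^)
    print(results.get[DType.float32]("output0"))
```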
We're extremely interested in having the community build upon these, as well as hearing what you'd like to see as additional examples, so please let us know how we can make this a better resource. We plan to regularly expand these examples.