Modular
•Created by Kish on 1/17/2024 in #questions
use of unknown declaration 'Python'
I think you just misspelled it; it's supposed to be:
let py = Python.import_module("builtins")
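For context, a minimal runnable sketch; the from python import Python line is what brings the Python declaration into scope, so the "unknown declaration" error usually points at a missing import:

from python import Python

fn main() raises:
    # Import Python's builtins module through the interop layer.
    let py = Python.import_module("builtins")
    _ = py.print("hello from Python builtins")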
7 replies
Modular
•Created by Dania on 12/25/2023 in #questions
Why doesn't __bool__ allow explicit conversion?
Ahhh okay, I think it's because traits aren't complete yet and there's no Boolable trait. They talked about doing implicit conformance for traits in the newest newsletter, so my guess is that in the next major release bool(a) will work.
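As a rough illustration (assuming a simple user-defined struct, since there's no Boolable trait to conform to yet), the dunder can still be called directly:

@value
struct Flag:
    var on: Bool

    fn __bool__(self) -> Bool:
        return self.on

fn main():
    var a = Flag(True)
    # Without a Boolable trait / implicit conformance, bool(a) isn't available,
    # but calling the dunder directly works:
    if a.__bool__():
        print("truthy")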
5 replies
Modular
•Created by Dania on 12/25/2023 in #questions
Why doesn't __bool__ allow explicit conversion?
Try doing bool(a) instead, with a lowercase b. Bool should be a SIMD type, I'm pretty sure.
5 replies
Modular
•Created by chebu_pizza on 12/13/2023 in #questions
Is it possible to push mojo code on GitHub?
I am not 100% sure about this point. It looks like they don't include it in the file count for merging, but they may still recognize it for syntax highlighting. Someone at Modular would be a better person to answer this. Linguist had a release yesterday, so once Mojo reaches 2k files it will unfortunately still be a few more months until they cut another release and merge the Mojo grammar files into it.
16 replies
Modular
•Created by Xcur on 12/8/2023 in #questions
Can i have a list of structs with different parameters?
I've been attempting to solve this, but I'm having difficulty abstracting it away into a wrapper struct using traits. Here are the wrapper struct and the sample code I added to try to run it:
Can anyone help me solve this? I want to have a DynamicVector of Name structs with different size parameters.
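The wrapper-struct snippet itself isn't reproduced above; as a rough sketch of one possible workaround (type-erasing the compile-time parameter into runtime fields, using a hypothetical Name[size] stand-in and assuming DynamicVector lives in collections.vector):

from collections.vector import DynamicVector

# Hypothetical stand-in for the parametric struct: Name[4] and Name[8] are
# distinct types, so one DynamicVector can't hold both of them directly.
@value
struct Name[size: Int]:
    var value: Int

# One workaround: a non-parametric wrapper that erases the compile-time
# parameter by copying it into runtime fields.
@value
struct AnyName(CollectionElement):
    var size: Int
    var value: Int

fn main():
    var names = DynamicVector[AnyName]()
    names.push_back(AnyName(4, Name[4](42).value))
    names.push_back(AnyName(8, Name[8](7).value))
    for i in range(len(names)):
        print(names[i].size, names[i].value)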
2 replies
Modular
•Created by Hasan Yousef on 11/18/2023 in #questions
Running code at GPU
Seems like it will most likely be supported in "Q1 2024": https://discord.com/channels/1087530497313357884/1098713601386233997/1181299299976491059
14 replies
Modular
•Created by Kyle Hassold on 10/28/2023 in #questions
Fastest Matrix Multiplication
Great, I have a much better understanding of the AI engine now, thanks again! Looking forward to hearing more about it from the ModCon videos.
20 replies
Modular
•Created by Kyle Hassold on 10/28/2023 in #questions
Fastest Matrix Multiplication
That is very cool! Thank you so much for your insight on the AI engine; I just want to make sure I understand this all correctly. So functions written in Mojo parse static graph IRs, and instead of running the operators with kernels written in C++ by onnxruntime or libtorch, kernels written in Mojo are used. This Mojo code is compiled into MLIR, and that level is where a lot of the magic of the AI engine happens, since that is where it can perform various optimizations like automatic operator fusion. Afterwards the code is compiled into machine code. Is this correct?
On a side note, it would be really awesome if in the future the AI engine provided a framework for users to write their own custom operators in Mojo. That would then allow them to take advantage of AI engine optimizations like automatic fusion.
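For intuition only, here is a rough hand-written sketch of what a fused matmul + relu kernel could look like in plain Mojo; the pointer-based signature is just an illustration, not the engine's actual operator API:

from memory.unsafe import DTypePointer

fn matmul_relu(
    c: DTypePointer[DType.float32],
    a: DTypePointer[DType.float32],
    b: DTypePointer[DType.float32],
    m: Int, n: Int, k: Int,
):
    for i in range(m):
        for j in range(n):
            var acc: Float32 = 0
            for p in range(k):
                acc += a.load(i * k + p) * b.load(p * n + j)
            # Apply the ReLU while the result is still in a register, instead
            # of running a separate relu kernel over the whole output buffer.
            if acc < 0:
                acc = 0
            c.store(i * n + j, acc)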
20 replies
Modular
•Created by Kyle Hassold on 10/28/2023 in #questions
Fastest Matrix Multiplication
Wow, that is great to hear! I know the AI engine is the product, but I'm glad Mojo features aren't being restricted, so someone can write their own operators with similar performance. I asked a similar question in a different channel, but to achieve this operator fusion, is the AI engine parsing textual static graph IRs created by, say, ONNX or TorchScript? Then fusing operators it sees, like matmul and relu, and instead of using libtorch and onnxruntime to run these operators, it would use a matmul_relu function written in Mojo?
20 replies
Modular
•Created by Kyle Hassold on 10/28/2023 in #questions
Fastest Matrix Multiplication
Does this mean that, using the current Mojo version, someone could write this fast matrix multiplication? Or are there additional Mojo decorators or features that would allow this but can only be used internally?
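As a point of reference, the vectorization part can be written with public Mojo features alone; a rough sketch (assuming DTypePointer and its simd_load/simd_store methods from the standard library):

from memory.unsafe import DTypePointer
from sys.info import simdwidthof

alias nelts = simdwidthof[DType.float32]()

fn matmul_simd(
    c: DTypePointer[DType.float32],
    a: DTypePointer[DType.float32],
    b: DTypePointer[DType.float32],
    m: Int, n: Int, k: Int,
):
    for i in range(m):
        for p in range(k):
            var a_ip = a.load(i * k + p)
            var j = 0
            # Process nelts output columns per iteration with SIMD loads/stores.
            while j + nelts <= n:
                var cv = c.simd_load[nelts](i * n + j)
                var bv = b.simd_load[nelts](p * n + j)
                c.simd_store[nelts](i * n + j, cv + a_ip * bv)
                j += nelts
            # Scalar tail for column counts not divisible by nelts.
            while j < n:
                c.store(i * n + j, c.load(i * n + j) + a_ip * b.load(p * n + j))
                j += 1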
20 replies