Seeking Clarification on Current and Future Tensor Library Support in Mojo
I wonder if someone can clarify the current state and future direction of a Tensor library for Mojo.
I understand that Tensor won't stay in the standard library, and I think I understand the rationale behind that. There is also NuMojo, which looks very promising; it is currently based on Tensor but lists "native array types" as a long-term goal.
Unclear about the situation, I implemented my own vectorized yet simple Vector and Matrix structs for my KAN experiments. They work, and my KAN implementation outperforms the numpy-based Python implementation I ported to Mojo. However, I just found out they are extremely slow compared to torch.matmul and friends.
I would greatly appreciate it if someone could clarify what is possible right now for Tensor-based applications and where Mojo is headed in that regard (let's say within this year).
To be clear, I'm not demanding anything, of course; just looking for some clarification on what is possible right now and where we are going as a community. Feedback on how others handle the current situation would also be great.
Thx!
@Martin Dudek I am not sure about the future of the Tensor library once Modular open-sources it, so there's that. Since I am part of the NuMojo team, I will try my best to give you a reply based on our current work on NuMojo and the direction it is heading.
1) As you pointed out, the main branch of NuMojo is based on Tensors, but we are currently porting all the existing functionality, and much more, from numpy to our own Array type. Something like numojo.array() and numojo.ndarray() (we try to stick with numpy conventions as much as possible to make switching easier) will be made available very soon; a small numpy sketch of the conventions we mirror follows this list.
2) Although we will release our own array type, math functions, and array creation and manipulation modules soon (which should be enough for a lot of basic functionality), it will take some time to build a full linear algebra library etc., as we want to make sure our building-block Array has enough basic functionality before moving on.
3) Recently, another community member joined our "Mojo Numerics & Algorithms Group" and is contributing to MojoSci, which will include a lot of higher-order functionality like SciPy. You can find it here: https://github.com/Mojo-Numerics-and-Algorithms-group/MojoSci.
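For anyone unfamiliar with the conventions mentioned in point 1, this is roughly the numpy style those names mirror (plain numpy in Python; the eventual NuMojo signatures may well differ):

```python
# The numpy conventions NuMojo aims to mirror (plain numpy shown here;
# NuMojo's eventual signatures may differ).
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])  # build an array from nested lists
z = np.zeros((2, 2))                    # an array-creation routine
b = a.reshape(4)                        # an array-manipulation routine
print(a @ a)                            # linear algebra: matrix multiply
```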
Future looks bright for Mojo and NuMojo :mojo:. Please feel free to ask any questions here or in DM. Cheers!
This sounds fantastic and will surely be a big contribution to the Mojo community, thank you so much. :mojo:
I don't know if this comparison makes sense at all, so I want to ask you: I compared my simple matmul implementation with a PyTorch-based one, and the PyTorch one is nearly 100 times faster, which came as a bit of a shock. Have you done any comparison with PyTorch or equally performant libraries? Is this a realistic comparison at all? I have no clue about compiler/performance/optimization possibilities...
thanks a lot
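For context, the kind of comparison described above might look like this (a hypothetical sketch, not the actual code; a large gap is expected, since torch.matmul dispatches to heavily tuned BLAS kernels):

```python
# Hypothetical sketch of the comparison described above (not the actual
# code): a naive Python-level matmul timed against torch.matmul.
import time
import torch

N = 256
a = torch.rand(N, N)
b = torch.rand(N, N)

def naive_matmul(a, b):
    # Triple-loop matmul collapsed to two loops via row broadcasting.
    n = a.shape[0]
    c = torch.zeros(n, n)
    for i in range(n):
        for k in range(n):
            c[i] += a[i, k] * b[k]  # scale row k of b, accumulate into row i
    return c

t0 = time.perf_counter()
naive_matmul(a, b)
t_naive = time.perf_counter() - t0

t0 = time.perf_counter()
torch.matmul(a, b)
t_torch = time.perf_counter() - t0

print(f"naive: {t_naive:.4f}s, torch: {t_torch:.6f}s")
```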
@Martin Dudek Sorry for the delay in replying.
For anyone looking at this: I am still learning Mojo, so if any of the methods used for benchmarking is not right, please kindly let me know; I would be glad to learn.
1) If I remember correctly, using time() to benchmark PyTorch calculations is slightly inaccurate, so it's better to use the PyTorch benchmark utils (you can read more about them here: https://pytorch.org/tutorials/recipes/recipes/benchmark.html); a short sketch follows this list.
2) Currently the Mojo benchmark module keeps crashing on our matmul (it works randomly, but it's like playing Russian roulette and is not reliable right now), so we aren't able to use it. Instead I used the time() module in Mojo (less accurate than a proper benchmark, but let's run with it for now and average over runs). You can find our benchmarks for other math functions here: https://github.com/Mojo-Numerics-and-Algorithms-group/NuMojo-Examples-and-Benchmarks/blob/main/bench_vect.txt.
3) These benchmarks are usually difficult to interpret without knowing what's going on inside the black box, so please take all benchmarks with a grain of NaCl. All of them were run on my MacBook M2 with 16 GB RAM.
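Here is a minimal example of the torch.utils.benchmark approach from point 1 (illustrative only; the size and dtype are assumptions):

```python
# Minimal torch.utils.benchmark usage (size and dtype are assumptions).
import torch
import torch.utils.benchmark as benchmark

N = 512
a = torch.rand(N, N, dtype=torch.float32)
b = torch.rand(N, N, dtype=torch.float32)

t = benchmark.Timer(
    stmt="torch.matmul(a, b)",
    globals={"torch": torch, "a": a, "b": b},
)
print(t.timeit(100))  # runs the statement 100 times, reports timing stats
```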
Getting that stuff out of the way, let's get to the benchmarks. I am performing the matmul operation on Tensors/arrays of size NxN, as you mentioned. The x axis of all the plots corresponds to the value of N, and the y axis corresponds to the mean time. I have shown Torch("cpu"), Torch("mps"), and NuMojo (technically CPU as well, since Mojo doesn't have GPU support yet). I will mainly compare NuMojo against Torch("cpu") for an apples-to-apples comparison of CPU performance, and give Torch("mps") only as a reference. Please refer to the plots attached a few messages below.
1) In the float16 plot, the NuMojo implementation is on par with, and ever so slightly faster than, Torch("cpu").
2) In float32 and float64, Torch performs better than the current NuMojo implementation, especially in float32, since I think Torch is particularly well optimized for it.
There are still a lot of compile-time optimizations and tricks we can use in Mojo to reduce overhead; we aim to take advantage of these and optimize all our math implementations step by step. As of now, NuMojo in its early infancy is on par with or ever so slightly faster than Torch("cpu") in some cases, and in the others it is only roughly an order of magnitude slower. With Mojo GPU support in the future, better compile-time optimizations, and some tricks, we can hope to close in on these differences.
NOTE:
1) There are small fluctuations in the mean time on every run, so I have plotted these multiple times; irrespective of the fluctuations, the behaviour stays constant.
2) There are parameters in the matmul that can affect the time, such as the unroll factor, parallelize size, etc. I fixed a certain set of values and went along; I haven't tried optimizing them. These plots could scale differently on other systems depending on those parameters.
3) I honestly don't know if torch.float and DType.float have the same representation, so it might not be an exact apples-to-apples comparison. If anyone knows more details about this, please do share.
Update: There was an error in the above plots, so I have updated this text to reflect the correct plots attached below.
The code I used for the PyTorch benchmark and for NuMojo follows. Hope that helps! Cheers fellow mojicians!
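A sketch of what such a PyTorch benchmark loop might look like (hypothetical, not the original snippet; the sizes, repeat count, and the time_torch name are assumptions, and the NuMojo side would use Mojo's time() as described in point 2):

```python
# Hypothetical PyTorch matmul benchmark over several sizes (not the
# original snippet). Collects the mean time per N so it can later be
# plotted against NuMojo's numbers.
import torch
import torch.utils.benchmark as benchmark

sizes = [64, 128, 256, 512, 1024]  # values of N (assumed)
time_torch = []

for n in sizes:
    a = torch.rand(n, n, dtype=torch.float32)
    b = torch.rand(n, n, dtype=torch.float32)
    t = benchmark.Timer(
        stmt="torch.matmul(a, b)",
        globals={"torch": torch, "a": a, "b": b},
    )
    time_torch.append(t.timeit(50).mean)  # mean seconds per call

print(dict(zip(sizes, time_torch)))
```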
Wow, thank you so much for all your efforts; this all looks very promising. I hope others see these results too.
Right now numojo.array is not in the repo yet (correct me if I am wrong); any plans for when you might make it available?
I was honestly getting a bit unsure whether Mojo is the right language for me right now for the projects I want to implement (like KANs), but seeing this makes me look at Mojo in a more positive light again. Thanks a lot for that. :mojo:
@Martin Dudek you are right, we haven't released the array (NDArray) yet, as we are polishing it and running tests to clear out edge cases. We will be releasing it soon (pretty exciting!) along with many other functionalities, and then get some community feedback. Looking forward to seeing Mojo and NuMojo evolve!
@Martin Dudek
Update: As expected (I fell for my own trap xD, gotta be careful when benchmarking), I made a mistake in the parallelization which led to the reduction in time for larger sizes. I am adding the new plots; please find them here. I will update the text above in accordance with the new plots.
thanks a lot for the update, really looking forward to learning how you guys implemented these functions to be so close to torch.cpu :mojo:
Thank you @Martin Dudek. We are still mostly using the basic routines available in Mojo, which I guess can be optimized further, so Mojo is doing the heavy lifting. Slowly but surely we will close in on these differences for matmul, especially for float32 and float64 operations, and hopefully be faster someday.
Other element-wise operations such as add, sub, mul, exp, etc. are already on par with, or sometimes faster than, the Torch operations, as you can see in the attached plot (I was careful not to make a benchmarking mistake again xD), where I measured torch.mul vs numojo.mul for different NxN sizes of Arrays/Tensors with the same specifications. The solid lines show the float16 comparison, dashed float32, and dotted float64, for Torch("cpu") and NuMojo. All other math operations show similar trends too.
Note for anyone looking at the following graph: I forgot to change the plot title; it should be mul and not matmul.
I am sure the community, with its Mojo gurus, will be able to give you valuable feedback once it is published. What you are implementing is just so central for many applications.
Thanks for keeping me updated here, but please don't take too much time for that. I am hooked anyway.
So, slightly related, and I can't give more info, but specific to matmul there is some news coming next month that will definitely interest you.
Expect more info July 1.
mind sharing the code for how you generated this graph?
Hi @benny, for benchmarking it's the same as what I shared above, except for the matplotlib code. Do you want the plotting part of the code?
The plotting part and where are you getting the matmul function?
@benny, the following is the plotting code, where time_mojo is generated from the Mojo benchmark code above and time_torch comes from torch running in the same .py file.
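A minimal sketch of what such a plot could look like (dummy numbers stand in for the measured means; time_torch and time_mojo would come from the benchmark runs above):

```python
# Hypothetical plotting sketch (dummy numbers stand in for measurements).
# time_torch / time_mojo would come from the benchmark runs above.
import matplotlib.pyplot as plt

sizes = [64, 128, 256, 512, 1024]             # values of N (assumed)
time_torch = [1e-4, 8e-4, 6e-3, 5e-2, 4e-1]   # placeholder means (s)
time_mojo = [1.2e-4, 9e-4, 7e-3, 6e-2, 5e-1]  # placeholder means (s)

plt.plot(sizes, time_torch, label='Torch ("cpu")')
plt.plot(sizes, time_mojo, label="NuMojo")
plt.xlabel("N (matrix size N x N)")
plt.ylabel("mean time (s)")
plt.legend()
plt.show()
```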
As for the matmul, we are using almost the same implementation as the Modular one (https://docs.modular.com/mojo/notebooks/Matmul), except for changes in the store[] and load[] methods, where we use a single flat index instead of the two indices used in the Modular version, to reduce some overhead.
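To illustrate the single-index idea in plain Python (a toy sketch, not the NuMojo implementation): a row-major n x n matrix stored as a flat buffer puts element (i, j) at offset i * n + j, so each access is one offset computation instead of a two-level [i][j] lookup.

```python
# Toy illustration of single-index (flat, row-major) addressing.
# Not the NuMojo code: element (i, j) lives at offset i * n + j.
def matmul_flat(a, b, n):
    c = [0.0] * (n * n)
    for i in range(n):
        for k in range(n):
            aik = a[i * n + k]
            for j in range(n):
                c[i * n + j] += aik * b[k * n + j]
    return c

# Usage: a 2x2 identity times a 2x2 matrix returns the same matrix.
m = [1.0, 0.0,
     0.0, 1.0]
x = [1.0, 2.0,
     3.0, 4.0]
print(matmul_flat(m, x, 2))  # [1.0, 2.0, 3.0, 4.0]
```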
Ah, now I see the confusion! I'm so sorry @benny. The above graph is for torch.mul and not torch.matmul; I think the label wasn't changed when switching the graph from torch.matmul.
These are the plots corresponding to matmul.
Got it, I see.
thanks for clarifying!