Unable to use PyTorch from Mojo

I just wanted to try PyTorch, so I created a project using the nightly build and installed PyTorch with `magic add "pytorch"`. This is the code I tried:

```mojo
from python import Python

fn main() raises:
    torch = Python.import_module("torch")
    x = torch.tensor([1, 2, 3])
    print(x)
```

It resulted in the following error:

```
Unhandled exception caught during execution: libmkl_intel_lp64.so.2: cannot open shared object file: No such file or directory
mojo: error: execution exited with a non-zero result: 1
```

I can see `libmkl_intel_lp64.so.2` in the `.magic/envs/default/lib` folder, so I do not understand what the problem is.
Darkmatter (2mo ago)
Did you pull in Intel Python? PyTorch usually links to OpenBLAS.
Gennadiy (OP, 2mo ago)
How do I do that?
Darkmatter (2mo ago)
What does your pixi.toml/mojoproject.toml file look like?
Gennadiy (OP, 2mo ago)
This is the content of mojoproject.toml:

```toml
[project]
channels = ["conda-forge", "https://conda.modular.com/max-nightly"]
description = "Add a short description here"
name = "mojo_by_example-nightly"
platforms = ["linux-64"]
version = "0.1.0"

[tasks]

[dependencies]
max = ">=25.1.0.dev2024121305,<26"
numpy = ">=1.26.4,<2"
jax = ">=0.4.35,<0.5"
pydicom = ">=3.0.1,<4"
torchvision = ">=0.20.1,<0.21"
pytorch = ">=2.5.1,<3"
```
Darkmatter (2mo ago)
And you are inside the magic environment when compiling and running?
Gennadiy (OP, 2mo ago)
Yes, I made sure to double-check.
Darkmatter (2mo ago)
One second, let me run it locally.

OK, so I have oneAPI installed and it got picked up as my system BLAS. Let me try it in a container.

I can reproduce it, so it's a Mojo issue. @Gennadiy It's probably easiest to add the mkl package to get the dependency while we sort out what exactly PyTorch did, since it seems like this is their fault.
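For anyone hitting the same error, a minimal sketch of that workaround, assuming the MKL runtime is published as the `mkl` package on conda-forge (check `magic search mkl` if the name differs):

```sh
# Pull the MKL runtime into the project environment so that
# libmkl_intel_lp64.so.2 can be resolved at load time.
magic add mkl
```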
Gennadiy (OP, 2mo ago)
Thanks. I will read through that page.
Darkmatter (2mo ago)
As a note, MKL is quite a bit faster than OpenBLAS, which is what PyTorch and NumPy used before. So this is an upgrade, just one that appears to have had some consequences for the ecosystem.
Gennadiy (OP, 2mo ago)
Is there any performance-related issue given that I am on an AMD system?
Darkmatter (2mo ago)
What generation?
Gennadiy (OP, 2mo ago)
It's an AMD Ryzen 9 4900HS.
Darkmatter (2mo ago)
MKL's alleged "crippling" of AMD CPUs was actually about servers that didn't support AVX-512: MKL only has AVX-512 and scalar fallbacks, but Zen 4 and Zen 5 support AVX-512 and see the same performance as Intel CPUs. Yours (Zen 2) is probably going to be a bit worse off than with OpenBLAS, since IIRC it only has AVX2. But this is not going to be a gigantic difference for most workloads unless you are doing AI on the CPU.
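To check which BLAS backend the installed torch build actually links, one option is to print its build configuration from Mojo. A sketch, assuming `torch.__config__.show()` (which reports the BLAS/LAPACK info) is reachable through Mojo's Python interop:

```mojo
from python import Python

fn main() raises:
    # Print torch's build configuration; the output includes whether
    # the installed package was built against MKL or OpenBLAS.
    torch = Python.import_module("torch")
    print(torch.__config__.show())
```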
Gennadiy (OP, 2mo ago)
Ah, that's good to know. One last question: can I add the dependency for OpenBLAS using magic, and how do I do it? I am still finding my way around Mojo.
Darkmatter (2mo ago)
The basic syntax is in that docs page I sent. How exactly pixi, the underlying tool for magic, decided to translate that into TOML, I don't know; you may have to play around a bit.
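If it helps, the usual conda-forge mechanism for this is pinning the BLAS metapackage's build string to an OpenBLAS variant. A minimal sketch of what that might look like in mojoproject.toml, assuming pixi's `{ version, build }` dependency syntax carries over to magic:

```toml
[dependencies]
# Pin the generic BLAS interface package to an OpenBLAS build;
# the solver then pulls in openblas instead of mkl.
libblas = { version = "*", build = "*openblas" }
```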
Gennadiy (OP, 2mo ago)
OK, thanks. I will have a play around now.
Balderdash (2mo ago)
@Gennadiy I have the same issue... did you find out how to add mkl using magic? It seems I could solve it by setting LD_LIBRARY_PATH to .magic/envs/default/lib:

```sh
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:.magic/envs/default/lib
```
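One caveat with that workaround: the relative path only resolves when running from the project root. A sketch of an absolute form, assuming the default magic environment location:

```sh
# Anchor the library path to the project root so it also works from subdirectories.
export LD_LIBRARY_PATH="$PWD/.magic/envs/default/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
```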
Gennadiy (OP, 2mo ago)
That's great. I solved the PyTorch issue by adding the pytorch channel to the channels array in mojoproject.toml like so:

```toml
channels = ["pytorch", "conda-forge", "https://conda.modular.com/max"]
```
kirby (2mo ago)
I had the same issue importing PyTorch in Mojo, and it turned out the real cause was the difference in conda channels. When I searched for PyTorch on conda-forge, it showed a cpu_generic build with nomkl (which uses OpenBLAS). On the pytorch channel, I got a CUDA build that explicitly depends on MKL (libmkl_intel_lp64.so.2). Because of that mismatch, my environment ended up pulling in the MKL-based build even though I expected a CPU or OpenBLAS-only version. Once I made sure the channel pinned the correct PyTorch build, and either installed MKL or switched to the nomkl build, the missing-library error went away.

For anyone troubleshooting similar issues, you can list the PyTorch packages available in a channel with commands like:

```sh
magic search pytorch --channel pytorch
magic search pytorch --channel conda-forge
```

This shows the version, build string, and dependencies, which can help identify whether a package requires MKL, CUDA, or other specific libraries.
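Building on that, one way to make the choice explicit is a build-string constraint on pytorch itself. A sketch, assuming the `cpu_generic` build string kirby saw on conda-forge (confirm the exact string with the search commands above):

```toml
[dependencies]
# Hypothetical pin: request a conda-forge CPU build of pytorch so the
# solver does not select an MKL- or CUDA-based variant.
pytorch = { version = ">=2.5.1,<3", build = "cpu_generic*" }
```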
