Modular · 8mo ago
Sagi

math.atan function not working at compile time

Is it okay that the atan function doesn't work at compile time? When I try alias a = math.atan(Float32(3)) it fails with the error "unknown external call atanf". Is it supposed to be like that, and if so, is there a way around it? I am trying a Taylor series solution, but it seems that the math pow function also only works on ints at compile time.
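A minimal sketch of one possible workaround, assuming the compile-time interpreter can evaluate ordinary Mojo code but not external libm calls like atanf: a hand-written integer-exponent pow avoids math.pow entirely (pow_int is an illustrative name, not part of the math module).

fn pow_int(base: Float32, exp: Int) -> Float32:
    # Plain repeated multiplication in pure Mojo, so the compile-time
    # interpreter can evaluate it (no external libm call involved).
    var result = Float32(1)
    for _ in range(exp):
        result *= base
    return result

alias cubed = pow_int(Float32(3), 3)  # evaluated at compile time

def main():
    print(cubed)  # 27.0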
18 Replies
mad alex 1997 · 8mo ago
Seems that the entire math module doesn't like alias. Even wrapping the functions in @parameter functions doesn't sidestep this. What are you trying to accomplish that requires the fairly fast atan result (about 10^-10 seconds on my 4.5 GHz machine) to be computed at compile time rather than computed once at run time and reused? atanf is the underlying mechanism for atan, likely part of the internal MLIR dialect kgen. The math library isn't open source yet, but the long form of the error gives some insight into it.
Sagi (OP) · 8mo ago
It was just for convenience, but I also wondered why it didn't work. The pow and log2 functions were important, though: for an image processing task I needed SIMD, and since the size has to be a power of 2 and known at compile time, it's nice to be able to enter any frame size and have it rounded up to a power of 2. Thanks for answering!
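For the round-up-to-a-power-of-2 part, a hedged sketch that sidesteps math.pow and math.log2 entirely: a pure-Mojo loop can be evaluated in an alias (next_pow2 and FRAME_SIZE are illustrative names, not Sagi's actual code).

fn next_pow2(n: Int) -> Int:
    # Smallest power of two >= n, found by doubling; pure Mojo, so it
    # can be evaluated in an alias.
    var p = 1
    while p < n:
        p *= 2
    return p

alias FRAME_SIZE = next_pow2(1000)  # 1024, known at compile time

def main():
    print(FRAME_SIZE)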
sora · 8mo ago
"Even wrapping the functions in @parameter functions doesn't sidestep this."
Why would this help anyway?
"atanf is the underlying mechanism for atan, likely part of the internal MLIR dialect kgen."
Which atanf are you referring to here? Could you provide some references?
mad alex 1997 · 8mo ago
I thought there was a possibility that the function was being treated as intrinsically run-time only, and that wrapping it with @parameter would tell the compiler it was OK to run at compile time. Interestingly enough, it wasn't, and the same applies to the other math functions I tried. If you run the code to get the error, the error references kgen and the atanf function, which it seems to get through either FFI or from kgen. Weird, I'm trying to replicate what I saw earlier and now it is just working. I didn't change compiler versions or anything.
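For reference, a sketch of the kind of wrapper presumably meant here (try_wrap and inner are made-up names); the wrapped call runs fine at run time, but using it in an alias reportedly still hits the same "unknown external call atanf" error.

import math

fn try_wrap() -> Float32:
    @parameter
    fn inner() -> Float32:
        return math.atan(Float32(3))
    return inner()

# alias a = try_wrap()  # reportedly still fails: "unknown external call atanf"

def main():
    print(try_wrap())  # fine at run time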
sora · 8mo ago
atan is not implemented in Mojo then, it seems. That's weird.
mad alex 1997 · 8mo ago
OK, it working earlier was just REPL weirdness. Just running:

import math
alias a = math.atan(Float32(3))
def main():
    print(a)

causes the error.
mad alex 1997 · 8mo ago
I am really curious how the math library is actually implemented. So far I have been able to make a faster hypot function using fused multiply-add, and I am looking at the rest of the trig functions because, in Mojo at least, they all seem very slow.
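A naive sketch of the fused-multiply-add idea mentioned above (not mad alex's actual code, and it skips the overflow/underflow scaling a production hypot needs): math.fma(x, x, y * y) fuses one multiply and the add into a single rounded operation.

import math

fn fast_hypot(x: Float64, y: Float64) -> Float64:
    # sqrt(x*x + y*y) with the first multiply and the add fused; no range scaling.
    return math.sqrt(math.fma(x, x, y * y))

def main():
    print(fast_hypot(3.0, 4.0))  # ~5.0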
sora · 8mo ago
Regarding the slowness, do you mind filing an issue?
mad alex 1997 · 8mo ago
It's more of an "I feel like I might be able to write something faster." They are respectably fast (on the order of 10 ns per call) with decent SIMD scaling, but I would guess they use a type-independent approximation that is accurate for at least 64-bit floats, rather than type-dependent approximations such as Taylor expansions of n = 6, 10, and 14 for 16-, 32-, and 64-bit floats respectively. In other words, as a numerics library author focused on making things fast, there might be tradeoffs I am willing to make (essentially having exactly the level of precision a type allows) that Modular isn't. When they open the math library I will probably contribute and see if we can get more of the math in Mojo. Also, those n values are guesses; I haven't done the math to actually match precision.
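A rough sketch of what "type-dependent approximations" could look like in Mojo, picking an approximation order at compile time; the degrees are the guessed values from the message above, and poly_degree (parameterized on the float bit width for simplicity) is a made-up name.

fn poly_degree[bits: Int]() -> Int:
    # Compile-time choice of approximation order by float width.
    @parameter
    if bits <= 16:
        return 6
    elif bits <= 32:
        return 10
    else:
        return 14

def main():
    print(poly_degree[32]())  # 10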
sora · 8mo ago
Maybe your past experience says otherwise, but I think implementing numerical algorithms for these functions is tremendously difficult when accuracy has to be guaranteed over the whole domain, etc. I also don't think a Taylor expansion is the preferred way to implement atan; polynomial or rational approximations like minimax are more common.
mad alex 1997 · 8mo ago
I'll try a bunch of stuff. In the end it comes down to accuracy vs. number of operations, and Taylor series do have a bunch of operations, but I can already think of ways to condense them.
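As a concrete example of "condensing" the operations, a toy Maclaurin-series atan in Horner form (illustrative only: four terms, reasonable near 0, nowhere near IEEE accuracy, and atan_taylor is a made-up name).

fn atan_taylor(x: Float64) -> Float64:
    # Four-term series x - x^3/3 + x^5/5 - x^7/7, rewritten in Horner form
    # as x * (1 + t*(-1/3 + t*(1/5 + t*(-1/7)))) with t = x*x, so each
    # extra term costs only one multiply and one add.
    var t = x * x
    var p: Float64 = -1.0 / 7.0
    p = 1.0 / 5.0 + t * p
    p = -1.0 / 3.0 + t * p
    p = 1.0 + t * p
    return x * p

def main():
    print(atan_taylor(0.5))  # ~0.4635 (true atan(0.5) is ~0.4636)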
sora · 8mo ago
That's not the point. Your algorithm has to come with a proof that it gives the accuracy IEEE requires. For instance, if we consider the error of a Taylor series in pure math, it only comes from the finiteness of the terms. But in floating-point math, errors come from everywhere. It's an entirely different discipline.
mad alex 1997 · 8mo ago
acos and asin have limited domains on the real numbers, and atan approaches pi/2 beyond a certain distance from 0. tan is arguably the monster of the standard trig functions, because sin and cos can just be scaled into, and then back out of, their proper domains. But I guess I see your point; it's probably bigger than a side project. I can deal with slightly slower but proven algorithms in exchange for avoiding the headache.
sora · 8mo ago
A lot bigger than a side project. There are tons of papers on the implementation of a single special function, let alone a whole library.
mad alex 1997 · 8mo ago
Well, I have been thinking about getting a PhD in Applied and Computational Mathematics; maybe I'll get around to it eventually.
sora · 8mo ago
I can't find the reference at the moment, but I think people use automated tools to synthesize these algorithms: https://www.sollya.org
sora · 8mo ago
GitHub - soarlab/FPTaylor: Tool for Rigorous Estimation of Round-Off Floating-Point Errors
GitHub - PRUNERS/FLiT: A project to quickly detect discrepancies in floating point computation across hardware, compilers, libraries and software