Are literals splatted in SIMD Operations?
Is there any performance difference if I write the first version sketched below instead of the second?
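Roughly, the two versions being compared (reconstructed from the linked GitHub discussion; the truncated snippet there shows the signature and the `alias splatted_two = SIMD[dtype, ...]` line, so the return statements are assumed):

```mojo
# Version 1: splat the constant once, at compile time, via an alias.
fn multiply_by_two[
    dtype: DType, simd_width: Int
](x: SIMD[dtype, simd_width]) -> SIMD[dtype, simd_width]:
    alias splatted_two = SIMD[dtype, simd_width](2)
    return x * splatted_two  # body assumed
```

```mojo
# Version 2: use the bare literal and let it be converted/splatted.
fn multiply_by_two[
    dtype: DType, simd_width: Int
](x: SIMD[dtype, simd_width]) -> SIMD[dtype, simd_width]:
    return x * 2  # body assumed
```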
In other words, is the literal `2` transformed into a splatted SIMD vector at runtime every time `multiply_by_two` is called? Or is this something that the compiler is able to optimize?

3 Replies
It seems like there must be some tiny overhead to convert from FloatLiteral to SIMD, but only by running the benchmark module could you say for sure.
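A rough comparison with the stdlib benchmark module might look like the sketch below (the dtype, vector width, and loop count are arbitrary choices, and `benchmark.run`, `benchmark.keep`, and `Report.print` are assumed to be available as in recent Mojo releases):

```mojo
import benchmark
from benchmark import keep

alias dtype = DType.float32
alias width = 8

fn with_literal():
    var x = SIMD[dtype, width](1.5)
    for _ in range(1000):
        x = x * 2  # bare literal on the right-hand side
    keep(x)  # keep the result so the loop isn't optimized away

fn with_alias():
    alias splatted_two = SIMD[dtype, width](2)
    var x = SIMD[dtype, width](1.5)
    for _ in range(1000):
        x = x * splatted_two  # pre-splatted compile-time constant
    keep(x)

fn main():
    print("literal:")
    benchmark.run[with_literal]().print()
    print("alias:")
    benchmark.run[with_alias]().print()
```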
Isn't that price prepaid at compile time?
This question was answered here: https://github.com/modularml/mojo/discussions/1416#discussioncomment-7986953