Are literals splatted in SIMD Operations?

Is there any performance difference if I write this
fn multiply_by_two[
    dtype: DType, simd_width: Int
](x: SIMD[dtype, simd_width]) -> SIMD[dtype, simd_width]:
    alias splatted_two = SIMD[dtype, simd_width](2.)
    return splatted_two * x
instead of this?
fn multiply_by_two[
    dtype: DType, simd_width: Int
](x: SIMD[dtype, simd_width]) -> SIMD[dtype, simd_width]:
    return 2.0 * x
In other words, is the literal 2.0 transformed into a splatted SIMD vector at runtime every time multiply_by_two is called? Or is this something that the compiler is able to optimize?
3 Replies
guidorice · 13mo ago
It seems like there must be some tiny overhead to convert from FloatLiteral to SIMD, but only by running the benchmark module could you say for sure.
sora · 13mo ago
isn't that price prepaid at compile time?
Leandro Campos (OP) · 12mo ago
GitHub
Are literals splatted in SIMD Operations? · modularml mojo · Discus...