The following function silently casts the arguments to f32 in the generated WGSL, resulting in loss of precision:
function integerDiv(a: number, b: number) {
  "use gpu";
  return d.u32(a) / d.u32(b);
}
I expected it to behave the same as this (working) function:
const integerDiv = tgpu.fn([d.u32, d.u32], d.u32)`(l: u32, r: u32) -> u32 {
  return l / r;
}`;
TypeGPU version: 0.10.2