Reputation: 732
Given I want to sum the first n terms of the series 1, 2, 3, ... with the following function in Rust
fn sum_sequence(x: u64) -> u64
{
let mut s: u64 = 0;
for n in 1..=x
{
s = s + n;
}
return s;
}
When I compile it for x64 architecture
cargo build --release
and run it with x=10000000000, the result is 13106511857580896768, which is fine.
But when I compile this very function to Webassembly (WASM)
cargo build --target wasm32-unknown-unknown --release
and run it with the same argument as before, x=10000000000:
wasmtime ./target/wasm32-unknown-unknown/release/sum_it.wasm --invoke sum_sequence 10000000000
then the result is -5340232216128654848.
I would not have expected any deviation in results between Rust being compiled to x64 in comparison to Rust being compiled to WASM. Also, from the WASM text file (below), I do not see why I should get a negative result when I run it with WASM.
Why does WASM show a different result, and what can I do to correct the calculation in WASM?
(module
(type (;0;) (func (param i64) (result i64)))
(func $sum_sequence (type 0) (param i64) (result i64)
(local i64 i64 i32)
block ;; label = @1
local.get 0
i64.eqz
i32.eqz
br_if 0 (;@1;)
i64.const 0
return
end
i64.const 1
local.set 1
i64.const 0
local.set 2
block ;; label = @1
loop ;; label = @2
local.get 1
local.get 2
i64.add
local.set 2
local.get 1
local.get 1
local.get 0
i64.lt_u
local.tee 3
i64.extend_i32_u
i64.add
local.tee 1
local.get 0
i64.gt_u
br_if 1 (;@1;)
local.get 3
br_if 0 (;@2;)
end
end
local.get 2)
(table (;0;) 1 1 funcref)
(memory (;0;) 16)
(global (;0;) (mut i32) (i32.const 1048576))
(global (;1;) i32 (i32.const 1048576))
(global (;2;) i32 (i32.const 1048576))
(export "memory" (memory 0))
(export "sum_sequence" (func $sum_sequence))
(export "__data_end" (global 1))
(export "__heap_base" (global 2)))
Upvotes: 2
Views: 393
Reputation: 14614
It seems to be because wasm does not support a native u64 type; it has only one 64-bit integer type, i64, with signedness chosen per operation, which is why it's using i64 as the type for the arithmetic operations. Since the true result overflows a 64-bit integer (the correct output is n * (n+1) / 2, i.e. 50000000005000000000), you're getting a wrapped value, and wasmtime interprets and prints that i64 as signed, which is why the result is negative.
Just for reference, the sum Σ n for n=0 to N equals N * (N+1) / 2, which I use from here on out since it's much faster computationally and correct for our purposes.
The result, 50000000005000000000, takes ~65.4 bits to represent accurately, which is why you get wrapping behavior on both x86_64 and wasm; the wrapped bit pattern is identical, only the type it is interpreted as when printed differs.
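This can be checked directly in Rust as well (a minimal sketch: compute the exact sum in u128, truncate, and reinterpret the same bits as signed):

```rust
fn main() {
    // Exact value of 10^10 * (10^10 + 1) / 2, computed in 128 bits
    let exact: u128 = 10_000_000_000u128 * 10_000_000_001 / 2;
    assert_eq!(exact, 50_000_000_005_000_000_000);

    // Truncating to 64 bits wraps modulo 2^64 -> the x86_64 result
    let wrapped = exact as u64;
    assert_eq!(wrapped, 13_106_511_857_580_896_768);

    // The very same bits read as a signed i64 -> the wasmtime output
    assert_eq!(wrapped as i64, -5_340_232_216_128_654_848);
}
```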
Using NumPy, we can clearly confirm this:
>>> import numpy as np
>>> a = np.uint64(10000000000)
>>> b = np.uint64(10000000001)
>>> (a >> np.uint64(1)) * b
13106511857580896768
>>> a = np.int64(10000000000)
>>> b = np.int64(10000000001)
>>> (a >> np.int64(1)) * b
-5340232216128654848
The values you are getting are due to unsigned and signed (two's complement) integer overflow. (Note: I'm using a right bitshift to simulate division by two; I could also have used the // operator.)
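Likewise, if you just want to recover the unsigned value from what wasmtime printed, reinterpreting the signed number as u64 gives back the x86_64 result (a sketch of the same two's-complement round trip):

```rust
fn main() {
    // The value wasmtime printed, reinterpreted as unsigned:
    let printed: i64 = -5_340_232_216_128_654_848;
    assert_eq!(printed as u64, 13_106_511_857_580_896_768);
}
```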
EDIT: Also, a good point was raised in the comments by Herohtar: if run in debug mode, it clearly overflows, panicking with 'attempt to add with overflow'.
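If you want overflow to be caught in release builds too, rather than silently wrapped, checked arithmetic with a wider intermediate type makes it explicit. A sketch using the closed form with u128 (the function signature differs from the original by returning Option):

```rust
// Returns None when the true sum does not fit in u64, instead of wrapping.
fn sum_sequence(x: u64) -> Option<u64> {
    // Closed form x * (x + 1) / 2, computed in 128 bits to avoid
    // intermediate overflow, then checked against the u64 range.
    let s = (x as u128) * (x as u128 + 1) / 2;
    u64::try_from(s).ok()
}

fn main() {
    assert_eq!(sum_sequence(10), Some(55));
    // 50000000005000000000 needs ~65.4 bits, so it does not fit in u64
    assert_eq!(sum_sequence(10_000_000_000), None);
}
```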
Upvotes: 4