Reputation: 17516
I know that Python maintains an internal cache of small-ish integers rather than creating them at runtime:
id(5)
4304101544
When repeating this code after some time in the same kernel, the id is stable over time:
id(5)
4304101544
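(For context: in CPython this cache covers -5 through 256 - an implementation detail, not a language guarantee. A quick sketch:)

```python
# CPython caches small ints (-5 to 256); any way of producing one
# of these values hands back the same cached object
a = 256
b = int("256")      # constructed at runtime, still the cached object
print(a is b)       # True in CPython

c = 257
d = int("257")      # outside the cache: a fresh object
print(c is d)       # False in CPython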
I thought that this wouldn't work for floating point numbers because it can't possibly maintain a pre-calculated list of all floating point numbers.
However, this code returns the same id twice:
id(4.33+1), id(5.33)
(5674699600, 5674699600)
After some time, repeating the same code returns a different memory location:
id(4.33 + 1), id(5.33)
(4962564592, 4962564592)
What's going on here?
Upvotes: 4
Views: 396
Reputation: 110591
The id mechanism in CPython is not only implementation dependent: it also depends on several runtime optimizations that may or may not be triggered by subtle code or context changes, along with the current interpreter state - and it should never, ever - NOT EVEN THIS ONCE - be relied upon.
That said, what you hit is a completely different mechanism from the small-integer caching - what you have is space reutilization in the interpreter's memory pool for objects.
In this case, you are hitting a cache for floats in the same code block, yes, along with a compile-time optimization (constant folding) which resolves constant expressions such as 4.33 + 1 at compile time (even if "compiling" happens instantly when you press enter in the REPL).
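The folding is visible in the compiled code object's constants (a CPython detail - the folded result of 1 + 4.33 is bit-for-bit equal to the literal 5.33):

```python
# the optimizer folds "1 + 4.33" into the single constant 5.33
code = compile("1 + 4.33", "<example>", "eval")
print(code.co_consts)   # contains 5.33; the operands are gone
```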
In [39]: id(1 + 4.33), id(5.33)
Out[39]: (139665743642672, 139665743642672)
^Even with a reference to the first float, the second one shares the same object: this is one kind of optimization.
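The sharing happens because, within a single code object, CPython's compiler also merges equal constants into one object - a sketch:

```python
def same_code_object():
    # both literals live in the same code object, so the compiler
    # deduplicates them into a single float constant (CPython detail)
    x = 5.33
    y = 5.33
    return x is y

print(same_code_object())   # True in CPython
```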
What could be happening was also:
id(4.33+1), id(5.33)
This is what takes place under the hood:
Python instantiates (or copies from the code object's constants) the 4.33 number, then "instantiates" the 1 (this will usually hit the optimization path for reusing small integers - but do not rely on that either), resolves the + and instantiates the 5.33. It then uses this number in the call to id; when that call returns, there are no remaining references to 5.33 and the object is deleted.
Then, after the ",", Python instantiates a new 5.33 - by coincidence in the same memory location occupied by the previous 5.33 - and the numbers happen to match.
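The lifecycle above can be sketched explicitly - here using float(...) to force a runtime allocation, since a bare literal would be a cached constant; the address reuse is an allocator coincidence, not a guarantee:

```python
a = float("5.33")   # a fresh float object is allocated
first = id(a)
del a               # refcount drops to zero; the object is freed
b = float("5.33")   # a new float; CPython's float free list often
second = id(b)      # hands back the just-freed slot
print(first == second)  # frequently True, but never guaranteed
```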
Just keep a reference to the former number around, and you will see a different ID:
In [41]: id(old:=(one + 4.33)), id(5.33)
Out[41]: (139665742657488, 139665743643856)
A reference is kept around for the first number, and there is no binary operation on literals (which would be folded at text->bytecode time): different objects.
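Since both objects are alive at the same time here, their ids are guaranteed to differ - easy to check (a runtime operand prevents folding; one is assumed to be bound to 1, as in the snippet above):

```python
one = 1                     # assumed binding, as in the snippet above
old = one + 4.33            # computed at runtime: not constant-folded
# two simultaneously live, distinct objects can never share an id
print(id(old) != id(5.33))  # True
print(old == 5.33)          # True: equal values, distinct objects
```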
Upvotes: 4
Reputation: 17516
It's not just that the object is garbage collected and the new object stored in the same location as the previous one after garbage collection.
Something different is at work here.
We can use the dis module to look at the bytecode generated:
import dis

def f():
    one, two = 4.3333333, 3.3333333 + 1.
    a, b = id(one), id(two)
    return one, two, a, b

dis.dis(f)
one, two, a, b = f()
This shows us the generated bytecode (and the values returned by f):
1 0 RESUME 0
2 2 LOAD_CONST 1 ((4.3333333, 4.3333333))
4 UNPACK_SEQUENCE 2
8 STORE_FAST 0 (one)
10 STORE_FAST 1 (two)
3 12 LOAD_GLOBAL 1 (NULL + id)
24 LOAD_FAST 0 (one)
26 PRECALL 1
30 CALL 1
40 LOAD_GLOBAL 1 (NULL + id)
52 LOAD_FAST 1 (two)
54 PRECALL 1
58 CALL 1
68 STORE_FAST 3 (b)
70 STORE_FAST 2 (a)
4 72 LOAD_FAST 0 (one)
74 LOAD_FAST 1 (two)
76 LOAD_FAST 2 (a)
78 LOAD_FAST 3 (b)
80 BUILD_TUPLE 4
82 RETURN_VALUE
(4.3333333, 4.3333333, 12424698960, 12424698960)
The ids of one and two are also stable over time:
>>> id(one), id(two)
(12424698960, 12424698960)
They are indeed the same object, because the interpreter folds the addition into a constant before the bytecode is generated, and then merges the equal constants.
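This can be confirmed without reading the disassembly, by inspecting the function's constants directly (a CPython detail; note the folded 3.3333333 + 1. landed on the same bits as the literal 4.3333333):

```python
def f():
    one, two = 4.3333333, 3.3333333 + 1.
    return one, two

# the RHS was folded into a single constant tuple, and the folded
# sum was merged with the equal float literal: one shared object
tup = next(c for c in f.__code__.co_consts if isinstance(c, tuple))
print(tup)                # (4.3333333, 4.3333333)
print(tup[0] is tup[1])   # True in CPython
```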
Upvotes: 1