Reputation: 3461
I have an object which needs to be "tagged" with 0-3 strings (out of a set of 20-some possibilities); these values are all unique and order doesn't matter. The only operation that needs to be done on the tags is checking if a particular one is present or not (specific_value in self.tags).
However, there's an enormous number of these objects in memory at once, to the point that it pushes the limits of my old computer's RAM. So saving a few bytes can add up.
With so few tags on each object, I doubt the lookup time is going to matter much. But: is there a memory difference between using a tuple and a frozenset here? Is there any other real reason to use one over the other?
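For concreteness, the two candidates I'm comparing look something like this (the tag names are invented for the example):
tags_tuple = ("fragile", "urgent")                 # tuple of tag strings
tags_frozenset = frozenset(("fragile", "urgent"))  # frozenset of the same tags
# The only operation ever performed on them:
print("urgent" in tags_tuple)      # True
print("urgent" in tags_frozenset)  # True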
Upvotes: 5
Views: 1484
Reputation: 2504
There is a possibility to reduce memory if you replace the tuple with a type from the recordclass library:
>>> from recordclass import make_arrayclass
>>> Triple = make_arrayclass("Triple", 3)
>>> from sys import getsizeof as sizeof
>>> sizeof(Triple("ab","cd","ef"))
40
>>> sizeof(("ab","cd","ef"))
64
The difference is equal to sizeof(PyGC_Head) + sizeof(Py_ssize_t).
P.S.: The numbers are measured on 64-bit Python 3.8.
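To get a rough sense of scale (the object count below is an invented assumption), the per-instance saving adds up:
n_objects = 1_000_000        # hypothetical number of tagged objects
per_object = 64 - 40         # tuple size minus Triple size, in bytes (from above)
print(per_object * n_objects / 2**20)  # ~22.9 MiB saved in container overhead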
Upvotes: 1
Reputation: 70715
Tuples are very compact. Sets are based on hash tables, and depend on having "empty" slots to make hash collisions less likely.
For a recent enough version of CPython, sys._debugmallocstats() displays lots of potentially interesting info. Here under a 64-bit Python 3.7.3:
>>> from sys import _debugmallocstats as d
>>> tups = [tuple("abc") for i in range(1000000)]
tuple("abc")
creates a tuple of 3 1-character strings, ('a', 'b', 'c')
. Here I'll edit out almost all the output:
>>> d()
Small block threshold = 512, in 64 size classes.
class size num pools blocks in use avail blocks
----- ---- --------- ------------- ------------
...
8 72 17941 1004692 4
Since we created a million tuples, it's a very good bet that the size class using 1004692 blocks is the one we want ;-) Each of the blocks consumes 72 bytes.
Switching to frozensets instead, the output shows that those consume 224 bytes each, a bit over 3x more:
>>> tups = [frozenset(t) for t in tups]
>>> d()
Small block threshold = 512, in 64 size classes.
class size num pools blocks in use avail blocks
----- ---- --------- ------------- ------------
...
27 224 55561 1000092 6
In this particular case, the other answer you got happens to give the same results:
>>> import sys
>>> sys.getsizeof(tuple("abc"))
72
>>> sys.getsizeof(frozenset(tuple("abc")))
224
While that's often true, it's not always so, because an object may require allocating more bytes than it actually needs, to satisfy HW alignment requirements. getsizeof() doesn't know anything about that, but _debugmallocstats() shows the number of bytes Python's small-object allocator actually needs to use.
For example,
>>> sys.getsizeof("a")
50
On a 32-bit box, 52 bytes actually need to be used, to provide 4-byte alignment. On a 64-bit box, 8-byte alignment is currently required, so 56 bytes need to be used. Under Python 3.8 (not yet released), on a 64-bit box 16-byte alignment is required, and 64 bytes will need to be used.
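As a small sketch of that rounding (the ALIGNMENT value is an assumption that depends on the Python version and platform, as described above):
import sys

ALIGNMENT = 8  # assumed: 8 bytes on a 64-bit Python before 3.8, 16 under 3.8

def allocated_size(obj):
    # Round getsizeof() up to the allocator's alignment to approximate
    # what the small-object allocator actually hands out.
    size = sys.getsizeof(obj)
    return -(-size // ALIGNMENT) * ALIGNMENT

print(sys.getsizeof("a"), allocated_size("a"))  # e.g. 50 and 56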
But ignoring all that, a tuple will always need less memory than any form of set with the same number of elements - and even less than a list with the same number of elements.
Upvotes: 7
Reputation: 4275
sys.getsizeof seems like the stdlib option you want... but I feel queasy about your whole use case:
import sys
t = ("foo", "bar", "baz")
f = frozenset(("foo","bar","baz"))
print(sys.getsizeof(t))
print(sys.getsizeof(f))
https://docs.python.org/3.7/library/sys.html#sys.getsizeof
All built-in objects will return correct results, but this does not have to hold true for third-party extensions as it is implementation specific.
...So don't get comfy with this solution
EDIT: Obviously @TimPeters' answer is more correct...
Upvotes: 4
Reputation: 1633
If you're trying to save memory, consider a dict mapping from the object (its identity) to a 32-bit integer of flags. If no flags are present, there is no entry in the dictionary.
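A minimal sketch of that scheme (the tag names and helper functions are invented for illustration):
# One bit per possible tag; ~20 tags fit comfortably in a 32-bit integer.
TAG_BITS = {"fragile": 1 << 0, "urgent": 1 << 1, "fireproof": 1 << 2}

flags = {}  # object -> int bitmask; objects with no tags simply have no entry

def add_tag(obj, tag):
    flags[obj] = flags.get(obj, 0) | TAG_BITS[tag]

def has_tag(obj, tag):
    return bool(flags.get(obj, 0) & TAG_BITS[tag])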
Upvotes: 2