Reputation: 1801
NumPy is an extremely useful library, and from using it I've found that it's capable of handling matrices which are quite large (10000 x 10000) easily, but begins to struggle with anything much larger (trying to create a matrix of 50000 x 50000 fails). Obviously, this is because of the massive memory requirements.
Is there a way to create huge matrices natively in NumPy (say 1 million by 1 million) without having several terabytes of RAM?
Upvotes: 100
Views: 125270
Reputation: 919
Fifteen years later, but maybe this will be useful for someone. There is the zarr library; it's still in an early stage of development and I have no experience with it:
https://zarr.readthedocs.io/en/stable/index.html
According to the docs, it is designed exactly for this purpose: storing large NumPy-like arrays on disk in chunks.
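A minimal sketch of what that looks like, assuming the zarr v2-style zarr.open API; the file name, shape, and chunking below are placeholders, not values from the docs:
import numpy as np
import zarr

# a chunked, disk-backed array; only the chunks you actually touch are materialized
z = zarr.open("big_matrix.zarr", mode="w",
              shape=(1_000_000, 1_000_000),
              chunks=(1_000, 1_000),
              dtype="f4")
z[0, :10] = np.arange(10)   # writes only the affected chunk
print(z[0, :10])            # reads come back as NumPy arrays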
Upvotes: 0
Reputation: 107287
Sometimes one simple solution is using a smaller type for your matrix items. Based on the range of numbers you need, you can pick a narrower dtype for your items. Because NumPy defaults to its largest types (int64/float64), this can be a helpful idea in many cases. Here is an example:
In [70]: a = np.arange(5)
In [71]: a[0].dtype
Out[71]: dtype('int64')
In [72]: a.nbytes
Out[72]: 40
In [73]: a = np.arange(0, 2, 0.5)
In [74]: a[0].dtype
Out[74]: dtype('float64')
In [75]: a.nbytes
Out[75]: 32
And with custom type:
In [80]: a = np.arange(5, dtype=np.int8)
In [81]: a.nbytes
Out[81]: 5
In [76]: a = np.arange(0, 2, 0.5, dtype=np.float16)
In [78]: a.nbytes
Out[78]: 8
Upvotes: 4
Reputation: 881695
To handle sparse matrices, you need the scipy package that sits on top of numpy -- see here for more details about the sparse-matrix options that scipy gives you.
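As a minimal sketch (the shape and values below are made up for illustration), a LIL matrix can be built incrementally and converted to CSR for fast arithmetic:
import numpy as np
from scipy import sparse

# a 1,000,000 x 1,000,000 matrix that only stores the nonzero entries
m = sparse.lil_matrix((1_000_000, 1_000_000), dtype=np.float32)
m[0, 0] = 1.0
m[50, 100] = 2.5
csr = m.tocsr()    # CSR is better for arithmetic and row slicing
print(csr.nnz)     # 2 stored values; dense float32 storage would need ~4 TB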
Upvotes: 24
Reputation: 8141
PyTables and NumPy are the way to go.
PyTables will store the data on disk in HDF format, with optional compression. My datasets often get 10x compression, which is handy when dealing with tens or hundreds of millions of rows. It's also very fast; my 5-year-old laptop can crunch through data doing SQL-like GROUP BY aggregation at 1,000,000 rows/second. Not bad for a Python-based solution!
Accessing the data as a NumPy recarray again is as simple as:
data = table[row_from:row_to]
The HDF library takes care of reading in the relevant chunks of data and converting to NumPy.
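A minimal sketch of that workflow; the file, table, and column names here are placeholders I made up for illustration, not taken from the answer:
import numpy as np
import tables

# a simple row layout; the column names are placeholders
class Measurement(tables.IsDescription):
    x = tables.Float64Col()
    y = tables.Float64Col()

# write rows in manageable chunks, with compression enabled
with tables.open_file("data.h5", mode="w") as h5:
    filters = tables.Filters(complevel=5, complib="zlib")
    table = h5.create_table(h5.root, "measurements", Measurement, filters=filters)
    chunk = np.zeros(1_000_000, dtype=[("x", "f8"), ("y", "f8")])
    table.append(chunk)
    table.flush()

# read a slice back; it comes out as a NumPy structured array
with tables.open_file("data.h5", mode="r") as h5:
    data = h5.root.measurements[0:10_000]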
Upvotes: 96
Reputation: 391852
Are you asking how to handle a 2,500,000,000-element matrix (the 50,000 x 50,000 case) without the roughly 20 billion bytes of RAM it would need at 8 bytes per element?
The way to handle billions of elements without billions of bytes of RAM is by not keeping the matrix in memory.
That means much more sophisticated algorithms to fetch it from the file system in pieces.
Upvotes: 3
Reputation: 3563
Make sure you're using a 64-bit operating system and a 64-bit version of Python/NumPy. Note that on 32-bit architectures you can typically address about 3GB of memory (with about 1GB lost to memory-mapped I/O and such).
With 64-bit and arrays larger than the available RAM, you can get away with virtual memory, though things will get slower if you have to swap. Also, memory maps (see numpy.memmap) are a way to work with huge files on disk without loading them into memory, but again, you need a 64-bit address space for this to be of much use. PyTables will do most of this for you as well.
Upvotes: 6
Reputation: 156158
Stefano Borini's post got me to look into how far along this sort of thing already is.
This is it. It appears to do basically what you want. HDF5 will let you store very large datasets, and then access and use them in the same ways NumPy does.
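For instance, a chunked HDF5 dataset can be written and read back through NumPy-style slicing. The sketch below uses h5py as the HDF5 binding; that choice, along with the file name, shape, and chunking, is my assumption for illustration since the answer's link isn't preserved here:
import h5py
import numpy as np

# create a chunked, compressed dataset; chunks are allocated lazily as written
with h5py.File("big.h5", "w") as f:
    dset = f.create_dataset("matrix", shape=(1_000_000, 1_000_000),
                            dtype="float32", chunks=(1_000, 1_000),
                            compression="gzip")
    dset[0, :10] = np.arange(10)

# slices come back as ordinary NumPy arrays
with h5py.File("big.h5", "r") as f:
    block = f["matrix"][0:100, 0:100]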
Upvotes: 12
Reputation: 33319
numpy.arrays are meant to live in memory. If you want to work with matrices larger than your RAM, you have to work around that. There are at least two approaches you can follow:
1. Exploit any sparseness in your matrix and use a sparse representation such as scipy.sparse.csc_matrix (see the sketch after this list).
2. Keep the matrix on disk and work on it in pieces, as several of the other answers here describe.
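A minimal construction sketch for the sparse approach; the shape, indices, and values below are made up for illustration:
import numpy as np
from scipy.sparse import csc_matrix

# only the nonzero entries are stored
rows = np.array([0, 10_000, 999_999])
cols = np.array([5, 42, 999_999])
vals = np.array([1.0, 2.5, -3.0])

m = csc_matrix((vals, (rows, cols)), shape=(1_000_000, 1_000_000))
print(m.nnz)          # 3 stored elements instead of 10**12
print(m[10_000, 42])  # 2.5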
Upvotes: 63
Reputation: 5843
You should be able to use numpy.memmap to memory-map a file on disk. With newer Python and a 64-bit machine, you should have the necessary address space without loading everything into memory. The OS handles keeping only part of the file in memory.
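A minimal sketch of that; the file name and shape are placeholders, and a 50,000 x 50,000 float32 map takes about 10 GB of disk rather than RAM:
import numpy as np

# create a disk-backed array; only the pages you touch are held in memory
m = np.memmap("big_matrix.dat", dtype=np.float32, mode="w+",
              shape=(50_000, 50_000))
m[0, :10] = np.arange(10)
m.flush()                      # push the changes out to the file

# reopen read-only and touch just a small window
r = np.memmap("big_matrix.dat", dtype=np.float32, mode="r",
              shape=(50_000, 50_000))
print(r[0, :10])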
Upvotes: 32
Reputation: 143795
As far as I know about numpy, no, but I could be wrong.
I can propose this alternative solution: write the matrix to disk and access it in chunks. I suggest the HDF5 file format. If you need it to be transparent, you can reimplement the ndarray interface to page your disk-stored matrix into memory. Be careful to sync the data back to disk if you modify it.
Upvotes: 1