Jay

Reputation: 2868

Serialize a group of integers using cython

I saw this sample code from the Pyrobuf page for serializing an integer ~3 times faster than via struct.pack:

def ser2(): 
    cdef int x = 42 
    return (<char *>&x)[:sizeof(int)]
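
For context, this returns the raw native-int bytes, i.e. the same result as struct.pack with the native format; a quick check (assuming the compiled ser2 is importable):

import struct
assert ser2() == struct.pack('@i', 42)   # e.g. b'*\x00\x00\x00' on a little-endian machine with 4-byte ints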

I was wondering how this could be done for a group of integers. I saw that Cython has int[:] and array.array types, but I still don't understand how to take a list of integers, for example, and get the same (but faster) result as via struct.pack('i'*len(num_list), *num_list). map() didn't seem to work any faster for me, and I'm wondering how this should be done.
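
For concreteness, something along these lines with a typed memoryview is what I had in mind (untested sketch; the function name is made up):

%%cython
from cpython cimport array
import array

def ser_via_memoryview(lst):
    # sketch: let array.array copy the ints, then view its buffer through
    # a typed memoryview and reinterpret that buffer as raw chars
    cdef array.array buf = array.array('i', lst)
    cdef int[::1] view = buf
    if view.shape[0] == 0:
        return b''
    return (<char *>&view[0])[:view.shape[0] * sizeof(int)]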

Upvotes: 1

Views: 786

Answers (1)

ead

Reputation: 34357

I assume you want to speed up the following (Python3):

import struct 
lst=list(range(100))  #any other size
struct.pack('i'*len(lst), *lst)

Without struct and Cython, you could achieve it as follows in pure Python:

import array
bytes(array.array('i', lst))
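
Both produce byte-for-byte identical output on the same platform, since both use the native int representation:

import array
import struct

lst = list(range(100))
assert bytes(array.array('i', lst)) == struct.pack('i' * len(lst), *lst)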

This is, however, somewhat slower than the struct module:

>>> %timeit struct.pack('i'*len(lst), *lst)
2.38 µs ± 9.48 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
>>> %timeit bytes(array.array('i',lst))
3.94 µs ± 92 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

However, Cython can be used to speed up the creation of the array; for the documentation, see here (arrays) and here (str/bytes):

%%cython
import array
from cpython cimport array

def ser_int_list(lst):
    cdef Py_ssize_t n = len(lst)
    cdef array.array res = array.array('i')
    array.resize(res, n)                          # preallocate memory
    for i in range(n):
        res.data.as_ints[i] = lst[i]              # lst.__getitem__() needs a Python integer, so leave i a Python int (not cdef)
    return res.data.as_chars[:n * sizeof(int)]    # str on Python 2, bytes on Python 3
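
A quick sanity check that this produces exactly the same bytes as the struct version:

import struct
lst = list(range(100))
assert ser_int_list(lst) == struct.pack('i' * len(lst), *lst)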

The timings show the following performance:

#list_size    struct-code    cython-code    speed-up
     1           343 ns         238 ns        1.5
    10           619 ns         283 ns         2
   100          2.38 µs        0.68 µs        3.5
  1000          21.6 µs        5.11 µs         4
 10000           266 µs        47.5 µs        5.5 

i.e. Cython provides some speed-up: from about 1.5× for small lists up to 5.5× for large lists.

This could probably be tweaked even further, but I hope you get the idea.
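
For example, one possible tweak (sketched below, untimed) is to preallocate the buffer with array.clone from Cython's cpython.array helpers instead of creating an empty array and resizing it:

%%cython
from cpython cimport array
import array

cdef array.array _int_template = array.array('i', [])

def ser_int_list2(lst):
    cdef Py_ssize_t n = len(lst)
    # clone the template: same typecode, n uninitialized elements
    cdef array.array res = array.clone(_int_template, n, zero=False)
    for i in range(n):
        res.data.as_ints[i] = lst[i]
    return res.data.as_chars[:n * sizeof(int)]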


Testing code:

import struct
for n in [1, 10, 10**2, 10**3, 10**4]:
    print("N =", n)
    lst = list(range(n))
    print("struct:")
    %timeit struct.pack('i'*len(lst), *lst)
    print("cython:")
    %timeit ser_int_list(lst)

Upvotes: 3
