Shweta

Reputation: 1161

reading rows of big csv file in python

I have a very big csv file which I cannot load in memory in full. So I want to read it piece by piece, convert it into numpy array and then do some more processing.

I already checked: Lazy Method for Reading Big File in Python?

But the problem there is that it uses a plain file reader, and I am unable to find any option for specifying a size in csv.reader.

Also, since I want to convert rows into a numpy array, I don't want to read any line in half; so rather than specifying a size, I want something where I can specify the number of rows in the reader.

Is there any built-in function or easy way to do it?

Upvotes: 5

Views: 2341

Answers (2)

dano

Reputation: 94881

The csv.reader won't read the whole file into memory. It lazily iterates over the file, line by line, as you iterate over the reader object. So you can use the reader as you normally would, and simply break out of your iteration after you've read however many lines you want. You can see this in the C code used to implement the reader object.

Initializer for the reader object:
static PyObject *
csv_reader(PyObject *module, PyObject *args, PyObject *keyword_args)
{
    PyObject * iterator, * dialect = NULL;
    ReaderObj * self = PyObject_GC_New(ReaderObj, &Reader_Type);

    if (!self)
        return NULL;

    self->dialect = NULL;
    self->fields = NULL;
    self->input_iter = NULL;
    self->field = NULL;
    // stuff we don't care about here
    // ...
    self->input_iter = PyObject_GetIter(iterator);  // here we save the iterator (file object) we passed in
    if (self->input_iter == NULL) {
        PyErr_SetString(PyExc_TypeError,
                        "argument 1 must be an iterator");
        Py_DECREF(self);
        return NULL;
    }

static PyObject *
Reader_iternext(ReaderObj *self)  // This is what gets called when you call `next(reader_obj)` (which is what a for loop does internally)
{
    PyObject *fields = NULL;
    Py_UCS4 c;
    Py_ssize_t pos, linelen;
    unsigned int kind;
    void *data;
    PyObject *lineobj;

    if (parse_reset(self) < 0)
        return NULL;
    do {
        lineobj = PyIter_Next(self->input_iter);  // Equivalent to calling `next(input_iter)`
        if (lineobj == NULL) {
            /* End of input OR exception */
            if (!PyErr_Occurred() && (self->field_len != 0 ||
                                      self->state == IN_QUOTED_FIELD)) {
                if (self->dialect->strict)
                    PyErr_SetString(_csvstate_global->error_obj,
                                    "unexpected end of data");
                else if (parse_save_field(self) >= 0)
                    break;
            }
            return NULL;
        }

As you can see, next(reader_object) calls next(file_object) internally. So you're iterating over both line by line, without reading the entire file into memory.
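For example, you can pull a fixed number of rows at a time from the lazy reader with itertools.islice and convert each chunk to a numpy array (a minimal sketch; the in-memory sample data and the chunk size of 2 are placeholders for your real file and chunk size):

```python
import csv
import io
from itertools import islice

import numpy as np

# Stand-in for a large file opened with open(filename, newline='')
f = io.StringIO("1,2,3\n4,5,6\n7,8,9\n10,11,12\n")

reader = csv.reader(f)
while True:
    # islice pulls at most 2 rows from the reader; nothing beyond
    # those rows has been read from the file yet.
    chunk = list(islice(reader, 2))
    if not chunk:
        break
    arr = np.asarray(chunk, dtype=float)  # shape (<=2, 3)
    # ... process arr here ...
```

Each pass through the loop reads only the next chunk of rows, so peak memory stays proportional to the chunk size, not the file size.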

Upvotes: 2

JimmyK

Reputation: 1040

I used this function. The basic idea is to make a generator that yields the numbers in the file.

import numpy as np

def iter_loadtxt(filename, delimiter=',', skiprows=0, read_range=None, dtype=float):
    '''
    Read the file line by line and convert it to a Numpy array.
    :param delimiter: character
    :param skiprows : int
    :param read_range: [int, int] or None. Set it to None and the function will read the whole file.
    :param dtype: type
    '''
    def iter_func():
        with open(filename, 'r') as infile:
            for _ in range(skiprows):
                next(infile)
            if read_range is None:
                for line in infile:
                    items = line.rstrip().split(delimiter)
                    for item in items:
                        yield dtype(item)
            else:
                counter = 0
                for line in infile:
                    if counter < read_range[0]:
                        counter += 1
                    else:
                        counter += 1
                        # split the line into fields before yielding,
                        # otherwise we would iterate over characters
                        items = line.rstrip().split(delimiter)
                        for item in items:
                            yield dtype(item)

                    if counter >= read_range[1]:
                        break

        # number of fields in the last line read, used for reshaping
        iter_loadtxt.rowlength = len(line.rstrip().split(delimiter))

    data = np.fromiter(iter_func(), dtype=dtype)
    data = data.reshape((-1, iter_loadtxt.rowlength))
    return data
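The core trick here, feeding a flat generator of scalars to np.fromiter and reshaping into rows, can be seen in isolation (a minimal sketch; the in-memory buffer stands in for a file on disk, and the 3-column shape is an assumption about the data):

```python
import io

import numpy as np

# Stand-in for a big CSV file on disk
buf = io.StringIO("1,2,3\n4,5,6\n7,8,9\n")

def values(f, delimiter=','):
    # Yield each number in the file one at a time, so only
    # one line of text is held in memory at once.
    for line in f:
        for item in line.rstrip().split(delimiter):
            yield float(item)

# np.fromiter consumes the generator lazily into a 1-D array,
# then reshape recovers the (rows, columns) structure.
arr = np.fromiter(values(buf), dtype=float).reshape(-1, 3)
```

np.fromiter builds the array directly from the iterator without materializing an intermediate list, which is what keeps the memory footprint small.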

Upvotes: 0
