Bossie

Reputation: 3612

How to map CSV records to a bean?

I'm looking for a Java library that can help me parse a CSV file containing pipe-delimited records and create instances of my bean class from them.

I've looked into several alternatives such as SuperCSV, OpenCSV, BeanIO, JFileHelper, jsefa, ... but none of them seems to have what it takes.

Requirements of the library:

  1. support records with a variable number of fields
  2. provide iterator-style access so the file is never loaded entirely into memory
  3. support mapping a field to its actual type, i.e. be able to take a date field and put a java.util.Date into my bean instead of a String
  4. let me supply my own factory object to create the beans, instead of defaulting to Class.newInstance()

All the libraries I've looked into seem to lack requirement #4.
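
For reference, requirements #2 and #3 are roughly what I already get from something like SuperCSV: a pipe-delimited preference, cell processors such as ParseDate, and record-at-a-time reading. The sketch below is written from memory, so the exact signatures may need checking; the catch is the read() call, which makes CsvBeanReader instantiate the bean itself via reflection.

```java
import java.io.FileReader;
import java.util.Date;

import org.supercsv.cellprocessor.Optional;
import org.supercsv.cellprocessor.ParseDate;
import org.supercsv.cellprocessor.ift.CellProcessor;
import org.supercsv.io.CsvBeanReader;
import org.supercsv.io.ICsvBeanReader;
import org.supercsv.prefs.CsvPreference;

public class SuperCsvSketch {

    // Hypothetical bean with a no-arg constructor and setters, as CsvBeanReader requires.
    public static class MyBean {
        private String name;
        private Date created;
        public void setName(String name) { this.name = name; }
        public void setCreated(Date created) { this.created = created; }
    }

    public static void main(String[] args) throws Exception {
        // Pipe-delimited preference: quote char, delimiter, end-of-line symbols.
        CsvPreference pipeDelimited = new CsvPreference.Builder('"', '|', "\n").build();

        try (ICsvBeanReader reader =
                     new CsvBeanReader(new FileReader("records.csv"), pipeDelimited)) {
            String[] header = {"name", "created"};
            CellProcessor[] processors = {
                    new Optional(),                 // plain String field
                    new ParseDate("yyyy-MM-dd")     // requirement #3: a real java.util.Date
            };

            // Requirement #2: records are streamed one at a time, never loaded all at once.
            MyBean bean;
            while ((bean = reader.read(MyBean.class, header, processors)) != null) {
                // ... hand the bean to the persistence layer here
            }
        }
    }
}
```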

I can live with reflection, but the problem is that it still creates a new bean object for every line in the CSV file. Since the only thing I do with a bean at this point is pass it to my persistence layer and store it in the DB, it makes sense to keep a handful of bean instances in a pool and supply a factory that hands out instances from that pool. That way I can re-use my instances, and parsing a 100000-line CSV file won't leave 100000 instances sitting in memory until the GC comes along.
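
To make requirement #4 concrete, this is the kind of factory contract I'm imagining (all names here are hypothetical, purely to illustrate the idea): the library would call my factory instead of Class.newInstance(), and the factory would hand out instances from a pool.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Hypothetical factory contract a parsing library could call instead of Class.newInstance().
interface BeanFactory<T> {
    T createBean();            // called once per parsed record
    void releaseBean(T bean);  // called after the record has been persisted
}

// Minimal pooled implementation: re-uses instances so parsing a 100000-line
// file doesn't create 100000 short-lived objects.
class PooledBeanFactory<T> implements BeanFactory<T> {

    private final Deque<T> pool = new ArrayDeque<>();
    private final Supplier<T> constructor;   // how to create a fresh instance
    private final Consumer<T> reset;         // how to clear an instance before re-use

    PooledBeanFactory(Supplier<T> constructor, Consumer<T> reset) {
        this.constructor = constructor;
        this.reset = reset;
    }

    @Override
    public T createBean() {
        T bean = pool.poll();
        return bean != null ? bean : constructor.get();
    }

    @Override
    public void releaseBean(T bean) {
        reset.accept(bean);  // wipe the previous record's state
        pool.push(bean);
    }
}
```

A library meeting requirement #4 would simply call createBean() for every parsed record and let me call releaseBean() once the record has been stored.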

Does anyone know of a library that can handle all these requirements?

Upvotes: 2

Views: 931

Answers (1)

stenix

Reputation: 3106

This might be an alternative: https://github.com/org-tigris-jsapar/jsapar
It will probably fall short on requirement #4, though.

Here, you can find a more comprehensive list of alternatives: https://org-tigris-jsapar.github.io/jsapar/links

EDIT
As of jsapar version 1.8, it is possible to customize Java object creation with an external factory class, so requirement #4 should now be satisfied as well.

Upvotes: 1
