Reputation: 1905
Instead of converting an entire CSV file to an object, is there a simple API that takes in a single CSV or TSV string and converts it to an object? The APIs I've found so far are geared towards converting a CSV/TSV file to a list of objects.
Obviously I could just split the string and call a constructor, but I was wondering if there is a clean API I could use.
Upvotes: 3
Views: 6105
Reputation: 3258
I was dealing with a similar issue. In my case I wanted to import a single CSV row at a time into a single POJO, since I was getting my data in the form of discrete single-line WebSocket updates. In the end, Jackson worked best for me, as I didn't have to put everything into a list of POJOs first.
Here's the code:
// Reusable mapper, schema and reader (e.g. as fields, created once):
private final CsvMapper mapper = new CsvMapper();
private final CsvSchema schema = mapper.schemaFor(Pojo.class).withColumnSeparator('|');
private final ObjectReader r = mapper.readerFor(Pojo.class).with(schema);

// Per update:
String csvString = "rick|sanchez|99";
Pojo pojo = r.readValue(csvString);
For this to work, you also need to add the following annotation to your POJO:
@JsonPropertyOrder({"firstName","lastName","age"})
As far as I know, it's the only library that easily lets you parse a single CSV line into a single POJO instance. Obviously you could also do this by hand via a constructor, but these libraries handle type conversions for you, so it's particularly useful if your POJO contains lots of different attributes.
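For reference, here is a minimal sketch of what such a POJO could look like; the class name Pojo and its field names are assumptions matching the sample row and the annotation above:

import com.fasterxml.jackson.annotation.JsonPropertyOrder;

// Hypothetical POJO matching the "rick|sanchez|99" sample row.
@JsonPropertyOrder({ "firstName", "lastName", "age" })
public class Pojo {
    private String firstName;
    private String lastName;
    private int age;

    public Pojo() {
    }

    // Getters and setters (needed by Jackson to populate the fields) omitted for brevity.
}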
Upvotes: 0
Reputation: 6289
Go with uniVocity-parsers, as it is at least twice as fast as SuperCSV and has way more features.
For example, let's say your bean is:
class TestBean {

    // If the value parsed in the quantity column is "?" or "-", it will be replaced by null.
    @NullString(nulls = { "?", "-" })
    // If a value resolves to null, it will be converted to the String "0".
    @Parsed(defaultNullRead = "0")
    // The attribute type defines which conversion will be executed when processing the value;
    // in this case, IntegerConversion will be used. The attribute name will be matched
    // against the column header in the file automatically.
    private Integer quantity;

    @Trim
    @LowerCase
    // The value for the comments attribute is in the column at index 4
    // (0 is the first column, so this means the fifth column in the file).
    @Parsed(index = 4)
    private String comments;

    // You can also explicitly give the name of a column in the file.
    @Parsed(field = "amount")
    private BigDecimal amount;

    @Trim
    @LowerCase
    // Values "no", "n" and "null" will be converted to false; values "yes" and "y" will be converted to true.
    @BooleanString(falseStrings = { "no", "n", "null" }, trueStrings = { "yes", "y" })
    @Parsed
    private Boolean pending;
}
Now, to read your input as a list of TestBean:
// BeanListProcessor converts each parsed row to an instance of a given class, then stores each instance into a list.
BeanListProcessor<TestBean> rowProcessor = new BeanListProcessor<TestBean>(TestBean.class);
CsvParserSettings parserSettings = new CsvParserSettings();
parserSettings.setRowProcessor(rowProcessor);
parserSettings.setHeaderExtractionEnabled(true);
CsvParser parser = new CsvParser(parserSettings);
parser.parse(getReader("/examples/bean_test.csv"));
// The BeanListProcessor provides a list of objects extracted from the input.
List<TestBean> beans = rowProcessor.getBeans();
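Since the question is about parsing a string rather than a file, note that the parser accepts any java.io.Reader, so you can wrap your input in a StringReader. A hedged, self-contained sketch (the CSV content below is a made-up example consistent with the annotations above):

// Hypothetical CSV input: a header row plus one data row.
// Column index 4 (the fifth column) holds the comments, matching @Parsed(index = 4).
String csvString = "quantity,amount,pending,unused,comments\n" +
                   "5,100.00,yes,x,  Some Comments  ";

BeanListProcessor<TestBean> stringRowProcessor = new BeanListProcessor<TestBean>(TestBean.class);
CsvParserSettings stringSettings = new CsvParserSettings();
stringSettings.setRowProcessor(stringRowProcessor);
stringSettings.setHeaderExtractionEnabled(true);

new CsvParser(stringSettings).parse(new java.io.StringReader(csvString));
List<TestBean> beansFromString = stringRowProcessor.getBeans();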
To parse TSV files, just change the combination of CsvParserSettings & CsvParser to TsvParserSettings & TsvParser.
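For example, a hedged sketch of the TSV equivalent of the snippet above (assuming a bean_test.tsv resource exists):

// Same processing logic, using the TSV parser classes instead.
BeanListProcessor<TestBean> tsvRowProcessor = new BeanListProcessor<TestBean>(TestBean.class);
TsvParserSettings tsvSettings = new TsvParserSettings();
tsvSettings.setRowProcessor(tsvRowProcessor);
tsvSettings.setHeaderExtractionEnabled(true);

TsvParser tsvParser = new TsvParser(tsvSettings);
tsvParser.parse(getReader("/examples/bean_test.tsv")); // hypothetical TSV resource
List<TestBean> tsvBeans = tsvRowProcessor.getBeans();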
Disclosure: I am the author of this library. It's open-source and free (Apache V2.0 license).
Upvotes: 3
Reputation: 7706
You can do this with Jackson. It looks pretty similar to the other answers but seems to perform better than SuperCSV according to their tests.
Define your POJO (both the annotation and the constructor seem to be necessary):
@JsonPropertyOrder({ "foo", "bar" })
public class FooBar {
private String foo;
private String bar;
public FooBar() {
}
// Setters, getters, toString()
}
Then parse it:
String input = "1,2\n3,4";
StringReader reader = new StringReader(input);

CsvMapper m = new CsvMapper();
CsvSchema schema = m.schemaFor(FooBar.class).withoutHeader().withLineSeparator("\n").withColumnSeparator(',');

try {
    MappingIterator<FooBar> r = m.reader(FooBar.class).with(schema).readValues(reader);
    while (r.hasNext()) {
        System.out.println(r.nextValue());
    }
} catch (JsonProcessingException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
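If, as in the question, you only have a single CSV line and want a single object back rather than an iterator, the same mapper and schema can be reused; a hedged sketch:

try {
    // Parse one CSV line straight into a single FooBar instance.
    FooBar single = m.reader(FooBar.class).with(schema).readValue("1,2");
    System.out.println(single);
} catch (IOException e) {
    e.printStackTrace();
}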
Upvotes: 5
Reputation: 14580
In the case of SuperCSV, which you mentioned in a comment, you could pass it a String wrapped in a StringReader, i.e.
CsvBeanReader beanReader = new CsvBeanReader(new StringReader(theString), preferences);
beanReader.read(theBean, nameMapping);
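For completeness, a hedged, self-contained sketch of that approach; the CSV line, the name mapping and the Person bean are assumptions for illustration, and CellProcessor, NotNull and ParseInt come from SuperCSV's cell processor package:

// Hypothetical single CSV line mapped onto a bean with
// firstName, lastName and age properties (setters required).
String theString = "rick,sanchez,99";
String[] nameMapping = { "firstName", "lastName", "age" };
CellProcessor[] processors = { new NotNull(), new NotNull(), new ParseInt() };

CsvBeanReader beanReader = new CsvBeanReader(new StringReader(theString),
        CsvPreference.STANDARD_PREFERENCE);
try {
    Person person = beanReader.read(Person.class, nameMapping, processors);
    System.out.println(person);
} finally {
    beanReader.close();
}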
Upvotes: 0
Reputation: 2414
I'm using this API: http://jsefa.sourceforge.net/
You can use annotations to convert your entities to and from CSV.
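A hedged sketch of what that can look like with JSefa's annotations; the Person class, its fields and the sample input are assumptions for illustration (JSefa's default CSV delimiter is a semicolon):

// Hypothetical annotated entity; field positions start at 1.
@CsvDataType()
public class Person {
    @CsvField(pos = 1)
    String firstName;

    @CsvField(pos = 2)
    String lastName;
}

// Deserialize a single CSV string:
Deserializer deserializer = CsvIOFactory.createFactory(Person.class).createDeserializer();
deserializer.open(new StringReader("rick;sanchez"));
while (deserializer.hasNext()) {
    Person p = deserializer.next();
    // use p
}
deserializer.close(true);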
Upvotes: 1