LA_

Reputation: 20429

How to remove duplicated rows with less data in R?

Let's say I have the following data table (as data):

row,or,d,ddate,rdate,changes,class,price,fdate,company,number,minutes,added,source
1,VA1,VA2,2014-05-24,,0,0,2124,2014-05-22 15:50:16,,,,2014-05-22 12:20:03,tp
2,VA1,VA2,2014-05-26,,0,0,2124,2014-05-22 15:03:44,,,,2014-05-22 12:20:03,tp
3,VA1,VA2,2014-05-26,,0,0,2124,2014-05-22 15:03:44,A1,,,2014-05-22 12:20:03,tp
4,VA1,VA2,2014-06-05,,0,0,2124,2014-05-22 15:48:24,,,,2014-05-22 12:20:03,tp
5,VA1,VA2,2014-06-09,,0,0,2124,2014-05-22 15:37:35,,,,2014-05-22 12:20:03,tp
6,VA1,VA2,2014-06-16,,0,0,2124,2014-05-22 14:17:33,,,,2014-05-22 12:20:03,tp
7,VA1,VA2,2014-06-16,,0,0,2124,2014-05-22 14:17:33,,,,2014-05-22 12:20:03,tp

I would like to delete duplicated rows. If I do data <- unique(data, by = NULL), then only the last row (row 7) is deleted, but I would like to delete row 2 as well. I can define keys with setkey():

setkey(data, or,d,ddate,rdate,changes,class,price,fdate,number,minutes,added,source)

and then unique() will delete either row 2 or row 3. But I would like to delete the rows that have less data and keep the rows with more data, i.e. in the case above row 2 should be deleted, but row 3 should remain since it has an additional value in the company column. How can I do this?

Upvotes: 1

Views: 301

Answers (1)

eddi

Reputation: 49448

How about this:

# whatever the important columns are for your uniqueness criterion
important.cols = c('or','d','ddate','rdate','changes','class','price','fdate')

# within each duplicate group, pick the row with the most non-empty elements;
# .SD holds the remaining (non-grouping) columns, e.g. company
data[, .SD[which.max(rowSums(.SD != "", na.rm = TRUE))], by = important.cols]
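For reference, here is a minimal reproducible sketch of this approach against the sample from the question, assuming the data is read with fread as character columns (so empty cells compare as ""). On ties (rows 6 and 7 are fully identical), which.max keeps the first row of the group.

```r
library(data.table)

# sample data from the question
csv <- "row,or,d,ddate,rdate,changes,class,price,fdate,company,number,minutes,added,source
1,VA1,VA2,2014-05-24,,0,0,2124,2014-05-22 15:50:16,,,,2014-05-22 12:20:03,tp
2,VA1,VA2,2014-05-26,,0,0,2124,2014-05-22 15:03:44,,,,2014-05-22 12:20:03,tp
3,VA1,VA2,2014-05-26,,0,0,2124,2014-05-22 15:03:44,A1,,,2014-05-22 12:20:03,tp
4,VA1,VA2,2014-06-05,,0,0,2124,2014-05-22 15:48:24,,,,2014-05-22 12:20:03,tp
5,VA1,VA2,2014-06-09,,0,0,2124,2014-05-22 15:37:35,,,,2014-05-22 12:20:03,tp
6,VA1,VA2,2014-06-16,,0,0,2124,2014-05-22 14:17:33,,,,2014-05-22 12:20:03,tp
7,VA1,VA2,2014-06-16,,0,0,2124,2014-05-22 14:17:33,,,,2014-05-22 12:20:03,tp"
data <- fread(csv, colClasses = "character")

important.cols <- c('or','d','ddate','rdate','changes','class','price','fdate')

# group by the uniqueness columns; within each group, .SD contains the
# remaining columns (row, company, number, minutes, added, source), and
# which.max selects the row with the largest count of non-empty cells
result <- data[, .SD[which.max(rowSums(.SD != "", na.rm = TRUE))],
               by = important.cols]

result$row  # rows 1, 3, 4, 5, 6 are kept; rows 2 and 7 are dropped
```

Row 2 loses to row 3 because row 3 has a value in company; rows 6 and 7 tie, so the first (row 6) is kept.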

Upvotes: 0
