Toolbox

Reputation: 2491

Remove neighbour values that are duplicates in xts

In [xts1$master_decision] I am trying to remove rows whose values are identical to the value one cell above. I am aiming to do this in base R without involving any further packages.

If there is a way of solving this vectorized, skipping the for-loop, that is fine also.

# --------------------------------------
# Construct xts data.
# --------------------------------------

rows_to_build <- 6

dates <- seq(
  # "CEST" is not a valid Olson tz name, and seq() ignores a tz argument
  # anyway; the time zone belongs inside as.POSIXct().
  as.POSIXct("2019-01-01 09:01:00", tz = "Europe/Berlin"),
  length.out = rows_to_build,
  by = "1 min"
  )

master_decision <- c(
            # - Clarification what the for-loop should do:
    3,      # Keep (no [3] in cell above)
    2,      # Keep (no [2] in cell above)
    2,      # Delete (duplicate of [2] in cell above)
    3,      # Keep (no [3] in cell above)
    3,      # Delete (duplicate of [3] in cell above)
    2       # Keep (no [2] in cell above)
)

library(xts)

data <- data.frame(master_decision)
xts1 <- xts(x = data, order.by = dates)


rm(list = ls()[! ls() %in% c("xts1")]) # Only keep [xts1].


# ------------------------------------------------------------
# For loop with purpose to remove duplicates that are grouped.
# ------------------------------------------------------------

for (i in 2:nrow(xts1)) {
    if(xts1[[i]] == xts1[[i-1]]) {
        xts1[-c(i)]
    }
}
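As written, the loop never changes xts1: `xts1[-c(i)]` computes a subset and discards it, because nothing is assigned back. Deleting rows while the loop is still running would also shift the remaining row numbers. A minimal working sketch (rebuilding the example data from above) collects the duplicate rows first and drops them in one step:

```r
library(xts)

# Rebuild the example series from above.
dates <- seq(as.POSIXct("2019-01-01 09:01:00"), length.out = 6, by = "1 min")
xts1 <- xts(data.frame(master_decision = c(3, 2, 2, 3, 3, 2)), order.by = dates)

# Collect the offending row numbers first, then drop them all at once,
# so the indices never shift while the loop is running.
drop_idx <- integer(0)
for (i in 2:nrow(xts1)) {
  if (as.numeric(xts1[i]) == as.numeric(xts1[i - 1])) {
    drop_idx <- c(drop_idx, i)
  }
}
if (length(drop_idx) > 0) {
  xts1 <- xts1[-drop_idx, ]
}
```

This keeps only the first row of each run of identical values, which matches the wanted outcome below.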

xts1 prior to running the for-loop:

                    master_decision
2019-01-01 09:01:00               3
2019-01-01 09:02:00               2
2019-01-01 09:03:00               2
2019-01-01 09:04:00               3
2019-01-01 09:05:00               3
2019-01-01 09:06:00               2

Outcome (row with timestamp [09:05:00] deleted):

                    master_decision
2019-01-01 09:01:00               3
2019-01-01 09:02:00               2
2019-01-01 09:03:00               2
2019-01-01 09:04:00               3
2019-01-01 09:06:00               2

Wanted outcome (rows with timestamps [09:03:00] & [09:05:00] deleted):

2019-01-01 09:01:00               3
2019-01-01 09:02:00               2
2019-01-01 09:04:00               3
2019-01-01 09:06:00               2

Upvotes: 1

Views: 106

Answers (2)

otwtm

Reputation: 1999

This does the job as well. Get the first indices of each run of identical values, then keep only those rows.

# master_decision and dates were removed by rm() above, so work from xts1.
vals <- as.numeric(xts1)
idx <- cumsum(c(1, rle(vals)$lengths))
idx <- idx[-length(idx)]

xts1 <- xts1[idx, ]

2019-01-01 09:01:00    3
2019-01-01 09:02:00    2
2019-01-01 09:04:00    3
2019-01-01 09:06:00    2
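If the rle() step is unfamiliar: it compresses consecutive repeats into run values and run lengths, and the cumulative sum of the lengths gives the position where each run starts. A small illustration on the raw vector:

```r
v <- c(3, 2, 2, 3, 3, 2)
r <- rle(v)
r$values    # 3 2 3 2  (one entry per run)
r$lengths   # 1 2 2 1

starts <- cumsum(c(1, r$lengths))   # 1 2 4 6 7
starts <- starts[-length(starts)]   # drop the trailing "one past the end": 1 2 4 6
v[starts]                           # 3 2 3 2
```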

Upvotes: 4

Ronak Shah

Reputation: 389175

You could use coredata from zoo and, by subsetting the data, keep only the values that differ from the previous value.

library(zoo)
xts1[c(TRUE, coredata(xts1)[-length(xts1)] != coredata(xts1)[-1]), ]

#                    master_decision
#2019-01-01 09:01:00               3
#2019-01-01 09:02:00               2
#2019-01-01 09:04:00               3
#2019-01-01 09:06:00               2

Or, to keep it entirely in base R, use as.numeric:

xts1[c(TRUE, as.numeric(xts1)[-length(xts1)] != as.numeric(xts1)[-1]), ]

Another option is to use head/tail instead of -length(xts1) and -1 for the subsetting:

xts1[c(TRUE, tail(as.numeric(xts1), -1) != head(as.numeric(xts1), -1)), ]
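The same keep-mask can also be written with diff(), which is zero exactly where a row repeats its predecessor. This is not from the answer above, just an equivalent base-R variant on the example data:

```r
library(xts)

dates <- seq(as.POSIXct("2019-01-01 09:01:00"), length.out = 6, by = "1 min")
xts1 <- xts(data.frame(master_decision = c(3, 2, 2, 3, 3, 2)), order.by = dates)

# diff() compares each value with the one before it; prepend TRUE so the
# first row, which has no predecessor, is always kept.
keep <- c(TRUE, diff(as.numeric(xts1)) != 0)
res <- xts1[keep, ]
```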

Upvotes: 3
