Reputation: 53
I have two dataframes with multiple variables. Each dataframe belongs to one rater. I would like to calculate the interrater reliability (Cohen's Kappa) between the two dataframes.
For example:
Rater1 <- matrix(c(1,0,0, 1,0,0, 0,0,1), ncol = 3, byrow = TRUE)
colnames(Rater1) <- c("V1", "V2", "V3")
Rater2 <- matrix(c(0,1,0, 1,0,1, 0,0,1), ncol = 3, byrow = TRUE)
colnames(Rater2) <- c("V1", "V2", "V3")
It must have something to do with the `irr` package, but I really can't figure out how. Any pointer in the right direction is very much appreciated.
Upvotes: 1
Views: 1324
Reputation: 328
Using the data you provided, you can calculate the kappa for each variable with the following code:
for (dimension in 1:3) {
  v <- paste0("V", dimension)
  print(irr::kappa2(cbind(Rater1[, v], Rater2[, v])))
}
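If you would rather collect the per-variable kappas in a named vector than print them one at a time, a variant of the same loop (a sketch, assuming the `Rater1`/`Rater2` matrices defined in the question) could be:

```r
# Sketch: gather each variable's kappa into a named vector.
# kappa2() returns an "irrlist" object; its $value field is the kappa itself.
kappas <- sapply(colnames(Rater1), function(v) {
  irr::kappa2(cbind(Rater1[, v], Rater2[, v]))$value
})
kappas
```

Note that V3 is identical for both raters, so its kappa comes out as 1 (perfect agreement).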
You said you wanted the kappa between the two data frames, however, which means we need to collapse each data frame into a single vector. All you need to do is broaden your definition of a "subject" to mean any single thing being rated, i.e. each cell of the matrix. You can safely ignore the fact that several ratings come from the same row, because you are interested in the agreement between the raters (who are independent), not in the relationships among the features of the things being rated (which are not independent).
# matrix() flattens each rating matrix into a single column (column-major),
# so this computes one kappa over all nine ratings at once
irr::kappa2(cbind(matrix(Rater1), matrix(Rater2)))
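To see what `kappa2` is doing on the collapsed data, you can reproduce the unweighted kappa by hand from observed and chance agreement (a base-R sketch, assuming the matrices above):

```r
# Flatten both matrices into vectors of all nine ratings (column-major order)
r1 <- as.vector(Rater1)
r2 <- as.vector(Rater2)

# Observed agreement: proportion of ratings where the two raters match
po <- mean(r1 == r2)

# Chance agreement: for each category, multiply the two raters'
# marginal proportions, then sum across categories
cats <- union(r1, r2)
pe <- sum(sapply(cats, function(k) mean(r1 == k) * mean(r2 == k)))

# Cohen's kappa
(po - pe) / (1 - pe)
```

With the example data this works out to (6/9 - 42/81) / (1 - 42/81) = 12/39, about 0.308, which should match the `irr::kappa2` result on the flattened vectors.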
Upvotes: 2