Reputation: 7986
I have a dataframe and I would like to count the number of rows within each group. I regularly use the aggregate function to sum data as follows:
df2 <- aggregate(x ~ Year + Month, data = df1, sum)
Now, I would like to count observations, but can't seem to find the proper argument for FUN. Intuitively, I thought it would be as follows:
df2 <- aggregate(x ~ Year + Month, data = df1, count)
But, no such luck.
Any ideas?
Some toy data:
set.seed(2)
df1 <- data.frame(x = 1:20,
                  Year = sample(2012:2014, 20, replace = TRUE),
                  Month = sample(month.abb[1:3], 20, replace = TRUE))
Upvotes: 159
Views: 370422
Reputation: 1253
You can also use fcount from my package timeplyr, which accepts dplyr syntax but uses collapse under the hood.
library(collapse)
library(timeplyr)
library(dplyr)
library(data.table)
library(microbenchmark)
set.seed(1)
df <- data.frame(x = gl(1000, 100),
                 y = rbinom(100000, 4, .5),
                 z = runif(100000))
dt <- df
mb <-
microbenchmark(
aggregate = aggregate(z ~ x + y, data = df, FUN = length),
count = count(df, x, y),
data.table = setDT(dt)[, .N, by = .(x, y)],
'collapse::fcount' = collapse::fcount(df, x, y),
'timeplyr::fcount1' = timeplyr::fcount(df, x, y),
'timeplyr::fcount2' = timeplyr::fcount(df, .cols = c("x", "y"), order = FALSE)
)
mb
#> Unit: milliseconds
#> expr min lq mean median uq max
#> aggregate 84.0802 105.10615 123.593910 115.97675 134.65225 255.7676
#> count 40.8108 50.82485 60.718189 56.81630 68.85530 97.4791
#> data.table 3.7106 5.07485 6.273698 5.66645 6.44855 20.0465
#> collapse::fcount 1.0118 1.37400 1.915809 1.61105 2.08465 13.9825
#> timeplyr::fcount1 3.0390 3.74840 5.361852 4.56755 5.83405 44.0072
#> timeplyr::fcount2 1.3787 1.98625 2.640338 2.47025 3.03450 8.6333
#> neval
#> 100
#> 100
#> 100
#> 100
#> 100
#> 100
Created on 2023-11-22 with reprex v2.0.2
Upvotes: 0
Reputation: 51894
Two very fast collapse options are GRPN and fcount. fcount is a fast version of dplyr::count and uses the same syntax. You can use add = TRUE to add it as a column (mutate-like):
library(collapse)
fcount(df1, Year, Month) #or df1 %>% fcount(Year, Month)
# Year Month N
# 1 2012 Feb 4
# 2 2014 Jan 3
# 3 2013 Mar 2
# 4 2013 Feb 2
# 5 2012 Jan 2
# 6 2012 Mar 2
# 7 2013 Jan 1
# 8 2014 Feb 3
# 9 2014 Mar 1
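A brief sketch of the add = TRUE variant mentioned above (assuming fcount's add argument, which appends the per-group count as a column to the original data instead of summarizing):

```r
library(collapse)

# mutate-like: every row of df1 keeps its group's count in a new N column
df1_with_counts <- fcount(df1, Year, Month, add = TRUE)
head(df1_with_counts)
```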
GRPN is closer to collapse's original syntax. First, group the data with GRP, then use GRPN. By default, GRPN creates an expanded vector that matches the original data (in dplyr, this would be equivalent to using mutate). Use expand = FALSE to output the summarized vector.
library(collapse)
GRPN(GRP(df1, .c(Year, Month)), expand = FALSE)
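And a sketch of the default expand = TRUE behaviour, which returns one count per row of the original data and so can be attached as a new column:

```r
library(collapse)

# one count per row of df1, suitable for adding as a column
df1$n <- GRPN(GRP(df1, .c(Year, Month)))
head(df1)
```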
Microbenchmark with a 100,000 x 3 data frame and 4997 different groups. collapse::fcount is much faster than any other option.
library(collapse)
library(dplyr)
library(data.table)
library(microbenchmark)
set.seed(1)
df <- data.frame(x = gl(1000, 100),
                 y = rbinom(100000, 4, .5),
                 z = runif(100000))
dt <- df
mb <-
microbenchmark(
aggregate = aggregate(z ~ x + y, data = df, FUN = length),
count = count(df, x, y),
data.table = setDT(dt)[, .N, by = .(x, y)],
'collapse::fnobs' = df %>% fgroup_by(x, y) %>% fsummarise(number = fnobs(z)),
'collapse::GRPN' = GRPN(GRP(df, .c(x, y)), expand = FALSE),
'collapse::fcount' = fcount(df, x, y)
)
# Unit: milliseconds
# expr min lq mean median uq max neval
# aggregate 159.5459 203.87385 227.787186 223.93050 246.36025 335.0302 100
# count 55.1765 63.83560 74.715889 73.60195 79.20170 196.8888 100
# data.table 8.4483 15.57120 18.308277 18.10790 20.65460 31.2666 100
# collapse::fnobs 3.3325 4.16145 5.695979 5.18225 6.27720 22.7697 100
# collapse::GRPN 3.0254 3.80890 4.844727 4.59445 5.50995 13.6649 100
# collapse::fcount 1.2222 1.57395 3.087526 1.89540 2.47955 22.5756 100
Upvotes: 3
Reputation: 23630
The tidyverse/dplyr way:
library(dplyr)
df1 %>% count(Year, Month)
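If you want the per-group count attached to each row instead of a summary table, dplyr also provides add_count, a mutate-style counterpart to count:

```r
library(dplyr)

# keeps all rows of df1 and adds the group size as column "n"
df1 %>% add_count(Year, Month, name = "n")
```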
Upvotes: 108
Reputation: 221
I usually use the table function:
# column lengths adjusted so data.frame() recycling works
df <- data.frame(a = rep(1:4, 3), year = rep(2011:2013, each = 4), month = rep(c(1, 3:5), 3))
new_data <- as.data.frame(table(df[, c("year", "month")]))
Upvotes: 1
Reputation: 886938
Using the collapse package in R:

library(collapse)
library(magrittr)
df1 %>%
  fgroup_by(Year, Month) %>%
  fsummarise(number = fnobs(x)) # fNobs() in older versions of collapse
Upvotes: 5
Reputation: 85
library(tidyverse)
df1 %>%
  group_by(Year, Month) %>%
  summarise(count = n())
Upvotes: 3
Reputation:
If you're trying the aggregate solutions above and you get the error:
invalid type (list) for variable
because you're using date or datetime stamps, try applying as.character to one or both of the grouping variables:
aggregate(x ~ as.character(Year) + Month, data = df, FUN = length)
Upvotes: 0
Reputation: 6860
Create a new variable Count with a value of 1 for each row:
df1["Count"] <- 1
Then aggregate the dataframe, summing by the Count column:
df2 <- aggregate(df1[c("Count")], by=list(Year=df1$Year, Month=df1$Month), FUN=sum, na.rm=TRUE)
Upvotes: 22
Reputation: 24945
The dplyr package does this with its count/tally commands, or with the n() function.
First, some data:
df <- data.frame(x = rep(1:6, rep(c(1, 2, 3), 2)), year = 1993:2004, month = c(1, 1:11))
Now the count:
library(dplyr)
count(df, year, month)
#piping
df %>% count(year, month)
We can also use a slightly longer version with piping and the n() function:
df %>%
group_by(year, month) %>%
summarise(number = n())
or the tally function:
df %>%
group_by(year, month) %>%
tally()
Upvotes: 61
Reputation: 1907
There are plenty of wonderful answers here already, but I wanted to throw in 1 more option for those wanting to add a new column to the original dataset that contains the number of times that row is repeated.
df1$counts <- sapply(X = paste(df1$Year, df1$Month),
FUN = function(x) { sum(paste(df1$Year, df1$Month) == x) })
The same could be accomplished by combining any of the above answers with the merge()
function.
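The merge() route mentioned above could be sketched like this; the column name n is arbitrary, and cbind() inside the formula is just a trick to name the count column:

```r
# count rows per group, naming the result column "n"
counts <- aggregate(cbind(n = x) ~ Year + Month, data = df1, FUN = length)

# join the counts back onto every row of the original data
df1_with_counts <- merge(df1, counts, by = c("Year", "Month"))
```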
Upvotes: 0
Reputation: 2048
You can use the by function, as in by(df1$Year, df1$Month, count) (this count is plyr::count, so the plyr package must be loaded); it will produce a list of the needed aggregations.
The output will look like:
df1$Month: Feb
x freq
1 2012 1
2 2013 1
3 2014 5
---------------------------------------------------------------
df1$Month: Jan
x freq
1 2012 5
2 2013 2
---------------------------------------------------------------
df1$Month: Mar
x freq
1 2012 1
2 2013 3
3 2014 2
Upvotes: 0
Reputation: 28826
A SQL solution using the sqldf package:
library(sqldf)
sqldf("SELECT Year, Month, COUNT(*) as Freq
FROM df1
GROUP BY Year, Month")
Upvotes: 5
Reputation: 36
Considering @Ben's answer, R would throw an error if df1 did not contain an x column. But this can be solved elegantly with paste:
aggregate(paste(Year, Month) ~ Year + Month, data = df1, FUN = NROW)
Similarly, it can be generalized if more than two variables are used in grouping:
aggregate(paste(Year, Month, Day) ~ Year + Month + Day, data = df1, FUN = NROW)
Upvotes: 1
Reputation: 38500
If you want to include 0 counts for month-years that are missing in the data, you can use a little table magic.
data.frame(with(df1, table(Year, Month)))
For example, the toy data.frame in the question, df1, contains no observations of January 2014.
df1
x Year Month
1 1 2012 Feb
2 2 2014 Feb
3 3 2013 Mar
4 4 2012 Jan
5 5 2014 Feb
6 6 2014 Feb
7 7 2012 Jan
8 8 2014 Feb
9 9 2013 Mar
10 10 2013 Jan
11 11 2013 Jan
12 12 2012 Jan
13 13 2014 Mar
14 14 2012 Mar
15 15 2013 Feb
16 16 2014 Feb
17 17 2014 Mar
18 18 2012 Jan
19 19 2013 Mar
20 20 2012 Jan
The base R aggregate function does not return an observation for January 2014.
aggregate(x ~ Year + Month, data = df1, FUN = length)
Year Month x
1 2012 Feb 1
2 2013 Feb 1
3 2014 Feb 5
4 2012 Jan 5
5 2013 Jan 2
6 2012 Mar 1
7 2013 Mar 3
8 2014 Mar 2
If you would like an observation of this month-year with 0 as the count, then the above code will return a data.frame with counts for all month-year combinations:
data.frame(with(df1, table(Year, Month)))
Year Month Freq
1 2012 Feb 1
2 2013 Feb 1
3 2014 Feb 5
4 2012 Jan 5
5 2013 Jan 2
6 2014 Jan 0
7 2012 Mar 1
8 2013 Mar 3
9 2014 Mar 2
Upvotes: 8
Reputation: 335
For my aggregations I usually end up wanting to see the mean and "how big is this group" (a.k.a. length). So this is my handy snippet for those occasions:
agg.mean <- aggregate(columnToMean ~ columnToAggregateOn1*columnToAggregateOn2, yourDataFrame, FUN="mean")
agg.count <- aggregate(columnToMean ~ columnToAggregateOn1*columnToAggregateOn2, yourDataFrame, FUN="length")
aggcount <- agg.count$columnToMean
agg <- cbind(aggcount, agg.mean)
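The two calls above can also be collapsed into a single aggregate() call by returning both statistics from FUN; a sketch on the question's df1 (note the result initially holds a matrix column, which do.call(data.frame, ...) flattens):

```r
# one pass: FUN returns a named vector, so aggregate stores a matrix column
agg <- aggregate(x ~ Year + Month, data = df1,
                 FUN = function(v) c(mean = mean(v), n = length(v)))

# flatten the matrix column into ordinary columns (x.mean, x.n)
agg <- do.call(data.frame, agg)
```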
Upvotes: 4
Reputation: 115382
An old question without a data.table solution. So here goes...

Using .N:
library(data.table)
DT <- data.table(df1)
DT[, .N, by = .(Year, Month)]
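For a mutate-like version that keeps all rows, data.table can also assign the count by reference with := (sketched here on the question's df1):

```r
library(data.table)
DT <- data.table(df1)

# add the per-group count as a new column without collapsing rows
DT[, n := .N, by = .(Year, Month)]
```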
Upvotes: 44
Reputation: 42283
Following @Joshua's suggestion, here's one way you might count the number of observations in your df dataframe where Year = 2007 and Month = "Nov" (assuming they are columns):
nrow(df[df$Year == 2007 & df$Month == "Nov", ])
and with aggregate, following @GregSnow:
aggregate(x ~ Year + Month, data = df, FUN = length)
Upvotes: 86
Reputation: 19454
An alternative to the aggregate() function in this case would be table() with as.data.frame(), which would also indicate which combinations of Year and Month are associated with zero occurrences:
df <- data.frame(x = rep(1:6, rep(c(1, 2, 3), 2)), year = 1993:2004, month = c(1, 1:11))
myAns <- as.data.frame(table(df[, c("year", "month")]))
And without the zero-occurring combinations:
myAns[which(myAns$Freq > 0), ]
Upvotes: 20
Reputation: 49640
The simple option to use with aggregate is the length function, which will give you the length of the vector in the subset. Sometimes a little more robust is to use function(x) sum(!is.na(x)).
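A sketch of both on the question's df1; note that aggregate's formula interface drops NA rows by default (na.action = na.omit), so na.action = na.pass is needed for the NA-robust counter to actually see missing values:

```r
# count all rows per group
aggregate(x ~ Year + Month, data = df1, FUN = length)

# count only non-missing x per group; na.pass keeps NA rows visible to FUN
aggregate(x ~ Year + Month, data = df1,
          FUN = function(x) sum(!is.na(x)), na.action = na.pass)
```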
Upvotes: 25