Reputation: 18331
I have a list with hundreds of columns and rows. What I'm doing is looping through nearly every possible pair of columns and taking the difference between the two: for example, the 1st and 2nd column, the 1st and 3rd column, ..., the 1st and 500th column, ..., and finally the 499th and 500th column. Once I have those differences I compute some descriptive statistics (e.g. mean, standard deviation, kurtosis, skewness) on each for output. I know I can use lapply to calculate those statistics for each column individually, but sd(x) - sd(y) != sd(x - y), so that doesn't really cut down on my looping. I can use the fact that mean(x) - mean(y) = mean(x - y), but that's the only statistic where this property holds.
Here's some pseudo code that I have:
for (n1 in 1:ncol(bigdata)) {
  for (n2 in n1:ncol(bigdata)) {
    temp <- bigdata[, n1] - bigdata[, n2]
    # abc indexes the output slot for this column pair
    results[[abc]] <- c(maxdrawdown(temp), mean(temp), skewness(temp),
                        kurtosis(temp), length(temp), sd(temp),
                        median(temp), downsidedeviation(temp))
  }
}
Doing it this way can take literally days, so I'm looking for some improvements. I'm already using the compiler package with enableJIT(3), which does make it noticeably faster. I had a couple of other ideas, and any insights would be helpful. One is to try the snowfall package (I'm still trying to get my head around how to implement it), with the thought that one core could compute skew and kurtosis while another computes the other statistics. The other idea is to create big chunks of temp (e.g. 1-2, 1-3, 1-4) as another data.frame (or list) so I can run lapply against it and knock out many iterations at once (a sketch of this idea is below). Would this make much of a difference? Is there anything else I can do that I'm not even thinking of?
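Roughly what I have in mind for the second idea, as a minimal sketch (here pairs, diffs, and calc_stats are made-up names, and I'm assuming bigdata contains only numeric columns):
pairs <- combn(ncol(bigdata), 2)   # 2 x nPairs matrix of column index pairs
diffs <- lapply(seq_len(ncol(pairs)), function(k) {
  bigdata[, pairs[1, k]] - bigdata[, pairs[2, k]]
})
calc_stats <- function(v) {
  c(mean = mean(v), sd = sd(v), median = median(v), n = length(v))
}
results <- lapply(diffs, calc_stats)   # or sapply() to get a matrix back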
Upvotes: 1
Views: 217
Reputation: 176638
A reproducible example would really help, because the way you describe your problem is confusing (e.g. lists don't have rows/columns). My guess is that bigdata and results are data.frames, in which case converting each of them to a matrix will make your loops appreciably faster.
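For instance, a minimal sketch of that conversion (assuming bigdata contains only numeric columns, since as.matrix() would otherwise coerce everything to character; bigmat and resmat are just placeholder names):
bigmat <- as.matrix(bigdata)   # one-time conversion
resmat <- matrix(NA_real_, ncol(bigmat), ncol(bigmat))
for (n1 in 1:ncol(bigmat)) {
  for (n2 in n1:ncol(bigmat)) {
    temp <- bigmat[, n1] - bigmat[, n2]   # matrix indexing skips data.frame dispatch
    resmat[n1, n2] <- sd(temp)            # one statistic shown; the others work the same way
  }
}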
Upvotes: 1
Reputation: 113
I don't know how much faster it will be, but the following might make the code a bit easier to read, and it should speed things up somewhat as well because you've eliminated the explicit for() loops. Try using expand.grid(), which I tend to use less often than I probably should.
For instance:
nC <- 3  # number of columns
nR <- 4  # number of rows
indices <- expand.grid(seq_len(nC), seq_len(nC))
# Now you can use apply cleanly
yourResult <- apply(indices, 1,
  function(x) {
    c1 <- x[1]; c2 <- x[2]
    doYourThing(bigData[, c1], bigData[, c2])
  }
)
Well, you get the idea. :-)
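And if you only want each unordered pair once, as in your original loop (n2 running from n1 upward), one variant would be to filter the grid first; a sketch, using the same illustrative names as above:
indices <- subset(expand.grid(c1 = seq_len(nC), c2 = seq_len(nC)), c1 <= c2)
yourResult <- apply(indices, 1, function(x) {
  doYourThing(bigData[, x[1]], bigData[, x[2]])
})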
Upvotes: 0