Reputation: 439
I am performing Fisher's exact test on a large number of contingency tables and saving the p-value for a bioinformatics problem. Some of these contingency tables are large, so I've increased the workspace as much as I can, but when I run the following code I get an error:
result <- fisher.test(data, workspace = 2e9)
LDSTP is too small for this problem. Try increasing the size of the workspace.
If I increase the size of the workspace, I get another error:
result <- fisher.test(data, workspace = 2e10)
cannot allocate memory block of size 134217728Tb
Now I could just simulate p-values:
result <- fisher.test(data, simulate.p.value = TRUE, B = 1e5)
but I'm afraid I'll need a huge number of simulations to get accurate results, since my p-values may be extremely small in some cases.
Thus my question: is there some way to preemptively check whether a contingency table is too complex to calculate exactly? In those cases alone I could switch to using a large number of simulations, with B = 1e10 or something. Or, at the least, could I just skip those tables with a value of NA so that my job actually finishes?
Upvotes: 3
Views: 4948
Reputation: 1860
Maybe you could use tryCatch to get the desired behaviour when fisher.test fails? Something like this, maybe:
# Return the exact p-value, or 'too big' if fisher.test fails
tryCatchFisher <- function(...) {
  tryCatch(fisher.test(...)$p.value,
           error = function(e) 'too big')
}
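For example, a minimal usage sketch (assuming tables is a hypothetical list of your contingency tables; returning NA instead of 'too big' would also work if you just want the job to finish):

# tables is a hypothetical list of contingency table matrices
pvals <- sapply(tables, function(tab) tryCatchFisher(tab, workspace = 2e9))
# Tables flagged 'too big' could then be rerun with
# fisher.test(tab, simulate.p.value = TRUE, B = 1e5), or recorded as NA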
Upvotes: 3