Reputation: 9803
library(Hmisc)
#10% difference
n1 = 30
n2 = 30
n = 60
p1 = seq(0.1, 0.9, 0.1)
p2 = p1 + 0.1
bpower(p1, p2, n, n1, n2, alpha = 0.05)
Power1 Power2 Power3 Power4 Power5 Power6 Power7 Power8 Power9
0.9997976 0.9992461 0.9933829 0.9670958 0.8995984 0.7799309 0.6141349 0.4211642 0.2252629
#20% difference
n1 = 30
n2 = 30
n = 60
p1 = seq(0.1, 0.8, 0.1)
p2 = p1 + 0.2
bpower(p1, p2, n, n1, n2, alpha = 0.05)
Power1 Power2 Power3 Power4 Power5 Power6 Power7 Power8
0.9997976 0.9992461 0.9933829 0.9670958 0.8995984 0.7799309 0.6141349 0.4211642
Here I'm using the bpower function in Hmisc to calculate the power of a two-sample binomial test. My hypotheses are H0: p1 = p2 vs. H1: p1 != p2. In the first case the proportions differ by 0.1 (i.e. p2 - p1 = 0.1), and in the second case they differ by 0.2 (p2 - p1 = 0.2). However, the power values I get for the two cases are exactly the same. Did I make a mistake in my code?
Upvotes: 0
Views: 392
Reputation: 206536
The signature for the function is
args(bpower)
# function (p1, p2, odds.ratio, percent.reduction, n, n1, n2, alpha = 0.05)
so if the arguments are unnamed, the third one is interpreted as the odds ratio rather than the total sample size. So yes, you made a mistake in your code.
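To make the mismatch concrete: alpha is matched by name and R fills the remaining formals positionally, so your original call is effectively parsed as the following (just an illustration of where each value lands, not a call you would actually make):
# positional matching: n -> odds.ratio, n1 -> percent.reduction, n2 -> n
bpower(p1 = p1, p2 = p2, odds.ratio = n, percent.reduction = n1, n = n2, alpha = 0.05)
That is, an odds ratio of 60 is supplied, from which bpower presumably derives p2 itself and ignores the p2 you passed in; that would explain why both of your runs return identical values. Naming the arguments explicitly avoids the problem: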
bpower(p1, p2, n=n, n1=n1, n2=n2, alpha = 0.05)
Using this, I get
# diff 0.1
Power1 Power2 Power3 Power4 ...
0.1893951 0.1437292 0.1268406 0.1204777 ...
and
# diff 0.2
Power1 Power2 Power3 Power4 ...
0.4903583 0.3912451 0.3495370 0.3376908
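If you want an independent check on these numbers, base R's power.prop.test (which takes the per-group sample size n) should land in the same ballpark; the exact figures will differ slightly because the two functions use different normal approximations. A minimal sketch:
# single pair: 30 per group, proportions 0.1 vs 0.2; $power extracts the estimate
power.prop.test(n = 30, p1 = 0.1, p2 = 0.2)$power
# or over the whole vectors of proportions
sapply(seq_along(p1), function(i) power.prop.test(n = 30, p1 = p1[i], p2 = p2[i])$power)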
Upvotes: 1