user969113

Reputation: 2429

post-hoc tests: pairwise.t.test versus TukeyHSD test

I created the following example to compare the two functions pairwise.t.test() and TukeyHSD():

x <- c(10, 11, 15, 8, 16, 12, 20)
y <- c(10, 14, 18, 25, 28, 30, 35)
z <- c(14, 19, 35, 18, 17, 16, 25)

d <- c(x, y, z)                                           # combined responses
f <- as.factor(c(rep("a", 7), rep("b", 7), rep("c", 7)))  # group labels

pairwise.t.test(d, f)   # pairwise t-tests with adjusted p-values
TukeyHSD(aov(d ~ f))    # Tukey's HSD on the fitted ANOVA

Is it normal that the p-values differ this much between the two tests? Is there a way to adjust the parameters of one or both tests so that the p-values agree more closely?

Also, neither function seems to have a var.equal parameter the way t.test() does. Is that really true?

Upvotes: 0

Views: 4561

Answers (2)

dcarlson

Reputation: 11056

pairwise.t.test() adjusts the p-values for multiple comparisons according to one of several methods (see ?p.adjust for details). To get separate standard deviation estimates instead of a pooled standard deviation, use the pool.sd = FALSE argument. There is no comparable option for the analysis of variance, which is what you are passing to TukeyHSD().
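For example, a minimal sketch reusing d and f from the question. If I read ?pairwise.t.test correctly, with pool.sd = FALSE each comparison is computed by t.test(), and additional arguments are passed through to it, so var.equal can be supplied that way:

# Separate SDs: each pair is compared with t.test() (Welch by default)
pairwise.t.test(d, f, pool.sd = FALSE)

# Extra arguments are forwarded to t.test(), so var.equal works here
pairwise.t.test(d, f, pool.sd = FALSE, var.equal = TRUE)

# A different p-value adjustment method (see ?p.adjust for the full list)
pairwise.t.test(d, f, p.adjust.method = "bonferroni")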

Upvotes: 2

seancarmody

Reputation: 6290

From the help page for TukeyHSD:

When comparing the means for the levels of a factor in an analysis of variance, a simple comparison using t-tests will inflate the probability of declaring a significant difference when it is not in fact present. This is because the intervals are calculated with a given coverage probability for each interval but the interpretation of the coverage is usually with respect to the entire family of intervals.

The TukeyHSD test is a different test and, based on the comments above, I would generally expect it to give higher p-values. Having said that, for the data you supplied the p-values don't look dramatically different for inference purposes.
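One way to see the inflation the help page describes is to switch off the adjustment in pairwise.t.test() and compare against Tukey's HSD, again using d and f from the question:

# Unadjusted t-tests: the "simple comparison" the help page warns about
pairwise.t.test(d, f, p.adjust.method = "none")

# Tukey's HSD controls the error rate over the whole family of comparisons
TukeyHSD(aov(d ~ f))

The unadjusted p-values should generally be smaller than Tukey's, which is exactly the inflation being corrected for.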

Upvotes: 2
