Anna_70

Reputation: 91

P for trend calculation in R on betas in linear regression

I have a question about calculating a p for trend based on betas from linear regression. I have created some data using R:

id <- c(1,2,3,4,5,6,7,8,9,10)
var1 <- c(60,80,90,55,60,61,77,67,88,90)
var2 <- c(55,88,88,55,70,61,80,66,65,92)
var3 <- c(62,88,85,56,68,62,89,62,70,99)
outcome <- c(1,5,3,7,3,9,6,3,2,6)
dat <- data.frame(id, var1, var2, var3, outcome)
dat

mod1 <- lm(outcome ~ var1, data = dat)
summary(mod1)           
# Beta =  -0.03100

mod2 <- lm(outcome ~ var2, data = dat)
summary(mod2)     
# Beta =  0.01304

mod3 <- lm(outcome ~ var3, data = dat)
summary(mod3) 
# Beta =  0.01544

So based on the betas, it looks like there is some kind of trend. I know I can check this by calculating a p for trend. However, I am very new to statistics and I do not know how to calculate this p for trend. Can somebody help me by giving me a push in the right direction?
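
For reference, one way to line the three slopes up side by side is sketched below; it just uses coef(), which returns the named coefficient vector of each fitted model, and the values match the summaries above:

betas <- c(coef(mod1)["var1"], coef(mod2)["var2"], coef(mod3)["var3"])
betas
# roughly -0.031, 0.013, 0.015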

Upvotes: 1

Views: 3233

Answers (1)

bobbel

Reputation: 2031

If you call summary() on a model fitted with lm(), the output normally includes the p-values for the predictors in the model.

For example, when I run the command summary(mod1) I get the following output:

Call:
lm(formula = outcome ~ var1, data = dat)

Residuals:
    Min      1Q  Median      3Q     Max 
-3.8968 -1.8426 -0.1218  1.8687  4.1342 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  6.75690    4.68435   1.442    0.187
var1        -0.03100    0.06333  -0.490    0.638

Residual standard error: 2.619 on 8 degrees of freedom
Multiple R-squared:  0.02908,   Adjusted R-squared:  -0.09228 
F-statistic: 0.2396 on 1 and 8 DF,  p-value: 0.6376

So the rightmost column (Pr(>|t|)) gives the p-value for each predictor, including the intercept. It shows that the estimate for var1 is actually not significantly different from 0, with p = 0.638 (and the same holds for var2 and var3).
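
If you would rather extract those p-values in code than read them off the printed summary, the coefficient table is also available as a matrix via coef() on the summary object. A small sketch (the column name "Pr(>|t|)" is exactly as printed above):

coef(summary(mod1))                         # matrix of Estimate, Std. Error, t value, Pr(>|t|)
coef(summary(mod1))["var1", "Pr(>|t|)"]     # p-value for the var1 slope, about 0.638 as above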

Upvotes: 1
