JoeN

Reputation: 57

How do I extract estimates and standard errors as a measure of linear increment from an lm model in R?

Let's say I have data:

data = data.frame(xdata = 1:10, ydata = 6:15)

I look at the data

data

   xdata ydata
1      1     6
2      2     7
3      3     8
4      4     9
5      5    10
6      6    11
7      7    12
8      8    13
9      9    14
10    10    15

Now I want to add a third column to the data that holds the increment/estimate, and a fourth column that holds the standard errors. To do this, I estimate the increment for each row of the data by fitting a linear model and taking the slope/estimate together with its associated standard error. So I fit model_1:

model_1 = lm(ydata~xdata,data = data)
out = summary(model_1)
out

It gives me:

Call:
lm(formula = ydata ~ xdata, data = data)

Residuals:
       Min         1Q     Median         3Q        Max 
-5.661e-16 -1.157e-16  4.273e-17  2.153e-16  4.167e-16 

Coefficients:
             Estimate Std. Error   t value Pr(>|t|)    
(Intercept) 5.000e+00  2.458e-16 2.035e+16   <2e-16 ***
xdata       1.000e+00  3.961e-17 2.525e+16   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.598e-16 on 8 degrees of freedom
Multiple R-squared:      1, Adjusted R-squared:      1 
F-statistic: 6.374e+32 on 1 and 8 DF,  p-value: < 2.2e-16

To extract the estimate, I can simply do:

out$coefficients[2,1]

To extract the standard error, I can simply do:

out$coefficients[2,2]

But my interest is in an output that shows an estimate and a standard error for each row, so that I end up with 10 estimates and 10 standard errors. Is there a way to do this?

Many thanks!

Upvotes: 1

Views: 970

Answers (1)

dc37

Reputation: 16178

Basically, your lm model has the form y = intercept + x * slope, so you can calculate the estimate for each row from the coefficients returned by summary(lm(...)).

So, if you take the following example:

set.seed(123)
vector1 = rnorm(100, mean = 4)
vector2 = rnorm(100, mean = 1)
dat = data.frame(vector1,vector2)
model_dat = lm(vector2 ~ vector1, data = dat)
Model = summary(model_dat)

And now, you can calculate the estimate:

dat$estimate = dat$vector1 * Model$coefficients[2,1] + Model$coefficients[1,1]
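As a quick check, fitted() returns the same per-row estimates straight from the model object, without typing the formula by hand (the estimate_check column below is only there for the comparison):

# Fitted values from the model match the manual intercept + slope * x calculation
dat$estimate_check = as.numeric(fitted(model_dat))
all.equal(dat$estimate, dat$estimate_check)  # TRUE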

And then for the standard error, you can use predict.lm with the argument se.fit = TRUE:

dat$SE = predict.lm(model_dat, se.fit = TRUE, level = 0.95)$se.fit

So, you get the following dataset:

> head(dat)
   vector1    vector2  estimate         SE
1 3.439524 0.28959344 0.9266060 0.11942447
2 3.769823 1.25688371 0.9092747 0.10294104
3 5.558708 0.75330812 0.8154090 0.18452625
4 4.070508 0.65245740 0.8934973 0.09709476
5 4.129288 0.04838143 0.8904130 0.09716038
6 5.715065 0.95497228 0.8072047 0.19893259
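Applied to the original 10-row example from the question, the same idea gives the two extra columns directly (a minimal sketch reusing the data and model_1 objects defined above):

pred = predict(model_1, se.fit = TRUE)
data$estimate = pred$fit     # fitted value for each row
data$SE = pred$se.fit        # standard error of each fitted value
head(data)

(With that toy data the fit is exact, so the standard errors come out as essentially zero.)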

You can check the estimate and SE columns in dat by first looking at the plot that stat_smooth produces:

library(ggplot2)
ggplot(dat, aes(x = vector1, y = vector2)) + geom_point() + stat_smooth(method = "lm", se = TRUE)

And you get this plot: [plot: points with the lm fit and its confidence ribbon]

And if you now use the estimate and SE columns from your dat:

ggplot(dat, aes(x = vector1, y = vector2)) + geom_point() +
  geom_line(aes(x = vector1, y = estimate), color = "red") +
  geom_line(aes(x = vector1, y = estimate + SE)) +
  geom_line(aes(x = vector1, y = estimate - SE))

You get almost the same plot: [plot: points with the red fitted line and the ±SE lines]
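If you prefer a shaded band like the one stat_smooth draws, geom_ribbon can do the same job with the estimate and SE columns (just an alternative way of drawing it):

ggplot(dat, aes(x = vector1, y = vector2)) +
  geom_point() +
  geom_ribbon(aes(ymin = estimate - SE, ymax = estimate + SE), alpha = 0.3) +  # ±1 SE band
  geom_line(aes(y = estimate), color = "red")

Note that stat_smooth's default ribbon is a 95% confidence band (roughly estimate ± 2*SE), so it is a bit wider than the ±1 SE band drawn here.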

Hope that answers your question.

Upvotes: 1
