Reputation: 4102
I have the following dataframe:
df1 <- structure(list(
  Site = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
                     1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L),
                   .Label = c("ALT01"), class = "factor"),
  Nets = 1:18,
  Cumulative.spp = c(12L, 13L, 15L, 17L, 17L, 17L, 17L, 19L, 19L,
                     19L, 19L, 20L, 22L, 22L, 22L, 22L, 22L, 22L)),
  .Names = c("Site", "Nets", "Cumulative.spp"),
  row.names = c(NA, 18L), class = "data.frame")
and I am trying to get a ggplot2 plot showing the geom_smooth-style fit (curve plus confidence band) of this model:
Model1<-nls(Cumulative.spp ~ SSasympOff(Nets, A, lrc, c0), data = df1)
I tried two methods.
Method 1
Typically, if I had a model like this:
Model2 <- lm(Cumulative.spp ~ I(log(Nets)), data = df1)
I would do this:
library(ggplot2)
ggplot(df1, aes(x=Nets, y = Cumulative.spp)) + geom_point() + geom_smooth(method="lm", formula=y~log(x), fill="blue", fullrange=T)
but when I try to do the same with the asymptotic model it does not work:
ggplot(df1, aes(x=Nets, y = Cumulative.spp)) + geom_point() + geom_smooth(method="nls", formula=y~SSasympOff(x, A, lrc, c0), color="blue", fullrange=T)
Instead I got this error:
Warning message:
Computation failed in `stat_smooth()`:
$ operator is invalid for atomic vectors
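I think the problem is that predict.nls() does not return standard errors, so stat_smooth() cannot build the confidence ribbon. If I turn the ribbon off the curve itself draws (just a sketch of that workaround):
# draws the fitted asymptotic curve, but with se = FALSE there is no confidence ribbon,
# because predict.nls() does not provide standard errors
ggplot(df1, aes(x = Nets, y = Cumulative.spp)) +
  geom_point() +
  geom_smooth(method = "nls",
              formula = y ~ SSasympOff(x, A, lrc, c0),
              se = FALSE, color = "blue", fullrange = TRUE)
but that still leaves me without the confidence interval, which is what I actually need.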
Method 2
I tried predicting over the original data frame to get a confidence interval, drawing the predicted values with geom_line and the interval with geom_ribbon, but when I run
predict(Model1, df1, interval = "confidence")
I do not get the confidence interval, only the predicted values.
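As far as I can tell, predict.nls() just ignores the interval argument and returns the fitted values, which is easy to check (a small illustration):
p <- predict(Model1, df1, interval = "confidence")
str(p)
# a plain numeric vector of length 18: no fit / lwr / upr columns,
# because predict.nls() silently ignores the interval argument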
Any help would be appreciated.
Upvotes: 1
Views: 2338
Reputation: 4102
I found this way to do it, which I think is a bit simpler than bouncyball's, but I will let all of you judge which answer is better; I will upvote bouncyball's answer either way.
I found this old post about it, but I couldn't find the as.lm function in nls2. Then I found this link with the function, and I decided to use that as.lm.nls function:
ggplot(df1, aes(x = Nets, y = Cumulative.spp)) +
  geom_point() +
  geom_line(y = predict(as.lm.nls(Model1), interval = "confidence")[, 1]) +
  geom_ribbon(ymax = predict(as.lm.nls(Model1), interval = "confidence")[, 3],
              ymin = predict(as.lm.nls(Model1), interval = "confidence")[, 2],
              fill = "blue", alpha = 0.5)
and I got the result (the fitted curve with its confidence band) faster than with the bootstrap approach.
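As a side note, the same plot can be built by binding the interval columns onto df1 first and mapping them inside aes(), which avoids repeating the predict() call three times (a sketch, still assuming the as.lm.nls() function from that link has been sourced):
ci <- as.data.frame(predict(as.lm.nls(Model1), interval = "confidence"))  # columns fit, lwr, upr
df_ci <- cbind(df1, ci)

ggplot(df_ci, aes(x = Nets, y = Cumulative.spp)) +
  geom_point() +
  geom_line(aes(y = fit)) +
  geom_ribbon(aes(ymin = lwr, ymax = upr), fill = "blue", alpha = 0.5)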
Upvotes: 0
Reputation: 10771
I thought, since I suggested a bootstrap method, I might as well demonstrate it. In this case we'll be bootstrapping the residuals (see Wikipedia for more information). I'm not too familiar with using nls, so someone may come along with a (valid) theoretical objection.
B <- 2500 # bootstrap iterations, big number
pred_mat <- matrix(0, nrow = 18, ncol = B) # initialize matrix
# extract residuals and predictions from original model
resids <- residuals(Model1)
preds <- predict(Model1)
df1$Pred <- preds
for(i in 1:B){
  # bootstrapped dependent variable
  new_y <- preds + sample(resids, replace = TRUE)
  df1$NewY <- new_y
  # fit model
  Model_Boot <- nls(NewY ~ SSasympOff(Nets, A, lrc, c0), data = df1)
  # extract predictions
  pred_mat[,i] <- predict(Model_Boot)
}
# add 2.5% and 97.5% percentile intervals to df1
df1 <- cbind(df1, t(apply(pred_mat, 1, FUN = function(x) quantile(x, c(.025, .975)))))
# rename appropriately
names(df1)[6:7] <- c('lower','upper')
# make plot
ggplot(df1, aes(x = Nets)) +
  geom_point(aes(y = Cumulative.spp)) +
  geom_line(aes(y = Pred)) +
  geom_ribbon(aes(ymin = lower, ymax = upper),
              alpha = .2, fill = 'blue')
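One small note: because the residuals are resampled, the band will change slightly from run to run, so calling set.seed() before the loop makes it reproducible (the seed value here is arbitrary):
set.seed(123)  # any fixed value; run before the bootstrap loop for reproducible intervals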
Upvotes: 1