When I run the unit tests locally for caretEnsemble, I get some test skips. Three of them are expected, two of them are not:
Rscript -e "Sys.setenv(NOT_CRAN='true'); devtools::test(stop_on_failure=TRUE, stop_on_warning=TRUE)"
ℹ Testing caretEnsemble
✔ | F W S OK | Context
✔ | 3 0 | Test skips are working correctly
✔ | 279 | Ancillary caretList functions and errors [1.8s]
✔ |          4 | We can fit models with a mix of methodList and tuneList [2.6s]
✔ | 40 | We can handle different CV methods [11.9s]
✔ | 31 | Classification models [9.6s]
✔ | 4 | User tuneTest parameters are respected and model is ensembled [1.4s]
✔ | 4 | Formula interface for caretList works [2.0s]
✔ | 4 | Regression models [1.4s]
✔ | 1 10 | Does stacking and prediction work? [2.3s]
✔ | 12 | Prediction errors for caretStack work as expected [1.8s]
✔ | 25 | Do classifier predictions use the correct target classes? [1.9s]
✔ | 1 | Test errors and warnings
✔ | 8 | Test metric and residual extraction [1.4s]
✔ | 20 | Does ensembling and prediction work? [1.3s]
✔ | 3 | Does ensembling work with models with differing predictors
✔ | 16 | Does ensemble prediction work with new data [1.2s]
✔ | 15 | Do ensembles of custom models work?
✔ | 1 18 | Does variable importance work? [12.2s]
✔ | 16 | Do metric extraction functions work as expected [1.1s]
✔ | 18 | Testing caretEnsemble generics [2.3s]
✔ | 13 | Residual extraction [1.2s]
✔ | 22 | Are ensembles construct accurately [3.8s]
✔ | 10 | Do the helper functions work for regression objects? [1.4s]
✔ | 12 | Do the helper functions work for classification objects?
✔ | 22 | Test weighted standard deviations [1.8s]
✔ | 3 | Parallelization works [1.2s]
✔ | 12 | Ancillary caretList S3 Generic Functions Extensions [3.3s]
How do I make testthat or devtools tell me WHICH tests it is skipping (and why)?
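One thing I tried (a sketch; I'm assuming here that devtools::test() returns a testthat results object that has an as.data.frame method with a skipped column, which seems to be the case in the versions I've looked at):

```r
# Capture the results object instead of discarding it
res <- devtools::test()

# Flatten to a data frame: one row per test, with logical
# columns such as 'skipped', 'failed', and 'warning'
df <- as.data.frame(res)

# Show which tests were skipped and which files they live in
df[df$skipped, c("file", "context", "test")]
```

This tells me which tests were skipped, but not why, which is the part I'd still like an answer to.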
(edit) follow up question: I used a binary search to find one of the skipped tests, and it is:
test_that("caretStack plots", {
  test_plot_file <- "caretEnsemble_test_plots.png"
  ens.reg <- caretStack(
    models.reg,
    method = "gbm", tuneLength = 2, verbose = FALSE,
    trControl = trainControl(number = 2, allowParallel = FALSE)
  )
  png(filename = test_plot_file)
  plot(ens.reg)
  dotplot(ens.reg, metric = "RMSE")
  dev.off()
  unlink(test_plot_file)
})
There's no skip statement in it at all, so why would testthat skip this test?
(edit 2): OK, testthat skips tests that contain no expectations. What's the best way to test that a plot is correctly generated? Capture the file and check that it exists?
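For example, something like this sketch, which writes the plot to a temp file and asserts on the result (expect_true() and expect_gt() are standard testthat expectations; file.size() is base R; ens.reg is the stacked model from the test above):

```r
test_that("caretStack plots", {
  test_plot_file <- tempfile(fileext = ".png")
  on.exit(unlink(test_plot_file))

  png(filename = test_plot_file)
  plot(ens.reg)
  dev.off()

  # The file should exist and contain actual plot data,
  # and these expectations keep testthat from skipping the test
  expect_true(file.exists(test_plot_file))
  expect_gt(file.size(test_plot_file), 0)
})
```

Is that the idiomatic approach, or is there a better way to assert a plot was drawn?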