Reputation: 1673
What I'd like to do:
I would like to use r-exams in the following procedure:
exams2pdf(...)
nops_eval(...)
My Question:
Is calling the function nops_eval() the preferred way to manually grade questions in r-exams? If not, which way is preferred?
What I have tried:
I'm aware of the exams2nops() function, and I know that it writes an .rds file where the correct answers are stored. Hence, I basically have what I need. However, I found that procedure not very straightforward, as the correct answers are buried rather deeply inside the .rds file.
Upvotes: 4
Views: 375
Reputation: 17183
You are right that there is no readily available system for administering/grading exams outside of a standard learning management system (LMS) like Moodle, Canvas, etc. R/exams does provide some building blocks for the grading, though, especially exams_eval(). This can be complemented with tools like Google Forms etc. Below I start with the "hard facts" regarding exams_eval(), even though this is a bit technical, but then I also provide some comments regarding such approaches.
exams_eval()
Let us consider a concrete example:
library("exams")
eval <- exams_eval(partial = TRUE, negative = FALSE, rule = "false2")
indicating that you want partial credits for multiple-choice exercises but that the overall points per item must not become negative. A correctly ticked box then yields 1/#correct points, while 1/#false points are deducted for an incorrectly ticked box. The only exception is when there is only one false item (a single wrong tick would then cancel all points); in that case 1/2 is deducted instead.
The resulting object eval is a list with the input parameters (partial, negative, rule) and three functions: checkanswer(), pointvec(), and pointsum().
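To see this at a glance, you can simply inspect the returned list (plain base R, nothing specific to R/exams):
## top-level components: the three parameters plus the three functions
str(eval, max.level = 1)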
. Imagine that you have the correct answer pattern
cor <- "10100"
The associated points for correctly and incorrectly ticked boxes would be:
eval$pointvec(cor)
## pos neg
## 0.5000000 -0.3333333
Thus, for the following answer pattern you get:
ans <- "11100"
eval$checkanswer(cor, ans)
## [1] 1 -1 1 0 0
eval$pointsum(cor, ans)
## [1] 0.6666667
The latter (0.5 - 1/3 + 0.5 = 2/3) would still need to be multiplied by the overall points assigned to that exercise. For numeric answers you can only get 100% or 0%:
eval$pointsum(1.23, 1.25, tolerance = 0.05)
## [1] 1
eval$pointsum(1.23, 1.25, tolerance = 0.01)
## [1] 0
Similarly, string answers are either correct or false:
eval$pointsum("foo", "foo")
## [1] 1
eval$pointsum("foo", "bar")
## [1] 0
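Putting these building blocks together, a small sketch of a manual grading step could look as follows. Note that this is just an illustration, not part of R/exams: the solutions, answers, and points vectors are made up, with one multiple-choice pattern per exercise for a single student.
## made-up illustration data: correct patterns, one student's answers,
## and the maximum points per exercise
solutions <- c("10100", "01000")
answers   <- c("11100", "01000")
points    <- c(2, 3)
## fraction of points achieved per exercise via eval$pointsum() from above
frac <- mapply(eval$pointsum, solutions, answers)
## weighted total for this student: 2/3 * 2 + 1 * 3 = 4.3333
sum(frac * points)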
To obtain the relevant pieces of information for a given exercise, you can access the metainformation from the nested list that all exams2xyz() interfaces return:
x <- exams2xyz(...)
For example, you can then extract the metainfo for the i-th random replication of the j-th exercise as:
x[[i]][[j]]$metainfo
This contains the correct $solution, the $type, and also the $tolerance etc. Sure, this is somewhat long and inconvenient to type interactively, but it should be easy enough to cycle through programmatically (see the sketch below). This is, for example, what nops_eval() does, based on the .rds file containing exactly the information in x.
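For instance, a minimal sketch of such a programmatic cycle could collect the grading-relevant metainformation into a data frame. (This assumes x was created by one of the exams2xyz() interfaces as above; the selected columns are just an illustration.)
## cycle through all replications i and exercises j in the nested list x
meta <- do.call("rbind", lapply(seq_along(x), function(i) {
  do.call("rbind", lapply(seq_along(x[[i]]), function(j) {
    m <- x[[i]][[j]]$metainfo
    data.frame(
      replication = i,
      exercise = j,
      type = m$type,
      solution = paste(m$solution, collapse = "|"),
      stringsAsFactors = FALSE
    )
  }))
}))
Each row then pairs an exercise with its correct solution, which can be fed to eval$pointsum() together with the corresponding student answers.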
My usual advice here is to try to leverage your university's services (if available, of course). Yes, there can be problems with bandwidth/stability etc., but you can have all of the same problems if you're running your own system (been there, done that). Specifically, a discussion of Moodle vs. PDF exams mailed around is available here:
If I were to provide my exams outside of an LMS, though, I would use HTML and not PDF. In HTML it is much easier to embed additional information (data, links, etc.) than in PDF. Also, HTML can be viewed much more easily on mobile devices.
For collecting the answers, some R/exams users employ Google Forms, see e.g.: https://R-Forge.R-project.org/forum/forum.php?thread_id=34076&forum_id=4377&group_id=1337. Others have been interested in using learnr or webex for that: http://www.R-exams.org/general/distancelearning/#going-forward.
Regarding privacy, though, I would be very surprised if any of these are better than using the university's LMS.
Upvotes: 5