Since this is an educational game, we need to export what the player has been doing with it. In user testing we want to see whether reaction times look plausible, whether players are doing things that make sense, whether they're acting randomly, etc.
Columns we need:
- studentID [have that linked to school, grade, etc.]
- clicks on words
- movement of words: error or correct? [this part may need some finessing in how the output file is structured]
- what order it was done in within the scene [post-process the things that came before it too?]
- version number for game / scoring scene type
- what kind of device they’re using [screen size, etc, touch screen vs. mouse]
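To make the columns above concrete, here is a minimal sketch of one exported event row. This is just an illustration of a flat (CSV / JSON-lines style) record, not a final schema; every field name here is a placeholder I made up:

```python
# Hypothetical sketch of one event row in the export.
# All field names are placeholders, not a final schema.
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class GameEvent:
    student_id: str          # linked elsewhere to school, grade, etc.
    event_type: str          # e.g. "click" or "move"
    word: str                # the word clicked or moved
    correct: Optional[bool]  # for moves: was the placement correct?
    scene_order: int         # position of this action within the scene
    game_version: str        # game build / scoring-scene type
    device: str              # e.g. "touch" vs. "mouse"
    screen_size: str         # e.g. "1280x800"
    timestamp: float         # seconds since session start

event = GameEvent("S123", "move", "cat", True, 1, "0.9.2-sortA",
                  "touch", "1280x800", 12.4)
print(asdict(event)["correct"])  # True
```

Storing raw timestamps (rather than derived reaction times) keeps the post-processing options open, e.g. for the attempt numbering and learning-curve work below.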
Transactional data – the activity log. Map the activity log to data that exists elsewhere on curric (domain model), map it to demographics (enriching the student model), and map it to teaching strategies (pedagogical model).
- multiple attempts on the same thing? Infer them from timestamps; number the attempts in post-processing.
- critical thing to infer about the quality & efficiency of the learning curve – learning curve analysis (#todo – read paper ab)
- counting attempts on the same thing – what does it mean for a thing to be "the same"? What are the critical equivalences and non-equivalences – are all short "u"s "the same"? –> discover that from the data, and use that discovery to make instruction and assessment more efficient and accurate [try different groupings of things – see if errors on one kind of thing are different from errors on other kinds of things]
- I could get to “the true” version for the domain
- errors of commission, errors of omission – distinguish the two
- can do that in post-processing too: store the original input, store how I scored it, maybe I…
- is it useful to keep the words in the notebook/Trash?
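The attempt-numbering idea above (inferring repeated attempts on the same item from timestamps) could look roughly like this in post-processing. Field names are assumptions, not the real export format:

```python
# Post-processing sketch: number repeated attempts on the same item,
# per student, by sorting on timestamp. Field names are made up.
from collections import defaultdict

events = [
    {"student_id": "S1", "item": "cat", "ts": 1.0},
    {"student_id": "S1", "item": "cat", "ts": 4.2},
    {"student_id": "S1", "item": "dog", "ts": 6.0},
]
counts = defaultdict(int)
for ev in sorted(events, key=lambda e: e["ts"]):
    counts[(ev["student_id"], ev["item"])] += 1
    ev["attempt"] = counts[(ev["student_id"], ev["item"])]
print([e["attempt"] for e in events])  # [1, 2, 1]
```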
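One way to "try different groupings of things" (are all short "u"s the same skill?) is to compute error rates under each candidate grouping and compare them. The grouping rules and data below are invented purely to show the shape of the comparison:

```python
# Sketch: compare error rates under two candidate equivalence groupings.
# Grouping A treats every item as its own skill; grouping B pools items
# that share a vowel sound. Data and labels are invented for illustration.
attempts = [
    {"item": "cup", "sound": "short_u", "correct": False},
    {"item": "cut", "sound": "short_u", "correct": False},
    {"item": "cat", "sound": "short_a", "correct": True},
    {"item": "cap", "sound": "short_a", "correct": True},
]

def error_rate_by(key, rows):
    groups = {}
    for r in rows:
        groups.setdefault(key(r), []).append(not r["correct"])
    return {k: sum(v) / len(v) for k, v in groups.items()}

by_item = error_rate_by(lambda r: r["item"], attempts)
by_sound = error_rate_by(lambda r: r["sound"], attempts)
print(by_sound)  # {'short_u': 1.0, 'short_a': 0.0}
```

If error rates differ sharply across a grouping (as in this toy data) but not within it, that is evidence the grouping captures a real equivalence class in the domain.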
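Distinguishing errors of commission (did the wrong thing) from errors of omission (failed to do a required thing) could also be a post-processing pass over the stored original input; this is a rough sketch with made-up field values:

```python
# Sketch: classify a scored response as correct, an error of commission
# (an action was taken but it was wrong), or an error of omission
# (no action where one was required). Inputs are made up.
def classify(response, expected):
    if response is None:
        return "omission"
    return "correct" if response == expected else "commission"

print(classify("cat", "cat"))  # correct
print(classify("dog", "cat"))  # commission
print(classify(None, "cat"))   # omission
```

Keeping both the original input and the score in the export (as noted above) is what makes this kind of re-scoring possible later.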