Mark the Words: Unexpected results

I think it's useful to point out the weird behaviour my colleague and I encountered.

With the MtW content type we created a sort of multiple-choice activity. It has 8 items, with 3 options each.

What we observed (see attached files) is that only when I do the exercise perfectly do I get the expected result (8 out of 8), but if I make some mistakes, the results vary "randomly" (e.g. with 4 wrong answers, in 2 different attempts, we got 1 out of 8 and 0 out of 8, while with one mistake my score was 5 out of 8).

Thanks for your attention,

Akud & MatLa

fnoks's picture

Hi,

The score in MTW (and some other content types) is calculated as the number of correct words minus the number of incorrect ones. If the incorrect ones were not subtracted, a user could click all the words and get a full score :(
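
In code form, the rule is simply something like this (a simplified sketch of the idea described above, not the exact implementation):

```typescript
// Simplified sketch of the rule: score = correctly marked words minus incorrectly marked ones.
function markTheWordsScore(correctMarked: number, incorrectMarked: number): number {
  return correctMarked - incorrectMarked;
}

// Marking wrong words cancels out correct ones, so "click everything"
// does not reach the maximum score.
console.log(markTheWordsScore(8, 0)); // 8 - a perfect run
console.log(markTheWordsScore(8, 8)); // 0 - wrong clicks cancel the correct ones
```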

I think this calculation should have been explained somehow. Do you have any ideas on how that could be done?


akud's picture

Thanks for clarifying this mechanism for us. We now see how MtW works, and we understand the need to guard against unaccountable answers.

Given this, we could add a sort of explanation (a rough draft, clearly open to modification):

"To prevent unaccountable results, this type of exercise is based on a two-step concept.
First, the final score is the algebraic sum of right and wrong answers. For example, if you have 10 items, half of which are answered right and half wrong, this gives a final score of 0 points (since 5 - 5 = 0).
Second, items that are left blank do not enter into the computation of the score. So, if in the same example you leave 3 items unanswered, and of the remaining 7 answers 4 are right and 3 are wrong, the final score will be 1 point (4 - 3 = 1)."
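
If it helps, the same rule written as a tiny bit of code (purely an illustration of the arithmetic above, not H5P's real code) would be:

```typescript
// Illustration of the two-step rule quoted above (not H5P's actual code).
type Answer = 'right' | 'wrong' | 'blank';

function finalScore(answers: Answer[]): number {
  const right = answers.filter(a => a === 'right').length;
  const wrong = answers.filter(a => a === 'wrong').length;
  // Step 1: the score is the algebraic sum of right and wrong answers.
  // Step 2: blank items are counted in neither total, so they change nothing.
  return right - wrong;
}

// The two examples from the explanation above:
const halfAndHalf: Answer[] = [
  'right', 'right', 'right', 'right', 'right',
  'wrong', 'wrong', 'wrong', 'wrong', 'wrong',
];
console.log(finalScore(halfAndHalf)); // 0 (5 - 5)

const withBlanks: Answer[] = [
  'right', 'right', 'right', 'right',
  'wrong', 'wrong', 'wrong',
  'blank', 'blank', 'blank',
];
console.log(finalScore(withBlanks)); // 1 (4 - 3, blanks ignored)
```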

As second-language teachers, though, we think we would run into problems, since the explanation is quite long and could end up longer than the assignment or, sometimes, than the whole exercise. More importantly, perhaps, depending on the student's competence in the second language, the explanation could be too difficult to understand.
In general, what we mean is that there is a disproportion between the assignment, the "ad hoc" explanation and the activity itself, with the explanation being so large.
In our experience, we are afraid there is not much use in explaining the whole thing.

But we are thankful anyway, and we hope we can continue the discussion.

fnoks's picture

Thanks for your thoughts. There is an implementation of this in Drag Question (which has the same score calculation):

What do you think of this addition? (This is not released yet)

Sounds very good. Much better than my long and prosaic explanation.

I'd just like it to be more in plain sight.

It's so concise that it could be part of the assignment (or of a "note" field).

Thanks,

g.


fnoks's picture

Thanks for your input.

Since this functionality has already been implemented and tested, I do not think we will rework this now (unless our user testing tells us this is not working).