A couple of years ago I played around with the Sørensen–Dice coefficient algorithm to build more innovative multiple-choice challenges for term-versus-definition questions. This approach works well when users need to memorize a definition or explicitly recall a product or feature description.
I’ve expanded this approach to build a prototype for the Articulate eLearning Heroes challenge this week. Let’s assume you’re learning AWS terms and definitions. Your goal is to be able to recall the short definition for each term (before diving into learning more about them and, more importantly, getting hands-on practice).
You could create many multiple-choice questions like this:
Some disadvantages of this approach:
- You need to manually create the choices for each term. With over 100 terms, it is a long manual process.
- If the items change, you have to go back and adjust them everywhere they were used. It’s an error-prone process.
- For the user, it is more of a recognition exercise: you can often figure out which option is right without being able to repeat the definition to someone.
Let’s look at a better approach!
In this example, we’ll follow some guiding principles:
- We separate data and logic. All terms and definitions will be stored in an external XML file (a sample sketch follows this list). If the terms change, you just update the file. No need to fiddle with Storyline.
- No duplication. In this example, we use the single-source approach. A term and its definition will not be duplicated or hard-coded anywhere. There is only a single instance of each in the XML.
- No hard-coded multiple-choice questions. We will not create ANY multiple-choice options for any of the terms. It will be generated automatically, on the fly.
- We’ll measure two things: accuracy and mastery. Accuracy (%) will show how often the user selects the right multiple-choice item. Mastery (%) will show how closely the user is able to recall the correct definition.
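For reference, here is a minimal sketch of what such an XML file could look like. The element and attribute names are my assumptions for illustration, not the prototype’s actual schema (the definitions are AWS’s own one-liners):

<terms>
  <term name="Amazon S3">
    <definition>Object storage built to store and retrieve any amount of data from anywhere.</definition>
  </term>
  <term name="Amazon EC2">
    <definition>Secure and resizable compute capacity in the cloud.</definition>
  </term>
</terms>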
Prototype
Launch the example or read on to learn how to use it.
What are the three different levels?
TRAINING LEVEL – In this mode, you will face the traditional multiple-choice approach. The system will pick a random term with four potential definitions, of which one is correct.
The training level measures Accuracy only (whether you picked the right option or not). The consequences of your choice will be revealed before you commit to it, so you can always go back and change your mind. This is training.
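Mechanically, a training round can be assembled straight from the XML data. Here’s a minimal JavaScript sketch under my own naming (the prototype’s real logic lives in the webobject, not in Storyline):

// Pick a random term, pair its definition with three random others, shuffle.
// Assumes terms is an array of { name, definition } objects parsed from the XML.
function trainingQuestion(terms) {
  const pick = terms[Math.floor(Math.random() * terms.length)];
  const distractors = terms
    .filter(t => t !== pick)
    .sort(() => Math.random() - 0.5) // quick-and-dirty shuffle; fine for four options
    .slice(0, 3)
    .map(t => t.definition);
  const options = [pick.definition, ...distractors].sort(() => Math.random() - 0.5);
  return { term: pick.name, options, correct: pick.definition };
}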
ADVANCED LEVEL – In this level, you will face a random term, and you will need to recall the definition as closely as you can in free-text form. If you don’t know the answer, you can just guess. This mental effort (even guessing) leads to better retention than just picking a choice.
How does this level work?
You type in your best guess at what the selected term means. Let’s assume you’re somewhat close to the correct definition. When you submit your guess, the system compares your answer (using Sørensen–Dice) to all the definitions in the XML file and picks the four items that are closest to your guess. And now you have a multiple-choice question.
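To make that concrete, here is a minimal JavaScript sketch of the idea. This is not the actual webobject code, and the function names are mine: the Sørensen–Dice coefficient counts the bigrams (two-character chunks) two strings share, and the distractors are simply the definitions that score highest against your guess.

// Sørensen–Dice similarity: 2 × shared bigrams / total bigrams (0 = no overlap, 1 = identical)
function dice(a, b) {
  const bigrams = s => {
    const counts = new Map();
    const t = s.toLowerCase().replace(/\s+/g, " ");
    for (let i = 0; i < t.length - 1; i++) {
      const bg = t.slice(i, i + 2);
      counts.set(bg, (counts.get(bg) || 0) + 1);
    }
    return counts;
  };
  const size = m => [...m.values()].reduce((x, y) => x + y, 0);
  const aB = bigrams(a), bB = bigrams(b);
  let shared = 0;
  for (const [bg, count] of aB) shared += Math.min(count, bB.get(bg) || 0);
  const total = size(aB) + size(bB);
  return total === 0 ? 0 : (2 * shared) / total;
}

// Rank every definition by similarity to the guess and keep the four closest
function closestDefinitions(guess, definitions, n = 4) {
  return definitions
    .map(d => ({ definition: d, score: dice(guess, d) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, n);
}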
Note that there are no hard-coded options here. Whatever you type in will determine the options for the next round. For example, this is the correct answer to Amazon Pinpoint:
Let’s alter it a little to make it realistic for someone who remembers most of it but not all:
Submitting this guess leads to a multiple-choice question:
Notice how similar all the options are? It’s not the easiest challenge if you’re not sure about the definition. All options were determined by the algorithm to be the closest matches to your guess. Again, you don’t need to build lots of questions. One slide.
Now, let’s say you select the correct answer (by recognizing the answer this time).
You’ll see that with this choice, your accuracy goes up to 100%, since this is correct. There are 5 questions in this prototype. Answering a question by typing in the exact definition gives you 20% mastery each (5 × 20 = 100% max). Our definition was only somewhat close to the real thing, therefore we get 15% instead of 20%. This is all automatic. No hard-coded numbers.
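The arithmetic behind that is simple. Assuming (as the 15% result suggests) that each question’s mastery share is scaled by the Dice index:

// Assumed scoring: each of the 5 questions is worth 100 / 5 = 20% mastery,
// scaled by how similar the guess was to the correct definition (Dice index 0–1).
const questionCount = 5;
const perQuestion = 100 / questionCount; // 20%
const masteryGain = diceIndex => diceIndex * perQuestion;

masteryGain(1.0);  // typing the exact definition → 20%
masteryGain(0.75); // a "somewhat close" guess → 15%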
With the advanced level, you can now measure not only how many terms users get right, but also what definitions may cause problems. For example, you could connect this to an LRS via xAPI and analyze the data.
What if I type in something wild?
Good question. What about typing in a completely wild guess? Since the system picks four options close to what you typed, wouldn’t that mean none of the options is correct?
Well, in the ADVANCED level, you will always have the correct answer within the four options. This is why it’s called “guided mastery.” If you type in a wild guess, and the four picks do not include the correct one, the system will replace one of the incorrect options with the correct one. It is guaranteed that one of them will be right.
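In sketch form (again with my own naming, and reusing closestDefinitions() from the earlier sketch), the “guide” can be as simple as swapping the correct definition in whenever it didn’t make the cut:

// Build the four options; in guided (ADVANCED) mode, guarantee the correct one is present.
function buildOptions(guess, correctDefinition, allDefinitions, guided) {
  const options = closestDefinitions(guess, allDefinitions, 4).map(o => o.definition);
  if (guided && !options.includes(correctDefinition)) {
    // Replace the weakest match with the correct answer (shuffle afterwards in practice,
    // so the correct option doesn't always land in the same slot)
    options[options.length - 1] = correctDefinition;
  }
  return options;
}

Passing guided = false gives you the behavior of the MASTERY level described later: no safety net, so a wild guess can produce four wrong options.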
That leads us to a dilemma: isn’t that a way of cheating? Couldn’t you type in something unrelated, so that three of the choices are obviously close to what you typed, and the one that is very different must be the correct answer?
Yes. This equates to a badly written multiple-choice question where the correct answer stands out (it’s the longest option, for example). However, remember that we’re not just measuring accuracy by selecting the right choice. We also measure mastery. So your intentionally bad answer may help you pick the correct choice, but your mastery score will drop dramatically! Why? Because your guess and the correct answer are not similar.
Finally, let’s look at the MASTERY level.
The MASTERY level is very similar to the ADVANCED level. There are two significant differences:
A) You won’t see the potential results after selecting a choice anymore. You need to trust your knowledge.
B) In this level, the correct option may not be among the four options. In other words, while the ADVANCED level guaranteed that the correct option appeared even if your guess didn’t resemble it at all, in the MASTERY level that “guide” is gone. You get what you ask for. If your guess is wrong, your options will be wrong as well.
That means you may end up with four incorrect choices in some cases. This level is best for tests or certification where users are assumed to know the answers.
The cheat-chat icon above would not be part of a real solution. It’s just here for you to paste in the correct answer and alter it as you wish to see how the prototype works. In other words, it saves you the time of Googling the terms.
What does the Storyline file look like under the hood?
Overall, the prototype includes one slide.
And that single slide has 8 layers. The layers cover the three different types of challenges: training, advanced, and mastery.
The Sørensen–Dice coefficient algorithm, along with the complete logic dealing with the XML data, lives in a webobject. Storyline simply communicates with the webobject. My intention was to build out this webobject in the Construct 3 game engine. Once created and published, it’s just an HTML5 content folder. It handles all the data processing and communication. I added an interface so you can just call a function from Storyline and tell the game engine, for example, how many multiple-choice questions to create (it is 5 in the example). The interface then sends back the four options for each round along with the term. It also returns the Sørensen–Dice index (0–1) indicating how similar your answer is to the correct one.
Since the challenge (the eLearning Heroes challenge: using a variable to compare the user’s answer to an expert’s recommendation) was to show how you can use a variable and compare it to an expert’s recommendation, the relevant part of the example has the following workflow:
How does Storyline communicate with the webobject?
In my examples, there is one embedded webobject, hence frames[0], which refers to the game engine’s iframe. Inside the game engine, I created a pickCurrentTerm function. The function deals with the XML data and picks the current term for the multiple choice. The other arguments are Storyline variables.
- Term – the Storyline variable that stores the name of the picked term.
- ActionNum – the variable we change to signal to Storyline that we have the pick and it can move on.
- ActionCode – the variable that contains the return code. Basically, this is how the game engine lets Storyline know how the function went and what the result was.
- “beginner_advancepick” – the text we want the game engine to set the ActionCode variable to when it’s all done.
window.frames[0].c3_callFunction("pickCurrentTerm", ["Term", "ActionNum", "ActionCode", "beginner_advancepick"]);
Basically, there is a trigger in Storyline that runs when ActionNum changes (it is 1 or -1, and we always flip it to the other). This is just so we can capture the change with triggers. When ActionNum changes, a lot of different things could be triggered. To know what should happen, we use a condition in those triggers that looks at the ActionCode. The ActionCode tells us what to run and what not to run.
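On the webobject side, reporting back might look something like this sketch. I’m assuming the webobject reaches Storyline’s player through its parent frame via GetPlayer(), a common pattern with Storyline webobjects; in the actual prototype this happens inside Construct 3’s event sheets:

// Inside the webobject: hand the picked term back to Storyline.
function reportPick(termVar, actionNumVar, actionCodeVar, doneCode, pickedTerm) {
  const player = window.parent.GetPlayer();    // Storyline's published JavaScript API
  player.SetVar(termVar, pickedTerm);          // e.g. Term = "Amazon Pinpoint"
  player.SetVar(actionCodeVar, doneCode);      // e.g. ActionCode = "beginner_advancepick"
  // Flip ActionNum between 1 and -1 last, so Storyline's "when variable changes"
  // trigger fires only after the other variables are already set
  player.SetVar(actionNumVar, player.GetVar(actionNumVar) === 1 ? -1 : 1);
}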
For example:
When ActionNum changes (meaning the game engine indicates that it’s done doing whatever it was supposed to), two JS triggers may run. The first one runs only if the returned ActionCode is “trainingpick.” The second JS trigger runs every time ActionNum changes because it does not have a condition set.
What else is this good for?
Another creative use of this on-the-fly text comparison is selecting similar ideas. This might be a better fit for AI, but let’s assume people submit ideas or reflections on a question. You could then run the Sørensen–Dice coefficient algorithm to compare a user’s reflection against the others and return a selected few that are similar. Or the opposite: you could return the ones that are different, prompting the user to think about their own.
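Reusing the dice() helper from the earlier sketch, the only difference between the two behaviors is the sort direction (myReflection and others are hypothetical inputs):

// Surface the three reflections most similar to the user's own...
const similar = others
  .map(r => ({ reflection: r, score: dice(myReflection, r) }))
  .sort((a, b) => b.score - a.score)
  .slice(0, 3);

// ...or flip the sort to surface the three most different ones instead
const different = others
  .map(r => ({ reflection: r, score: dice(myReflection, r) }))
  .sort((a, b) => a.score - b.score)
  .slice(0, 3);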
TL;DR: So again, the intention was: if you find this useful “as is,” you can use it for your own purposes. And this is where the story turns sad. For whatever reason, today my source files for the game engine in Dropbox are gone. I have no idea what happened. The Construct 3 game engine saves the source directly to Dropbox in the cloud. It’s been working for a year now. And then suddenly, today, all gone. No history. Not even deleted files or the folder… At this point, all I have is the published game, and it has a bug. I can’t fix it unless I somehow find a way to retrieve the Dropbox files. And yes, I should have downloaded it and made a copy.
If you want to play with the prototype, launch it. Remember, you can use the cheat-chat icon to paste the correct answer into the text box. But if you feel adventurous, you can try it on your own.