Archive for the "Statements" Category

Learning plans, goals and targets are important. Setting goals for learning allows us to evaluate whether or not we are learning the things that we set out to learn. It’s standard practice for e-learning courses and qualifications to have learning outcomes attached to them, and these are used to measure if our learning has been successful. They are also used by educators and trainers to evaluate whether or not their teaching and training have been effective, and are used to inform interventions, further learning requirements and amendments to learning materials and lesson plans.

Learning Goals with Tin Can

Brian Miller touched on the use of sub-statements in Tin Can to represent future plans. The spec puts it this way: “One interesting use of sub-statements is in creating statements of intention,” and gives the following example:

{
    "actor": {
        "objectType": "Agent",
        "mbox":"mailto:test@example.com"
    },
    "verb" : {
        "id":"http://example.com/planned",
        "display": {
            "en-US":"planned"
        }
    },
    "object": {
        "objectType": "SubStatement",
        "actor" : {
            "objectType": "Agent",
            "mbox":"mailto:test@example.com"
        },
        "verb" : {
            "id":"http://example.com/visited",
            "display": {
                "en-US":"will visit"
            }
        },
        "object": {
            "id":"http://example.com/website",
            "definition": {
                "name" : {
                    "en-US":"Some Awesome Website"
                }
            }
        }
    }
}


What’s the difference between session duration and attempt duration? Timestamp or Stored? When should you record time taken and how can you report it? This series of blogs looks specifically at duration and how to report on it.

As a provider of an LMS, LRS or other system launching content and reporting on duration information, you can use the list of events presented last week as a guide for reporting. In an ideal world, you can simply look at the Result Duration property of the completed/passed/failed, suspended and terminated statements to grab your attempt and session durations. Win!

Handling limited data

Unfortunately, the world is not an ideal place. In practice, many Activity Providers have not implemented duration at all, or are only reporting duration at activity completion, leaving the report viewer wondering about the time spent by learners on partially completed attempts. Many early adopters, who designed their statements before the best practice I described last week emerged, are understandably waiting for the release of CMI5 before updating their statement structure.

As an LMS provider, that leaves you with two options:

  1. Encourage your activity providers to improve the data they’re sending (point them to this blog series).

  2. Work with the data they provide or that you can gather yourself.

Working with the data you have most likely means using Timestamp to calculate duration. For session duration, you can simply take the first and last statements issued in a session and subtract one timestamp from the other! The harder part is working out the break points between sessions, especially if the learner re-launches the experience soon after leaving it. The following guidelines will help (there’s a code sketch after the list):

  • As the LMS launching the experience, you should know when the session started. In fact, it’s good practice for the LMS itself to issue a statement using the verb http://adlnet.gov/expapi/verbs/launched to indicate that it launched the experience. This means that even if the Activity Provider never issues a single statement, you know when the experience was launched. This is essential for reporting if the experience can be launched from multiple systems and the Activity Provider is not sending the data you need.

  • When the learner has launched the experience again, you can assume that the previous session ended at about the time of the last statement issued before the new launch.

  • When the learner hasn’t launched the experience again, you can assume that either the session is still in progress, or the last statement issued represents the end of the session.

  • To work out if the session is still in progress, you’ll need to define a session timeout period. If the activity provider is doing client side JavaScript tracking, then the LRS should define a timeout for the launch security credentials and you can use that same value. If not, define something sensible for the types of experience you’re launching. Any statements issued after the timeout period you define can be considered a new session.
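To make that concrete, here’s a minimal TypeScript sketch of the timestamp approach. It assumes you’ve already fetched one learner’s statements for one activity and sorted them by timestamp; the Statement shape is trimmed to the two properties used here, and the 30-minute timeout is an invented default that you’d replace with your launch credential timeout or another sensible value.

// Minimal statement shape for this sketch; real xAPI statements have many more properties.
interface Statement {
    timestamp: string; // ISO 8601, e.g. "2015-04-14T10:00:00Z"
    verbId: string;    // e.g. "http://adlnet.gov/expapi/verbs/launched"
}

const LAUNCHED = "http://adlnet.gov/expapi/verbs/launched";
const SESSION_TIMEOUT_MS = 30 * 60 * 1000; // invented default; match your launch credential timeout

// Split one learner's statements for one activity (sorted by timestamp) into
// sessions, breaking on a new 'launched' statement or a gap longer than the timeout.
function splitSessions(statements: Statement[]): Statement[][] {
    const sessions: Statement[][] = [];
    let current: Statement[] = [];
    let lastTime = 0;
    for (const s of statements) {
        const t = Date.parse(s.timestamp);
        if (current.length > 0 && (s.verbId === LAUNCHED || t - lastTime > SESSION_TIMEOUT_MS)) {
            sessions.push(current);
            current = [];
        }
        current.push(s);
        lastTime = t;
    }
    if (current.length > 0) {
        sessions.push(current);
    }
    return sessions;
}

// Session duration: last statement's timestamp minus the first's.
function sessionDurationMs(session: Statement[]): number {
    return Date.parse(session[session.length - 1].timestamp) - Date.parse(session[0].timestamp);
}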

Attempt duration can be harder, or even impossible, to calculate depending on what data the Activity Provider sends. Where the data allows, apply the rules below in priority order (a code sketch applying two of them follows the list):

  • If the Activity Provider sends a ‘suspended’, ‘completed’, ‘passed’ or ‘failed’ statement with a Result Duration, then take this as the attempt duration. If more than one of these statements is sent, the latest one in a given attempt represents the most up-to-date duration.

  • If the Activity Provider sends an ‘attempted’ statement with a Result Duration of zero then this marks the start of the attempt for the purposes of calculating attempt duration.

  • If the Activity Provider sends a ‘suspended’, ‘completed’, ‘passed’ or ‘failed’ statement without a Result Duration, then the latest of these within an attempt marks the end of that attempt. Add up the session durations of all sessions within that attempt.

  • Assume that the last statement (excluding ‘launched’ and ‘initialized’) before an ‘attempted’ statement with a Result Duration of zero was the last statement in that previous attempt.

  • If Result Duration is not used by an Activity Provider but they use the ‘attempted’ statement correctly, you can calculate the end of a previous attempt as the latest ‘suspended’, ‘completed’, ‘passed’ or ‘failed’ statement before an ‘attempted’ statement.

  • If Result Duration is not used by an Activity Provider and they use the ‘attempted’ statement incorrectly, then it may not be possible to accurately track the start and end of an attempt. The only sensible solution here is either not to report attempt duration for these activities or to allow your administrators to configure how duration is reported on a per-activity basis.
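Continuing the sketch above, here’s how the first rule and the session-summing fallback might look in code. This is illustrative only: it reuses splitSessions and sessionDurationMs from earlier, assumes the attempt’s statements have already been grouped and sorted, and the tiny duration parser handles only the PT#H#M#S forms rather than full ISO 8601.

const END_OF_ATTEMPT_VERBS = new Set([
    "http://adlnet.gov/expapi/verbs/suspended",
    "http://adlnet.gov/expapi/verbs/completed",
    "http://adlnet.gov/expapi/verbs/passed",
    "http://adlnet.gov/expapi/verbs/failed",
]);

interface AttemptStatement extends Statement {
    resultDuration?: string; // ISO 8601 duration, e.g. "PT4M35S"
}

// Tiny ISO 8601 duration parser covering only the PT#H#M#S forms used here.
function parseIsoDurationMs(duration: string): number {
    const m = /^PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+(?:\.\d+)?)S)?$/.exec(duration);
    if (!m) {
        throw new Error("Unsupported duration: " + duration);
    }
    return (Number(m[1] ?? 0) * 3600 + Number(m[2] ?? 0) * 60 + Number(m[3] ?? 0)) * 1000;
}

// Attempt duration for one attempt's statements (sorted by timestamp):
// prefer the latest explicit Result Duration on an end-of-attempt verb,
// otherwise fall back to summing the attempt's session durations.
function attemptDurationMs(attempt: AttemptStatement[]): number {
    for (let i = attempt.length - 1; i >= 0; i--) {
        const s = attempt[i];
        if (END_OF_ATTEMPT_VERBS.has(s.verbId) && s.resultDuration !== undefined) {
            return parseIsoDurationMs(s.resultDuration);
        }
    }
    return splitSessions(attempt).map(sessionDurationMs).reduce((sum, d) => sum + d, 0);
}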

As you can see, reporting on limited data from Activity Providers is hard! This complexity can be avoided by Activity Providers sending the data as outlined last week. If they don’t and you really need to report on their data, we can help.


What’s the difference between session duration and attempt duration? Timestamp or Stored? When should you record time taken and how can you report it? This series of blogs will look specifically at duration and how to report on it.

The SCORM veterans at Rustici Software tell me that duration reporting by SCOs was notoriously patchy. That’s why you’ll notice that SCORM Cloud will tell you its calculation of duration alongside the figure reported by your e-learning. It can do this because the SCO sits inside a player – either a frame or a pop-up window – that allows SCORM Cloud to keep an eye on the SCO. Tin Can does away with the need for frames and pop-ups; that’s a really good thing for everybody, but it does mean that activity providers need to be on the ball with reporting duration.

As an Activity Provider, if you want to provide good duration information you’ll need to issue statements for the events listed below. There’s no universally agreed set of verb ids for this yet (CMI5 is still under development), but I’ve listed the verbs that I would use (and have used) when designing statements.

  • Entering the experience (http://adlnet.gov/expapi/verbs/initialized): the very first statement issued by the activity provider, as soon as it can. I recommend including a Result Duration of zero (“PT0S”) with this statement.

  • Exiting the experience (http://adlnet.gov/expapi/verbs/terminated): issued whenever the learner leaves the activity; includes the session duration as the value of Result Duration.

  • Starting an attempt (http://adlnet.gov/expapi/verbs/attempted): issued at the start of a new attempt. You should always include a Result Duration of zero (“PT0S”) with this statement.

  • Suspending an attempt (http://adlnet.gov/expapi/verbs/suspended): issued when the learner leaves mid-way through an attempt; includes the current attempt duration as the value of Result Duration.

  • Resuming an attempt (http://adlnet.gov/expapi/verbs/resumed): issued when the learner returns to an attempt that’s been suspended. You can optionally include the attempt duration tracked so far as the value of Result Duration.

  • Ending an attempt (http://adlnet.gov/expapi/verbs/passed, http://adlnet.gov/expapi/verbs/failed or http://adlnet.gov/expapi/verbs/completed): the attempt can be considered over when the learner passes, fails or completes the activity. These statements should include the final attempt duration as the value of Result Duration, and a Result Completion of “true” to make it clear that the attempt is complete.

The object of all these statements needs to be the main activity itself rather than something within that activity. You might also want to track duration for sub-activities such as modules, screens or interactions within an e-learning course, but this is no substitute for tracking the duration of the activity as a whole.
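For illustration, a ‘suspended’ statement following this pattern might look like the example below. The actor, activity id and registration UUID are invented for the sketch:

{
    "actor": {
        "objectType": "Agent",
        "mbox": "mailto:learner@example.com"
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/suspended",
        "display": {
            "en-US": "suspended"
        }
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.com/courses/golf"
    },
    "result": {
        "duration": "PT4M35S"
    },
    "context": {
        "registration": "6d969975-8d22-4667-ae3d-0e135f2a207f"
    }
}

Note how the object is the course itself, the attempt duration sits in Result, and the registration in Context ties the statement to the others in the same enrolment.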

Note that you should use the Context Registration property to link groups of statements together. The concept of registration is linked to the idea of a learner being enrolled on a course and can be a little complex. Sometimes a registration is equal to an attempt for all practical purposes, but that’s not always the case and a single registration may contain multiple attempts.

In our golf prototype, for example, the user is given the option to return to a saved location or start again. If they choose to start again, that’s a new attempt but not a new registration. In our Tetris prototype each game of Tetris is a new attempt, but they all sit within a single registration. As a rule of thumb, if the LMS initiates a clear slate for the learner when launching the activity, that’s a new registration; if the learner has multiple attempts within the activity, it’s not.

What if the user closes the browser?

A common problem with SCORM was that if the user closed browser windows or got disconnected then tracking data could be lost. Tin Can solves this by allowing tracking to be performed server side so that data can be sent even after the user disconnects. It is still possible to do Tin Can with client side JavaScript though, and in this case the same problem remains.

For duration tracking, this means that the crucial final statements containing the duration information can be lost if the user closes the browser before they can be sent. There are a number of things activity providers can do to minimise this issue (there’s a code sketch after the list):

  • Make clear how learners can safely exit the course; provide some instructions and a clear save button at the top of the page. For example, the latest version of our golf prototype adds a ‘Save and Exit’ button to the navigation.

  • Consider the session over and issue statements when the learner gets to the end, not when they exit.

  • Launch the course in the same window rather than a new window and return the user to the LMS when they exit. This may require some collaboration with the LMS provider to achieve, but is a smoother experience for the user and discourages closing the window. The draft CMI5 specification includes a standard way for the LMS to provide a return URL to the activity provider.
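On the client side you can also reduce the chance of losing that final statement in code. The sketch below is one possible approach, not a TinCanJS feature: it posts the statement using fetch’s keepalive flag (a modern browser capability) so the request can complete after the window closes. The endpoint and credentials are placeholders.

const LRS_STATEMENTS_URL = "https://lrs.example.com/xapi/statements"; // placeholder endpoint
const LRS_AUTH = "Basic " + btoa("key:secret"); // placeholder launch credentials

// Send the final 'terminated' statement as the page unloads. The keepalive
// flag asks the browser to finish the request even after the window closes.
function sendStatementOnExit(statement: object): void {
    window.addEventListener("pagehide", () => {
        fetch(LRS_STATEMENTS_URL, {
            method: "POST",
            keepalive: true,
            headers: {
                "Content-Type": "application/json",
                "X-Experience-API-Version": "1.0.1",
                "Authorization": LRS_AUTH,
            },
            body: JSON.stringify(statement),
        });
    });
}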

I hope that clears up the questions you had around how to track duration and helps us all to do Tin Can better. If you do have further questions, please get in touch.


What’s the difference between session duration and attempt duration? Timestamp or Stored? When should you record time taken and how can you report it? This series of blogs will look specifically at duration and how to report on it.

Let’s start our first blog by making sure you’ve got a good understanding of what duration is.

At a basic level, duration is how much time something took. In SCORM there are three variables that match this description:

  • cmi.core.total_time: the total amount of time the learner spent in the course from first launch through to completion. This is the time taken to complete one attempt, so I call this attempt duration.

  • cmi.core.session_time: the total amount of time the learner spent in the course on one occasion. This is the time spent in the course on a given session, so I call this session duration. It’s quite common for attempts to be split out over multiple sessions. It’s also possible to have multiple attempts in one session; this is less common with e-learning, but more common with some other types of experience such as games.

  • cmi.interactions.n.latency: the time taken to respond to an interaction. This is attempt duration for interactions; it’s assumed that interactions don’t span sessions so there’s no such thing as interaction session time.

In Tin Can world the concepts are the same. Whether we’re tracking traditional e-learning or something new and shiny, we either want to track how much time somebody spent doing something (session duration) or how long it took somebody to achieve something (attempt duration).

In his Deep Dive into Result, Brian Miller outlined how you can use the optional Duration property to record exactly how long the experience took. This is the best mechanism for activity providers to clearly state how long something took (see below).

Brian’s Deep Dive into Extras outlined the statement’s Timestamp and Stored properties and explained how they can be used to support offline tracking. The Stored property tells you when the statement was stored in the LRS, whilst the Timestamp is when the event actually happened. All statements will have a timestamp because even if the Activity Provider doesn’t include it, the LRS will fill it in. This makes Timestamp a valuable fallback for calculating duration when it’s not explicitly provided by the Activity Provider.

So now you know what duration is, how do you track and report on it in practice? Tracking duration is something that’s been done reasonably badly by many activity providers to date, so the next blog in the series will help them do it better. The third blog will be aimed at reporting tools interpreting the statements. We’ll briefly look at how to report on statements that follow the format described in blog two (spoiler: it’s easy) before moving on to some tricks to get the data out of statements that might not be so well crafted.

Watch this space for the next blog in this series coming next week, and in the meantime if you’ve got questions please get in touch.


Updating the Prototypes

Categories: Best Practices, Ideas, Recipes, Statements, Tin Can

Posted 14 April 2015

 

The prototypes have been with us since the beginning. Recently I’ve given them an update from a tracking design perspective.

In the beginning, before Tin Can version 0.9 and before you or I had even heard of Tin Can, there were the prototypes. These example activities helped the world see the kinds of thing that might be possible with Tin Can, and provided a reference point for early adopters and developers to see how statements could be sent and retrieved. I used the Golf example as a reference when building a Tin Can wrapper for Captivate 5.0 swf files a couple of years ago.

Over the years, the prototypes have been updated to be conformant with the latest released version of Tin Can, now 1.0.x. They are now less important in illustrating what’s possible with Tin Can because there are real products doing exciting things in the real world. They’ve continued to be a good starting point for developers to see the mechanics of how Tin Can works.

The prototypes were never designed to be examples of good practice or to illustrate the best statement structure. They weren’t supposed to tell you exactly what data you should track or which verbs and activity types you should use. In fact, the prototypes were created at a time before we had good practice in Tin Can. Whilst we’ve updated them to be technically conformant with each version of the specification, we’ve not touched the actual data that was sent or the events we were tracking.

Until now.

As I’ve been working with various Tin Can adopters, it’s become apparent that many are looking at the prototypes as a design template for what statements to send and the properties to include. Many adopters have looked beyond the prototypes to the various blogs and guides available online, but the prototypes are a significant influence on how people are designing their Tin Can statements.

I’m also dishing out plenty of advice on how to implement Tin Can well, but a lot of the time the prototypes weren’t following that advice. I figured it was time to apply that advice to the prototypes and the data they’re sending.

We have…

  • Updated the index page to make use of Context Registration and issue a ‘launched’ statement. This now serves as a closer example of how we’d expect an LMS to behave when launching content. Registration is now a required property in the config file.
  • Refreshed the verb, activity type and extension IRIs used throughout to include a wider range taken from the Registry (which didn’t exist when the prototypes were first born). Instead of reporting ‘Andrew experienced the Country Music Hall of Fame’ in the locator prototype, we now report ‘Andrew was at the Country Music Hall of Fame’. Much better.
  • Documented all the verbs, activity ids, activity types and extensions used and recorded this in a Registry profile.
  • Added ‘initialized’, ‘terminated’, ‘suspended’ and ‘resumed’ statements and made some other changes so that we do a better job of tracking session and attempt duration. Duration tracking is one of the most common Tin Can related questions we get. We included a couple of additions to TinCanJS’s utility functions to support this. (More on duration tracking in a blog sometime soon!)
  • Improved the way bookmarking data is saved in the Golf example to give an example of storing and amending a JSON object within the State. We included some new functionality in TinCanJS’s setState method to make this possible.
  • Improved the tracking data sent when switching players in the Tetris example.
  • Added a ‘Save & Exit’ button to the Golf example as an example of good practice to avoid losing data when learners close the window.
  • Added two types of “category” Context Activity to record the Recipe being used and the original source of the content (more on this in a blog soon!).

The statements generated by the new prototypes will not be compatible with the old ones, so if you’re using the prototypes for testing, you’ll need to keep this in mind. In fact, one of the reasons we’ve held off updating the prototypes is that we didn’t want to break anyone’s use of them. You’ll need to consider this issue of backwards compatibility as you come to update your own products.

The easiest solution is to get the design right the first time, but that’s not always possible as requirements and best practices develop over time. We work around this issue in the new prototypes by tagging every statement with a Recipe Id as a “category” Context Activity. The next time we update the prototypes, we’ll also update the Recipe and update the Id to point to the new version. Any tools reporting on the prototypes could use that property to see which version of the statement structure is being used and handle the data accordingly, though the reports included with the prototypes don’t yet do this.
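As a sketch of what that tagging looks like on the wire, the context portion of a statement might resemble the snippet below. Both the recipe id and the activity type IRI are placeholders here; the real values are documented in the prototypes’ Registry profile:

"context": {
    "contextActivities": {
        "category": [
            {
                "objectType": "Activity",
                "id": "http://example.com/recipes/golf/1",
                "definition": {
                    "type": "http://example.com/activitytypes/recipe"
                }
            }
        ]
    }
}

A reporting tool can then filter or branch on that category id to tell which version of the statement structure it’s dealing with.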

There’s still more that could be done to improve the design of the prototypes and take full advantage of Tin Can. These are prototypes, not products, so I recommend you look to other sources (like this blog or my eLearning Guild course) for advice on how to best design your tracking. That said, the prototypes are now providing a better base line for you to build and improve on in your design and development.

Some examples of how the prototypes could be developed further are:

  • Launching activities in the same window as the launcher and returning there at the end of the activity; modifying the UI accordingly.
  • Passing language preference or other learner preferences from the launcher to the activity and displaying localized or personalized content.
  • More thought into the user experience of returning to an attempt within the Golf example, including saving progress within the assessment.
  • Allowing the user to review their quiz answers and the content after an attempt and tracking this using the https://brindlewaye.com/xAPITerms/verbs/reviewed/ verb.
  • Server side tracked versions of the prototypes, including statement signing for the Golf assessment.
  • Taking recipe version into account within the reports.
  • More interesting and visual reports and dashboards targeting different stakeholder groups.
  • More varied use of media including Tin Can tracked video and audio within the Golf example.

We may get onto these in the future, but for now, please consider how you might apply them in your real-world products!

If you’re sending Tin Can statements and would like somebody to review and give feedback on your tracking, we’re happy to help; please get in touch.

