
Archive for the "Best Practices" Category

We’ve launched a new services group.

 

 

Some Background

For years, we have relied on our products to be the solution to a number of complex problems facing companies that use learning standards. If you’re building an LMS or authoring tool and you need AICC or SCORM or, more recently, xAPI, we have a product that can do the heavy lifting. That’s been our bread and butter.

But we also have insights from years of thinking about experiential data and hearing how customers report on it. And we know that the problem isn’t always solved at the immediate boundary of our products.

It’s those considerations that brought our services group to life.

What We Do

We help vendors and organizations work out how to use learning standards to accomplish their goals, whether that's delivering learning material to their people or selling their products to discerning buyers.

We work on problems related to the learning standards AICC, SCORM, and xAPI.

In the case of xAPI, the newest of the standards, the off-the-shelf solutions are less mature. Listening carefully and collaboratively helps us build better products, but it also helps us get you the right solution now.

Of course, we haven’t stopped thinking about AICC and SCORM.

Where You Come In

We want you to ask us a question. You can learn more about how we're responding to the questions we've already heard. Maybe something on that list prompts a question you were getting ready to ask. So ask away; we're listening and ready to help.


Semantic Interoperability

Categories: Best Practices, Ideas, Recipes, Registry, Standards, Tin Can

Posted 24 January 2017

 


 

You’re going to hear us talk a lot about semantic interoperability this year. So we might as well present a working definition.

Semantic interoperability is when information—the meaning behind captured data—is portable and well understood by any subsequent system requesting and reviewing it.

 

Why will we be talking about it a lot? Because without semantic interoperability, the Experience API (xAPI) has a limited future.

For us, semantic interoperability in xAPI will be achieved when there is a generally accepted information model. We expect profiles to help with this a great deal, and there's a strong possibility that collaborative work between ADL and IMS will accelerate it.

Then: Too Many Constraints

Consider SCORM, which remains in widespread use in LMSs. The CMI data model leveraged by SCORM is closely linked to its information model. There is a finite set of data that can be recorded about the types of learning experiences common to online training, and summarizing information from that data is a relatively straightforward exercise. So straightforward, in fact, that practitioners have long cared primarily about a big four—score, completion, satisfaction (i.e., pass/fail), and duration. SCORM makes requesting and understanding the big four easy.
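To make that concrete, here's a minimal sketch of how a piece of content might report the big four through the SCORM 2004 run-time API. The `api` object is assumed to be the API_1484_11 instance the LMS exposes; real content locates it by searching up the frame hierarchy.

```typescript
// A minimal sketch: a SCO reporting the "big four" via the SCORM 2004
// run-time API. `api` stands in for the API_1484_11 object the LMS
// exposes; real content finds it by walking up parent frames.
interface Scorm2004Api {
  Initialize(param: ""): string;
  SetValue(element: string, value: string): string;
  Commit(param: ""): string;
  Terminate(param: ""): string;
}

declare const api: Scorm2004Api; // provided by the LMS at run time

api.Initialize("");
api.SetValue("cmi.score.scaled", "0.85");           // score
api.SetValue("cmi.completion_status", "completed"); // completion
api.SetValue("cmi.success_status", "passed");       // satisfaction (pass/fail)
api.SetValue("cmi.session_time", "PT25M30S");       // duration
api.Commit("");
api.Terminate("");
```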

Now: Too Much Flexibility

xAPI, on the other hand, is fundamentally a communication protocol applied as a specification for elearning. In xAPI, apart from a few familiar holdovers from SCORM (the big four, native support for interactions), there is no limit to what can be captured about a given learning experience. One could literally choose any verb available in any language. Or one could create a new activity definition to describe any type of experience.
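To see why that freedom complicates reporting, consider the statement below. It's perfectly legal xAPI, yet every verb and activity IRI in it is invented on the spot, so a reporting system has no prior knowledge of what any of it means. (All IRIs here are made up for illustration.)

```typescript
// A legal but entirely ad-hoc xAPI statement. The verb and activity
// definition are freshly minted, which is exactly what the
// specification permits and exactly what makes reporting hard.
const statement = {
  actor: { mbox: "mailto:sally@example.com", name: "Sally Learner" },
  verb: {
    id: "http://example.com/verbs/defenestrated",   // any verb, in any language
    display: { "en-US": "defenestrated" },
  },
  object: {
    id: "http://example.com/activities/window-safety-sim",
    definition: {
      name: { "en-US": "Window Safety Simulation" },
      type: "http://example.com/activity-types/simulation", // newly minted type
    },
  },
  timestamp: new Date().toISOString(),
};
```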

Can you imagine how difficult it would be to report on data with so few constraints? We can. Because we’ve been trying.

Needed: Leadership

Even when there is consensus that a concept has sufficient value to merit a profile, there can be difficulty. Take video, for instance. Not only is there a profile in our Registry, there’s also a Community of Practice still working on a version. If there are multiple working versions of what data to capture, then how is a reporting system attempting to derive meaning about “video” supposed to do so?

We think the answer right now is: leadership. The Registry concept has real utility for semantic interoperability in xAPI, and we have a feature roadmap for it. Still, we recognize how difficult it is for any single industry participant to establish credibility and trust.

What would alternatives look like? We think ADL could assert an information model. As a subtler alternative, ADL could host a collaborative process with some authority. This might look like the establishment of a baseline via a community process similar to how the specification itself operates now: a managed workflow in GitHub supported by regular calls.

Expect to hear more from us on this topic because we think it’s critical to the future success of xAPI.


Statements come from all kinds of places: content created in authoring tools, mobile apps, learning platforms, and business systems. It's not always immediately obvious which application a statement came from, and that information can be useful to know. This post explains how you can tag the statements your tool or product generates, and why that information is useful.

We’ve worked hard to make the Tin Can (xAPI) spec as clear as possible and have required Learning Record Stores (LRSs) to validate incoming data to ensure the same data structure is always used. There’s no way for statements to be sent to a conformant LRS unless they follow the prescribed data structure, and you’ll find that the major LRSs are strict with the data they accept.
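One plausible way for a tool to tag its statements, sketched here as an assumption rather than a prescription, is through the statement context: the spec provides a free-text platform field, and a category context activity can carry a stable, machine-readable identifier for the generating tool. The IRIs below are hypothetical.

```typescript
// A sketch of tagging a statement with its source application via the
// statement context. The tool IRI is hypothetical; in practice, use an
// IRI your organization controls.
const statement = {
  actor: { mbox: "mailto:sally@example.com" },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/completed",
    display: { "en-US": "completed" },
  },
  object: { id: "http://example.com/activities/fire-safety" },
  context: {
    platform: "Example Course Builder 2.1", // human-readable platform name
    contextActivities: {
      category: [
        { id: "http://example.com/tools/course-builder" }, // machine-readable tool tag
      ],
    },
  },
};
```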


On June 2nd, 2015, we were part of a joint webinar presented by Bersin and CUES. As usual, the attendees had more questions than we could answer during the live webinar, so we’ve posted the questions and Andrew Downes’ answers here.

General

Question: What is the common identifier that allows the xAPI to connect the data provided by the ‘activity provider’ to a specific user in the LRS?

Answer (Andrew Downes): Tin Can allows learners to be identified by email address, OpenID, or an account on some system, such as an LMS. For privacy reasons, a hashed version of the email address can also be used. In practice, most implementations use either an email address or an LMS account id. Activity Providers always need to know who a learner is in order to send the LRS data about that learner; they can find out either by having the learner log into the Activity Provider or via some kind of launch or single-sign-on process.

It’s possible that different Activity Providers might use different identifiers for the same person, for example accounts on different systems or different email addresses. In this case it’s important for the LRS to have a record of all the identifiers that relate to a single person. Activity Providers can request this information from the LRS if they need it (and if the LRS gives them permission).
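For reference, those identifier types correspond to the inverse functional identifiers xAPI defines on the Agent object; exactly one appears per Agent. The values below are illustrative only.

```typescript
// The four ways an xAPI Agent can identify a learner. Exactly one of
// these inverse functional identifiers is used per Agent. All values
// here are illustrative (the hash is a placeholder, not a real SHA1).
const byEmail   = { mbox: "mailto:sally@example.com" };
const byHash    = { mbox_sha1sum: "0123456789abcdef0123456789abcdef01234567" }; // SHA1 of the mailto IRI
const byOpenId  = { openid: "http://sally.openid.example.org" };
const byAccount = { account: { homePage: "http://lms.example.com", name: "sally.learner" } };
```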

See Brian Miller’s Deep Dive blog for a deeper dive.


What’s the difference between session duration and attempt duration? Timestamp or Stored? When should you record time taken and how can you report it? This series of blogs looks specifically at duration and how to report on it.

As a provider of an LMS, LRS or other system launching content and reporting on duration information, you can use the table presented last week as a guide for reporting. In an ideal world, you can simply look at the Result Duration property of the completed/passed/failed, suspended and terminated statements to grab your attempt and session durations. Win!
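In code, that ideal-world lookup is simple. A sketch, assuming statements use the standard ADL verb IRIs and carry ISO 8601 durations like "PT12M30S":

```typescript
// Ideal-world duration reporting: read Result Duration straight off the
// latest terminal statement. Durations are ISO 8601 strings, e.g. "PT12M30S".
interface Statement {
  verb: { id: string };
  timestamp: string; // ISO 8601
  result?: { duration?: string };
}

const TERMINAL_VERBS = new Set([
  "http://adlnet.gov/expapi/verbs/completed",
  "http://adlnet.gov/expapi/verbs/passed",
  "http://adlnet.gov/expapi/verbs/failed",
  "http://adlnet.gov/expapi/verbs/suspended",
  "http://adlnet.gov/expapi/verbs/terminated",
]);

// Returns the reported duration from the latest terminal statement, if any.
function reportedDuration(statements: Statement[]): string | undefined {
  const terminal = statements
    .filter(s => TERMINAL_VERBS.has(s.verb.id) && s.result?.duration)
    .sort((a, b) => Date.parse(a.timestamp) - Date.parse(b.timestamp));
  return terminal.length ? terminal[terminal.length - 1].result!.duration : undefined;
}
```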

Handling limited data

Unfortunately, the world is not an ideal place. In practice, many Activity Providers have not implemented duration at all, or are only reporting duration at activity completion, leaving the report viewer wondering about the time spent by learners on partially completed attempts. Many early adopters, who designed their statements before the best practice I described last week emerged, are understandably waiting for the release of CMI5 before updating their statement structure.

As an LMS provider, that leaves you with two options:

  1. Encourage your activity providers to improve the data they’re sending (point them to this blog series).

  2. Work with the data they provide, or that you can gather yourself.

Working with the data you have most likely means using Timestamp to calculate duration. For session duration, you can simply take the first and last statements issued in a session and subtract! The harder part is working out the break points between sessions, especially if the learner re-launches the experience soon after leaving it. The following guidelines will help, and a code sketch follows the list:

  • As the LMS launching the experience, you should know when the session started. In fact, it's good practice for the LMS itself to issue a statement using the verb http://adlnet.gov/expapi/verbs/launched to indicate that it launched the experience. This means that even if the Activity Provider never issues a single statement, you know when the experience was launched. This is essential for reporting if the experience can be launched from multiple systems and the Activity Provider is not sending the data you need.

  • When the learner has launched the experience again, you can assume that the previous session ended at about the time of the last statement before that new launch.

  • When the learner hasn’t launched the experience again, you can assume that either the session is still in progress, or the last statement issued represents the end of the session.

  • To work out whether the session is still in progress, you'll need to define a session timeout period. If the Activity Provider is doing client-side JavaScript tracking, the LRS should define a timeout for the launch security credentials, and you can use that same value. If not, define something sensible for the types of experience you're launching. Any statements issued after the timeout period you define can be considered part of a new session.
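Here's the promised sketch of the timestamp-subtraction approach: sort statements by timestamp, split them into sessions wherever the gap exceeds a timeout, and subtract first from last. The 30-minute timeout is only an assumption to make the example concrete.

```typescript
// Session duration from timestamps alone: group statements into sessions
// using a timeout, then subtract the first timestamp from the last.
// SESSION_TIMEOUT_MS is an assumed value; pick one that suits the
// experiences you launch.
interface TimedStatement { timestamp: string } // ISO 8601

const SESSION_TIMEOUT_MS = 30 * 60 * 1000; // assumed 30-minute timeout

function sessionDurationsMs(statements: TimedStatement[]): number[] {
  const times = statements
    .map(s => Date.parse(s.timestamp))
    .sort((a, b) => a - b);
  const durations: number[] = [];
  let start = times[0];
  let last = times[0];
  for (const t of times.slice(1)) {
    if (t - last > SESSION_TIMEOUT_MS) { // gap too long: previous session ended
      durations.push(last - start);
      start = t;
    }
    last = t;
  }
  if (times.length) durations.push(last - start);
  return durations;
}
```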

Attempt duration can be harder, or even impossible, depending on what data the Activity Provider sends. Where the data allows, apply the rules below in priority order (a sketch of two of these rules follows the list):

  • If the Activity Provider sends a ‘suspended’, ‘completed’, ‘passed’ or ‘failed’ statement with a Result Duration, take this as the attempt duration. If more than one of these statements is sent, the latest one in a given attempt represents the most up-to-date duration.

  • If the Activity Provider sends an ‘attempted’ statement with a Result Duration of zero then this marks the start of the attempt for the purposes of calculating attempt duration.

  • If the Activity Provider sends a ‘suspended’, ‘completed’, ‘passed’ or ‘failed’ statement without a Result Duration, then the latest of these within an attempt marks the end of that attempt. Add up the session durations of all sessions within that attempt.

  • Assume that the last statement (excluding ‘launch’ and ‘initialized’) before an ‘attempted’ statement with a Result Duration of zero was the last statement in that previous attempt.

  • If Result Duration is not used by an Activity Provider but they use the ‘attempted’ statement correctly, you can calculate the end of a previous attempt as the latest ‘suspended’, ‘completed’, ‘passed’ or ‘failed’ statement before an ‘attempted’ statement.

  • If Result Duration is not used by an Activity Provider and they use the ‘attempted’ statement incorrectly, then it may not be possible to accurately track the start and end of an attempt. The only sensible solution here is either not to report attempt duration for these activities, or to allow your administrators to configure how duration is reported on a per-activity basis.
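And here's the sketch referenced above, covering two of the rules: an ‘attempted’ statement with a zero Result Duration marks the start of a new attempt, and trailing ‘launched’/‘initialized’ statements belong to the new attempt rather than the one being closed. The structure is illustrative, not production-ready.

```typescript
// Splitting a time-ordered statement stream into attempts, per the rules
// above: 'attempted' with Result Duration "PT0S" starts a new attempt, and
// 'launched'/'initialized' statements just before it carry into that new
// attempt instead of closing out the previous one.
interface Stmt {
  verb: { id: string };
  timestamp: string; // ISO 8601
  result?: { duration?: string };
}

const ATTEMPTED = "http://adlnet.gov/expapi/verbs/attempted";
const IGNORED = new Set([
  "http://adlnet.gov/expapi/verbs/launched",
  "http://adlnet.gov/expapi/verbs/initialized",
]);

function splitIntoAttempts(statements: Stmt[]): Stmt[][] {
  const attempts: Stmt[][] = [];
  let current: Stmt[] = [];
  for (const s of statements) {
    if (s.verb.id === ATTEMPTED && s.result?.duration === "PT0S" && current.length) {
      // Move trailing 'launched'/'initialized' statements into the new attempt.
      const carried: Stmt[] = [];
      while (current.length && IGNORED.has(current[current.length - 1].verb.id)) {
        carried.unshift(current.pop()!);
      }
      if (current.length) attempts.push(current); // previous attempt ends here
      current = carried;
    }
    current.push(s);
  }
  if (current.length) attempts.push(current);
  return attempts;
}
```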

As you can see, reporting on limited data from Activity Providers is hard! This complexity can be avoided by Activity Providers sending the data as outlined last week. If they don’t and you really need to report on their data, we can help.

