
Solving Personalized Learning’s Data Crisis In 4 Steps

Caleb Shull

Former Copywriter

Educators have always wanted to provide learners with the best possible learning experiences. That means understanding how different people learn and giving each of them the content and delivery methods that suit them best. That's really all that personalized learning is.

As we enter a period of rapid iteration and development in the software world, with new and increasingly powerful AI tools coming online, it's looking more and more possible to deliver highly personalized content at scale using AI. But there's one massive obstacle standing between most organizations and that kind of personalized learning: data management.

Let’s discuss why data is so critical to this application, and what training teams can do to get ready for a data-driven personalized learning program.

Delivering Personalization At Scale

It’s a bit of an oxymoron, isn’t it? How do we create a system that can give an unlimited number of users an experience tailored to their personal needs?

The answer is by creating a completely standardized process that can be applied to any learner. To scale, the business logic of personalization has to be almost completely impersonal.

The reason personalized learning has always been limited is that traditional modalities like ILT and mentorship can only personalize through time-consuming human intervention. There just aren’t enough instructor and mentor man-hours available at most organizations to deliver that to every learner.

AI in corporate learning, however, offers a solution: a repeatable, instantly deliverable decision-making process that can personalize the learner's experience. But to do so, those AIs need a tremendous amount of data about two very different things.

Data About Learners

Learners frequently express their own preferences about how they learn. Think of how many people self-identify as visual or auditory learners, for example. That kind of self-identification is the start of building a profile of a learner so that an AI built for corporate learning can deliver a personalized learning experience to them.

What kind of content has the learner already looked at, and what kind of engagement signals did they generate? Did they re-watch a video within a course? Did they click on links to view further reading materials? Did they view the course library, and if they did, what kinds of things did they search for?

Personalizing the learner experience means gathering as much data as possible about the learner’s activities and preferences so that they can be served content that’s relevant, in a format that will keep them engaged.
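
To make that concrete, here is a minimal sketch of what capturing those engagement signals as structured events might look like. The field names and verbs are illustrative rather than any particular LMS's schema (standards such as xAPI define their own actor/verb/object format for exactly this kind of activity data).

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LearnerActivityEvent:
    """One engagement signal. Field names are illustrative, not a specific LMS schema."""
    learner_id: str
    verb: str        # e.g. "rewatched", "clicked_link", "searched_library"
    object_id: str   # the course, video, link, or search query involved
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# The signals described above, captured as structured data an AI could use later:
events = [
    LearnerActivityEvent("learner-1042", "rewatched", "video:negotiation-basics-03"),
    LearnerActivityEvent("learner-1042", "clicked_link", "reading:pricing-whitepaper"),
    LearnerActivityEvent("learner-1042", "searched_library", "query:objection handling"),
]

for event in events:
    print(event.learner_id, event.verb, event.object_id, event.timestamp.isoformat())
```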

Data About Business Objectives

However, learner engagement is only a secondary goal. We want learners to be engaged with content that helps them develop skills and knowledge in service of business objectives. Perhaps your sales representatives find courses about software engineering very interesting and engaging, but unless you are trying to upskill them into engineers, that engagement is not providing the business any value.

An AI providing personalized learning needs to be given business directions to push learners towards content that is relevant to their job titles, their managers’ requests and recommendations, the company’s talent management strategy, etc. Balancing these priorities with the learner’s preferences is critical, and making that decision requires an AI with unfettered access to as much data about the situation as possible.
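
As a rough illustration of what that balancing act can look like in practice, here is a minimal sketch that ranks content by blending a learner-preference signal with a business-relevance signal, and skips anything the learner has already mastered. The weights, fields, and catalogue entries are all hypothetical; a real system would configure or learn these values from your data.

```python
# Hypothetical scoring sketch: blend learner preference with business relevance.

def personalization_score(item, learner, business_weight=0.6):
    """Blend how well an item fits the learner with how much the business values it."""
    if item["skill"] in learner["known_skills"]:
        return 0.0  # skills-inventory check: don't resurface mastered content
    preference_fit = 1.0 if item["format"] in learner["preferred_formats"] else 0.3
    business_relevance = item["relevance_by_role"].get(learner["role"], 0.0)
    return (1 - business_weight) * preference_fit + business_weight * business_relevance

learner = {
    "role": "sales_rep",
    "preferred_formats": {"video"},
    "known_skills": {"crm-basics"},
}
catalogue = [
    {"id": "c1", "skill": "negotiation", "format": "video",
     "relevance_by_role": {"sales_rep": 0.9}},
    {"id": "c2", "skill": "python-intro", "format": "video",
     "relevance_by_role": {"sales_rep": 0.1}},
    {"id": "c3", "skill": "crm-basics", "format": "text",
     "relevance_by_role": {"sales_rep": 0.8}},
]

ranked = sorted(catalogue, key=lambda item: personalization_score(item, learner), reverse=True)
print([item["id"] for item in ranked])  # c1 first: role-relevant and in a preferred format
```

The weighting here is the whole point: an engaging but irrelevant course (like the engineering example above) scores low even when it matches the learner's preferred format.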

Data Problems In Current Tech Stacks

The kind of detailed data on learners and business objectives needed to personalize learning to a high degree just isn't accessible in most learning tech stacks – if it's even in the system at all. Do you assess learners for whether they would prefer audio, visual, or experiential content? Do you have a skills inventory that would let an AI know not to recommend content that a learner is already familiar with? Can you link certificate expirations to your content library to push learners towards renewing their credentials?

For almost all training organizations, the answer to these questions is probably no. Training tech stacks are typically complex and poorly-integrated. Even if you’re collecting the right data, it’s likely spread out across the 9-12 different systems that a typical training team is using. And often, plenty of the data you need is held by other departments, not linked to the training function at all.

So, how can training teams start getting ready for a future driven by personalized learning that will require access to far more data than current tech stacks can support?

4 Steps To Prepare For AI-Powered Personalized Learning

1.) Assess Your Current Situation

No two tech stacks are the same, and no two training teams have exactly the same operational requirements and constraints. This is often the root of the problem, in fact. When teams try to use fairly standardized LMS software to handle their widely differing needs, they end up with complex, difficult-to-navigate, and poorly integrated software infrastructure.

So think about what a wish-list of data for a personalized learning solution might look like to you. Then take the time to dive into your systems and assess: where is that data right now? Are we collecting it? Is it spread out across multiple systems, for example partially in an LMS and partially in your HRIS? Are those systems integrated and accessible so that you could pull data out of them easily and automatically?
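
One lightweight way to capture that assessment as you go is a simple data inventory. The rows and system names below are just examples, not a prescribed template.

```python
# Hypothetical wish-list inventory: what data exists, where it lives, and
# whether automation can actually reach it.
data_wishlist = [
    {"data": "content format preferences", "exists": False, "system": None, "api_access": False},
    {"data": "skills inventory", "exists": True, "system": "HRIS", "api_access": False},
    {"data": "certificate expiry dates", "exists": True, "system": "credential tracker", "api_access": True},
]

# Anything that isn't both collected and automatically accessible is a gap to close.
gaps = [row["data"] for row in data_wishlist if not (row["exists"] and row["api_access"])]
print("Not yet usable for automation:", gaps)
```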

Don’t underestimate the level of detail that can be useful here. One Administrate client, Ping Identity, made several fundamental changes to their data architecture so that they could collect and process learners’ dietary information in order to cater learning events to their preferences – and that resulted in higher learner satisfaction and reduced waste for them.

2.) Centralize and Standardize Your Data Management

You can’t run powerful automation tools at scale if doing so requires a human manually shifting data around in the system. And it’s substantially more difficult to create that kind of automation when data isn’t centralized. Ideally, you want a Single Source of Truth for your entire training operation, to the extent that’s possible.

By reorienting your software to pull from a single central source of data, such as a data lake, you can not only eliminate costly inefficiencies and duplications within your datasets, but also make your tech stack substantially easier to automate and manage.

This step is also highly dependent on your situation. For example, sensitive data such as payment credentials probably shouldn’t be made easily accessible within a data lake. This is why it’s critical to first assess your needs before starting to make changes to your software.
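
As a rough sketch of what the centralization step involves, the snippet below merges learner records from two stand-in systems into one central table. SQLite stands in for a data lake or warehouse here, every name is hypothetical, and sensitive fields (such as payment details) are deliberately left out of the central copy.

```python
import sqlite3

lms_rows = [   # stand-in for an LMS export
    {"employee_id": "e-101", "courses_completed": "negotiation-basics"},
    {"employee_id": "e-102", "courses_completed": "gdpr-refresher"},
]
hris_rows = [  # stand-in for an HRIS export
    {"employee_id": "e-101", "role": "sales_rep"},
    {"employee_id": "e-102", "role": "support_agent"},
]
hris_by_id = {row["employee_id"]: row for row in hris_rows}

conn = sqlite3.connect(":memory:")  # swap for your real central store in practice
conn.execute("""CREATE TABLE learner_profile
                (employee_id TEXT PRIMARY KEY, role TEXT, courses_completed TEXT)""")

# Join the two sources on a shared key and land them in one place.
for row in lms_rows:
    conn.execute(
        "INSERT OR REPLACE INTO learner_profile VALUES (?, ?, ?)",
        (row["employee_id"],
         hris_by_id.get(row["employee_id"], {}).get("role"),
         row["courses_completed"]),
    )
conn.commit()
print(conn.execute("SELECT * FROM learner_profile").fetchall())
```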

3.) Assess Employee Needs

Once you have your own systems in a workable place, take stock of your employees’ needs. What kind of personalization is going to serve them best? This is where that wish-list of data from the first step comes into play.

Do you want to serve them primarily content that will help them keep their credentials updated? Then you’ll need data about their credentials, of course, and you’ll need automation and AI that utilizes that data. Your personalization will look very different if your goal is primarily to upskill employees and push them towards content that will increase their qualifications.
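
For the credential-driven case, the logic can be surprisingly simple once the data is accessible. The sketch below flags certificates expiring within a set window and maps them to renewal content; the certificate names, dates, and content mapping are all hypothetical.

```python
from datetime import date, timedelta

# Hypothetical mapping from credential to its renewal course.
RENEWAL_CONTENT = {
    "first-aid": "course:first-aid-recertification",
    "forklift": "course:forklift-refresher",
}

def renewal_recommendations(credentials, today=None, window_days=60):
    """Return renewal courses for any credential expiring within the window."""
    today = today or date.today()
    cutoff = today + timedelta(days=window_days)
    return [
        RENEWAL_CONTENT[cred["name"]]
        for cred in credentials
        if cred["expires"] <= cutoff and cred["name"] in RENEWAL_CONTENT
    ]

learner_credentials = [
    {"name": "first-aid", "expires": date.today() + timedelta(days=30)},
    {"name": "forklift", "expires": date.today() + timedelta(days=300)},
]
print(renewal_recommendations(learner_credentials))  # only the first-aid renewal is due
```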

If part of your personalization drive will include creating content in different modalities to cater to different types of learners, this is where you should start to plan those different types of content out.

4.) Put Your Data To Work

Implementing personalized learning will look different for every team, depending on your needs and the needs of the business. Automations, especially learner-facing automations that respond dynamically to learner inputs and actions, are key. Ensuring that these automations are deeply integrated into the rest of your tech stack is also essential to maximizing their value.
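
As a simple illustration, a learner-facing automation is ultimately a set of rules that react to learner events with a personalized next step. The event types and responses below are illustrative; in practice these triggers would live in your platform's automation layer and draw on the centralized data described earlier.

```python
# Hypothetical event-to-action rules for a learner-facing automation.

def next_action(event):
    """Map a learner event to a follow-up action."""
    if event["type"] == "course_completed":
        return {"action": "recommend", "content": f"next-in-path:{event['course_id']}"}
    if event["type"] == "assessment_failed":
        return {"action": "recommend", "content": f"remedial:{event['course_id']}"}
    if event["type"] == "certificate_expiring":
        return {"action": "enroll", "content": f"renewal:{event['credential']}"}
    return {"action": "none"}

print(next_action({"type": "assessment_failed", "course_id": "negotiation-basics"}))
print(next_action({"type": "certificate_expiring", "credential": "first-aid"}))
```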

Personalization of learning will depend on the development of powerful artificial intelligence for corporate learning applications. As these tools become more powerful, their potential to create highly customized learning experiences will only increase. However, they all depend on having the data they need to function – and preparing that data is work that doesn’t need to wait for these tools to arrive. Starting to improve your data infrastructure today is a critical first step towards flexibility and capability tomorrow.

Support Personalized Learning With Administrate

If advanced capabilities like delivering personalized learning are on your horizon, you need the architecture and the infrastructure to support them. Powerful tools just don’t work on shaky foundations. Administrate has years of experience building data-driven solutions for enterprise training teams that help them unlock the full potential of their data. Visit our KPI Reporting page to learn more about our platform’s data management capabilities.

Caleb Shull was a Copywriter at Administrate.
