I firmly believe in a largely qualitative approach to describing deeper learning. We can see deeper learning through student interviews and reflections, by observing the work students do in and outside classrooms, and through the products students create to demonstrate or apply their learning. And, of course, a project-based approach toward learning often includes a product.
Earlier this summer, I wanted to help demystify deeper learning through a model that looked at different facets (ingredients, themes, components) that define a deeper approach toward learning. I called these Pathways Toward Deeper Learning and have lately wanted to think about how to visualize and quantify these pathways when we see learning in action.
I wanted to show depth in a visual, like going down into a cavern, or going towards the core of the Earth, or… maybe drilling down into a “mountain of knowledge.” Figuring out how to take data and map it to something like that was going to take a lot of time for what I finally admitted was a cute gimmick.
Then, in talking with Bill Rankin, he showed me a model he was developing called cubic learning. He'd already begun to think about how to look at learning through visual means, and we liked how his three planes of learning mapped onto how I saw deeper learning.
Then the suggestion was made (by my peer Sean Campbell) to consider a radar plot. I'd thought about this too, but wasn't sure two dimensions were adequate for showing depth. But if we stop treating depth as a literal third dimension in our mind (a play on words, really) and instead think about area, the problem gets simpler. That's when I thought about the paint or ink splotches you might see on a wall (or in an art classroom). Then the question becomes: how big is your paint blob?
If I then take the scores (using a four-point scale) and plot them, I get the outline of my paint blob.
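To make that concrete, here's a minimal sketch of the paint-blob idea in Python with matplotlib. The pathway labels and scores are hypothetical placeholders, not real observation data, and the shoelace calculation at the end is just one way to answer "how big is your paint blob?"

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical pathway labels and ratings on the four-point scale.
pathways = ["Content", "DOK", "Skills", "Context", "Community", "Technology"]
scores = [3, 4, 2, 3, 4, 2]

# One spoke per pathway; repeat the first point to close the polygon.
angles = np.linspace(0, 2 * np.pi, len(pathways), endpoint=False).tolist()
angles += angles[:1]
closed = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, closed, color="steelblue")
ax.fill(angles, closed, color="steelblue", alpha=0.4)  # the "paint blob"
ax.set_xticks(angles[:-1])
ax.set_xticklabels(pathways)
ax.set_ylim(0, 4)
plt.show()

# "How big is your paint blob?" The shoelace formula on the polygon's
# Cartesian vertices gives the blob's area.
xs = [r * np.cos(a) for r, a in zip(closed, angles)]
ys = [r * np.sin(a) for r, a in zip(closed, angles)]
area = 0.5 * abs(sum(xs[i] * ys[i + 1] - xs[i + 1] * ys[i]
                     for i in range(len(pathways))))
print(f"blob area: {area:.2f}")
```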
There are a few goals here. We typically want our pathways to match up, so that a lesson hitting a level 3 in one area sits at level 3 across the board. But I also want to look at a lesson numerically: beyond the aid of the radar plot, I want an overall metric for the “depth” of the observed learning.
Showing that number by itself is nonsensical unless you see it in relation to a larger scale. But for taking multiple learning experiences and comparing them by depth, this index value might be interesting.
To compute the index value, I group the content and Depth of Knowledge pathways together as an overall “content” value. That gets multiplied by the sum of the “context” score and the average of the three twenty-first century skills ratings across Mishra’s three groupings. Finally, we multiply that by the product of the community score and the square root of the technology score. The resulting index ranges from 4 to 448 (see more on the formula below). If we really wanted to visualize that, we could treat this index score as, say, an area factor for a circle (bubble) plot. More interesting to a teacher, perhaps, would be a series of lessons, all scored this way, with the values presented as a line chart or sparkline.
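As a rough illustration of that last idea, here's a sketch of a sparkline-style chart in matplotlib; the index values are invented for demonstration, not computed from real lessons.

```python
import matplotlib.pyplot as plt

# Invented depth-index values for five lessons in a unit.
index_values = [48, 96, 72, 180, 224]

# A small, nearly axis-free line chart reads like a sparkline.
fig, ax = plt.subplots(figsize=(3, 0.8))
ax.plot(range(1, len(index_values) + 1), index_values, color="gray")
ax.set_xticks(range(1, len(index_values) + 1))
ax.set_yticks([])
for spine in ax.spines.values():
    spine.set_visible(False)
plt.show()
```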
I will continue to tinker with how to capture depth of learning and how to communicate it. Qualitative data remains important, and most important for teachers, I believe, is knowing where their design of learning fits into what they feel students need. These scores don't speak to good instruction versus bad; they're a way to conceptualize depth in learning so we can compare one activity, lesson, or unit to another.
Instructional Depth Index
Among the pathways, I see pairings. And I wanted these pairings weighted.
To compute “twenty-first century skills,” the evaluator rates, on a scale of 1 (no skills present) to 4 (mastery level), how students are exhibiting foundational skills, meta skills, and humanistic skills. We then average these three ratings to generate a “skills” score.
I balance content and depth of knowledge: each is rated 1 to 4, so the maximum combined score is 8.
I unbalance twenty-first century skills with context, generating a top score of 7 (with the skills average topping out at 4, that puts context on a three-point scale).
Then, in combining community and tools (think: resources), I believe the wetware outweighs the digital code and hardware by an entire power, so I take the square root of the technology score before multiplying it by the community score. This makes the top tech contribution a 2 (the square root of 4) and the top social/community score a 4, for a combined maximum of 8.
The equation looks like this:
IDS = (content + DOK) * (context + average of skills) * (community * sqrt(technology use))
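Here's a minimal sketch of that equation as a Python function. The parameter names are mine, and I'm reading the top combined score of 7 as context being rated on a three-point scale; the endpoint checks mirror the 4-to-448 range mentioned above.

```python
import math

def instructional_depth_score(content, dok, context, skills, community, technology):
    """Combine pathway ratings into one depth index.

    content, dok, community, technology: 1-4 scale.
    context: 1-3 scale (so context + average skills tops out at 7).
    skills: the three twenty-first century skills ratings
    (foundational, meta, humanistic), each 1-4, which get averaged.
    """
    skills_avg = sum(skills) / len(skills)
    return (content + dok) * (context + skills_avg) * (community * math.sqrt(technology))

# Endpoints of the scale: all-minimum and all-maximum ratings.
print(instructional_depth_score(1, 1, 1, [1, 1, 1], 1, 1))  # 4.0
print(instructional_depth_score(4, 4, 3, [4, 4, 4], 4, 4))  # 448.0
```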
This is admittedly an evolution in my thinking, and I'm indebted to Dr. Rankin for ways we can think about modeling or seeing learning across its different facets.