Available for download is our May edition of the Technology Times newsletter.
In this edition:
In thinking about instructional design, and more specifically a systems-derived approach such as the model created by Walter Dick, Lou Carey, and James Carey, I wanted to consider how we as educators improve student learning from one iteration to the next.
There’s something very personal about teaching students. Our assessment of how learning went (which reflects, of course, our teaching) concerns the individuals for whom we’ve designed the instruction. The same delivery, materials, and information might produce very different learning outcomes with one group than with another. That’s a variation most educators understand. Each of us is unique, and every student brings a unique set of experiences with them to school. That’s not to say students are all totally different, but each is, whether by a little or a lot, unique. That’s because, I believe (as a constructivist), knowledge is something we construct in our minds. It’s unique to us because it’s built upon our own experiences, from our own perspectives.
The Dick and Carey model includes a loop to “design and conduct formative evaluation of instruction,” at least in the fifth edition of their book (2001, Addison-Wesley). (To be fair, several editions have come out since then.) To be clear, that’s not “do formative assessment of student learning” but rather “do formative assessment of your instruction.”
In many ways, teachers conduct this type of analysis by checking to see what stuck with students and what did not. What didn’t they remember, or conceptualize? What couldn’t they work with in the context of a project? As much as we rely upon this to correct faulty instruction, it really isn’t an assessment of the instruction as much as it is an assessment of the intended behaviors. It’s easy to see that something didn’t work. But it takes more research to find out why.
That’s why I considered involving student feedback as part of the instruction cycle in projects. In theory, finding out what students thought about a learning experience sounds like a good idea. Student preferences may be helpful in designing future instruction, but more importantly, it would be interesting to know whether students experienced their learning in the ways we had designed it.
That said, not all student feedback mechanisms guarantee a change in instructional quality. But that doesn’t mean the act of collecting feedback is fruitless; feedback must be reviewed, hopefully understood, and paired with a plan to adjust and adapt in response.
In his model of cubic learning, Rankin divides the components of learning into three “dimensions,” and in this example he examines “content” in depth. I think this dimension is easy to reflect upon, especially across the range from “delivered” to “created.”
In full disclosure, these are examples I created independently of the author, so I’m applying my own interpretation to these terms. In the first, I’m not just telling you the answers and asking you to write them down; I’m asking you to do “a little work” to see if you understand what themes are in literature. It’s more or less a comprehension check. It’s directed because I’ve already focused you on what you were responsible for knowing, or “filling in.” My direction is a type of scaffold.
To move beyond, toward “discovered,” I might say, “There are three major themes in this book. Identify the three themes and provide examples that support your answers.” It’s still black-and-white information, but now you’re responsible for digging it all up. In Bloom’s taxonomy, that’s evaluation.
In my “created” example, your essay (paper, thesis, etc.) is a created work. Analysis is a higher cognitive skill, but you are practicing a skill, and the result of that skill is the analysis itself, in the form of a five- or maybe seven-page essay. We might expect a student to pass through the stages that precede “created,” with multiple rounds of practice, before they’d be ready for “creation.” (For instance, in sixth grade the concept of themes might be delivered; in seventh grade, directed; and in freshman English, you write the paper I described.)
Here’s my question: Are students experiencing the level(s) of depth in their learning that you in fact designed? Isn’t the point of the Dick and Carey “instructional formative assessment” to assess the instruction? Generating an intended behavior isn’t a bad check, but are students comfortable at the level we’ve chosen? And what should it mean if they are not? An adjustment should be made, I’d hope we’d surmise. Good feedback might be “this was a boring exercise for me, and I wasn’t challenged.” Or “I struggled with this. I was only successful because you reframed it for me and provided a scaffold.” (Yes, I know I’m betting big that students would talk to us like that, but that’s not my point. My point is the feedback can help us improve instruction, individualize it in some cases, and maybe even provide opportunities to make it personal.)
Asking for feedback on your teaching at the end of the year is too late. Many teachers already know what they think their students will say. We ask very general questions, and highlights and rough spots alike are likely smoothed over in such “end of course” questionnaires.
Instead, what if we had three types of surveys?
Consider the third. (And by survey, I don’t mean these all have to be a Google Form. We can survey students by asking them just a few questions.) Things I think we should know are:
Interesting questions. And given more time, I could come up with more.
If a teacher could use feedback loops like this often, there’d be no rationale for an “end of course” survey. Students would know their teacher cares, at least in part because they are responsive to their learning needs, and they design instruction that addresses a student’s prior knowledge and preparedness for depth.
While I fully believe in the attempt to provide our students deeper learning experiences—meaning, ones that are relevant, have utility, and can be applied to solving real problems—I also recognize that instruction for deeper learning sometimes takes more time, requires higher cognitive engagement by the student, and in the hands of an inexperienced teacher, might just have more risk in terms of students achieving the intended standards of success.
Dick and Carey start with “assessing needs and identifying goals,” then the parallel steps of “analyzing learners” and “conducting instructional analysis.” In short, I need to know the learners: what they already know, what they need, and what type(s) of support they will require. I need to establish learning goals and then provide supporting experiences to achieve those goals. The two processes go hand in hand. It isn’t new. But I wonder how often we actually ask students what they know, and to what degree.
If we want more of our students to experience deeper learning (instead of frustration), we have to ask questions. And to analyze our instruction, we have to ask questions too.
Thanks for entertaining my thoughts on deeper learning. I believe deeper learning helps maximize the potential of learners, but only when it is instructionally appropriate. That’s why “measuring” the depth of learning isn’t a comment on the quality of a teacher’s performance; rather, it’s a measure of how well we are meeting students where they are and prioritizing their needs.
I firmly believe in a very qualitative approach toward describing deeper learning. We can see deeper learning through student interviews and reflections, observing the work being done by students in and outside classrooms, and through the products students produce to demonstrate or apply their learning. And, of course, a project-based approach toward learning often includes a product.
Earlier this summer, I wanted to help demystify deeper learning through a model that looked at different facets (ingredients, themes, components) that define a deeper approach toward learning. I called these Pathways Toward Deeper Learning and have lately wanted to think about how to visualize and quantify these pathways when we see learning in action.
I wanted to show depth in a visual, like going down into a cavern, or going towards the core of the Earth, or… maybe drilling down into a “mountain of knowledge.” Figuring out how to take data and map it to something like that was going to take a lot of time for what I finally admitted was a cute gimmick.
Then, in talking with Bill Rankin, he showed me a model he was developing called cubic learning. He’d already begun to think about how to look at learning through visual means. We linked the three planes of learning to how I saw deeper learning.
Then my peer Sean Campbell suggested I consider a radar plot. I’d thought about this too, but wasn’t sure two dimensions were adequate for showing depth. But if we stop thinking of this as a three-dimensional concept in our mind (a play on words, really), we can think instead about area. That’s when I thought about the paint or ink splotches you might see on a wall (or in an art classroom). The question then becomes: how big is your paint blob?
If I then take the scores (using a four-point scale) and plot them, I get the outline of my paint blob.
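The area of that “paint blob” can be computed directly from the scores: the radar plot traces a polygon whose area is the sum of the triangles between consecutive spokes. The sketch below is my own illustration, not a formula from the model; the six pathway scores are hypothetical, and I assume the spokes are spaced at equal angles.

```python
import math

# Hypothetical pathway scores on the four-point scale
# (the names and values here are illustrative, not official).
scores = {"content": 3, "DOK": 2, "skills": 3,
          "context": 4, "community": 3, "technology": 2}

def radar_area(values):
    """Area of the polygon a radar plot traces: the sum of the
    triangles between consecutive spokes, each with area
    1/2 * r1 * r2 * sin(angle between spokes)."""
    n = len(values)
    angle = 2 * math.pi / n
    return sum(0.5 * values[i] * values[(i + 1) % n] * math.sin(angle)
               for i in range(n))

print(round(radar_area(list(scores.values())), 2))  # 20.78
```

One design note: because the area multiplies neighboring scores, a single low pathway shrinks the blob more than an average of the scores would suggest, which fits the intuition that depth is uneven when one pathway lags.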
There are a few goals with this. We typically want our pathways to match up, so that we’re reaching, say, a level 3 in each area across the board. But I also want to look at a lesson numerically, so beyond the aid of the radar plot, I want an overall metric for the “depth” of the observed learning.
By itself that number is nonsensical, unless you see it in relation to a larger scale. But to take multiple learning experiences and compare them by depth, this index value might be interesting.
To compute the index value, I group the content and Depth of Knowledge pathways together as an overall “Content” value. This gets multiplied by the sum of the “context” score and the average of the three twenty-first-century skills ratings (across Mishra’s three groupings). Finally, we multiply that by the sum of the community score and the square root of the technology score. The resulting range of results goes from 4 to 448 (see more on the formula below). If we really wanted to visualize that, we could treat this index score as, say, an area factor for a circle (bubble) plot. More interesting to a teacher might be a series of lessons, all computed this way, with the values presented as a line chart or sparkline.
I will continue to tinker with the ideas behind how to capture depth of learning and how to communicate it. I believe qualitative data is also important, and most important for teachers, I believe, is to know where their design of learning fits into what they feel students need. Generating these scores doesn’t speak to good instruction versus bad. It’s a way to conceptualize depth in learning, to compare one activity, lesson, or unit to another.
Among the pathways, I see pairings. And I wanted these pairings weighted.
To compute “twenty-first century skills,” the evaluator rates, on a scale of 1 (no skills present) to 4 (mastery level), on how students are exhibiting foundational skills, meta skills, and humanistic skills. Then we average these three ratings to generate a “skills” score.
I balance content and depth of knowledge. The maximum score is an 8.
I unbalance twenty-first-century skills with context, generating a top score of 7.
Then, in combining community and tools (think: resources), I believe the wetware outweighs the digital code and hardware by an entire power, so I use a square root. This makes the top technology score 2 and the top social/community score 4.
The equation looks like this:
IDS = (content + DOK) * (context + average of skills) * (community + sqrt(technology use))
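The equation translates directly into code. This is a minimal sketch: the function name, argument names, and the assumption that every rating runs from 1 to 4 are mine, not part of the original model.

```python
import math

def ids(content, dok, context, foundational, meta, humanistic,
        community, technology):
    """Index-of-depth score, per the formula in the post:
    (content + DOK) * (context + skills average) * (community + sqrt(technology)).
    All ratings are assumed to be on a 1-4 scale."""
    skills = (foundational + meta + humanistic) / 3  # average of the three skills ratings
    return (content + dok) * (context + skills) * (community + math.sqrt(technology))

# All ratings at the four-point maximum:
print(ids(4, 4, 4, 4, 4, 4, 4, 4))  # 384.0
```

Note that under this uniform 1-to-4 assumption the computed extremes are 8 (all ones) and 384 (all fours); since the post cites a range of 4 to 448, the intended scales for some ratings may differ from my assumption.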
This is admittedly an evolution of my thinking about this and I’m indebted to Dr. Rankin for ways we can think about modeling or seeing learning among different facets.
I’d like a tool to help us see and measure what we mean by Deeper Learning. There are a lot of definitions for deeper learning, and there are a lot of ways and models for measuring components of teaching. Learning is more difficult, but there are models there too.
I’m not sure whether this is great or not, but our leadership team this summer was inspired by a presentation by Dr. Rankin (at our strategic innovation symposium) where he shared his model for learning, which he terms “cubic learning.” What was interesting to me was that we never really said much about the “tools.” It’s not a “how are you using your iPad or laptop” model; it’s a look at how we learn in a formalized way.
Bill’s cube helped me see learning as facets. I chose the word “pathways” because I see teachers emphasizing some things over others; it may be deliberate, or it may be a strength. I tried to pull all these ideas into something that could tell us “how deep is it?”
I hope to start sharing this soon within Goochland and try to refine it more. In the interest of open commentary, I’d invite you to take a look.
Edutopia (2014). Using Webb’s Depth of Knowledge to Increase Rigor. Accessed from https://www.edutopia.org/blog/webbs-depth-knowledge-increase-rigor-gerald-aungst
Mishra, P., Mehta, R. (2017). What we educators get wrong about 21st-century learning: results of a survey. Journal of Digital Learning in Teacher Education, 33:1, 6-19. Accessed from http://www.punyamishra.com/wp-content/uploads/2016/12/Mishra-Mehta-21stCenturyMyths-2016.pdf
Puentedura, R. (n.d.). The SAMR Model: Background and Exemplars. Accessed from http://www.hippasus.com/rrpweblog/archives/2012/08/23/SAMR_BackgroundExemplars.pdf
Rankin, W. (2016, December 9). “Formal” learning. unfolded learning [weblog]. Accessed from: https://unfoldlearning.net/2016/12/09/formal-learning/
William and Flora Hewlett Foundation. (2017, May). Decoding deeper learning in the classroom. Accessed from: https://www.hewlett.org/wp-content/uploads/2017/06/DL-guide.pdf
In 2000 I had the opportunity to attend my first VSTE Conference, thanks to the tech leaders here in Goochland who allowed a new teacher to immerse himself in ideas from educators around the state. It was the start of a tradition: I’ve attended every VSTE conference since. For two terms I served on its board of directors. Later, in 2002, I began representing Goochland at a metro-Richmond meeting of the Greater Richmond Area Educational Technology Consortium (GRAETC). I was an active member of that organization for 14 years.
I was so proud to see us affiliated with both organizations in this morning’s broadcast by VSTE of Digital Learning Day 2017. Three of Mr. Frago’s students appeared live with Dr. Richardson to talk about the Scrum process used in their class to facilitate project-based learning. Ms. Leiderman, our secondary technology coach, helped facilitate the conversation and made the student interview possible.
Scrum is well-aligned with some of the ideas collected by Canadian educator and author George Couros about learning. In his book The Innovator’s Mindset, he juxtaposes traditional ideas about school with ideas about learning. Here are some examples:
I am passionate that there are great examples each and every day where technology is empowering and amplifying learning here in Goochland. On this #DLDay 2017, I’m so proud of our students and our teachers being featured on this webcast for the benefit of others across the Commonwealth and the world.
For more on Scrum, check out our teacher’s Scrum blog.
Scrum is a method by which software developers work together on efficient teams to rapidly collaborate on application development. Built around the idea that the effort of many can trump the efficiency of just one person, Scrum (the process itself is called scrumming) helps organizations that employ the technique maximize the potential of their team members.
So why did the International Society for Technology in Education (ISTE) choose to feature an article highlighting the work of educators Joe Beasley (GES) and James Frago (GMS) in its quarterly publication, entrsekt? Beasley and Frago began using Scrum this past school year (first at GES in Beasley’s classroom, and later at GMS, when Frago became a long-term substitute teacher) and really noticed that the process or method for working together was helping their students.
In the picture above, Beasley’s students at GES are using the Scrum technique, using a shared file folder and sticky notes (called a Scrum board) to organize their efforts at a building project in Minecraft EDU as part of a unit on literature they’re reading in language arts.
The magazine is distributed to ISTE’s members all over the world. The publication of the article highlighting their experience using Scrum in the classroom debuted at this year’s ISTE conference in Denver. I am so proud of Joe and Jim for trying this in our schools and finding success in showing kids how to work together effectively towards common goals. Watching kids using Scrum in the classroom is exciting, because amid peaks of chaos that can result from a number of simultaneous discussions taking place among each team, we see students truly engaged with their tasks, communicating effectively, and developing their skills and knowledge in a very learner-centered modality. It is especially exciting to see our students featured in this article, and to have Goochland County Public Schools represented on the world stage with educational technology.
My thanks go out to Bea Leiderman, our secondary ITRT, who has taken a special interest in Scrum and helped make possible the opportunity at ISTE for Joe, Bea, and Jim to present the technique and its outcomes to an engaged audience. We hope a free version of the magazine is published soon for all of our local stakeholders to read.
I will be presenting at the 30th Annual VSTE Conference in Roanoke, Virginia. My presentation title is On the Road to Deeper Learning, and it will focus upon the vision behind our 1:1 program.