Available for download is our May edition of the Technology Times newsletter.
In this edition:
This summer, we’re offering two different training workshop series.
We’ll cover how Seesaw operates as a learning management solution and communications platform for students. Inside the classroom, we’ll share best practices for managing Apple Classroom and pushing content to students on iPad.
We’ll survey participants about what they’d like to learn, then explore how to begin app-smashing with our collection of creative apps: drawing, pictures, video, book-making, and more.
We’ll cover apps that support language development, digital reading, and listening centers. Supercharge the Daily 5 with word study, writing, and fluency.
We’ll look at different apps to support math learning and practice.
When I started work in Goochland in 1999, I was handed a newsletter entitled the Technology Times as part of my orientation packet. After leaving the classroom, I kept up that newsletter to showcase changes in technology, to advertise after-school workshops, and to promote effective uses of technology in the classroom.
In the mid-2000s, we changed the format to this blog, since so many of us were then blogging as part of our district-wide initiative. Then e-mails followed. The challenge is always how to put quality information in front of our staff when we seem overloaded with so much information.
So I wanted to bring back the newsletter in an effort to showcase some of the wonderful things that are happening in our division every week. The task is steep, so we settled on a monthly publication. See our first three editions from 2018, attached below. I hope this offers a more leisurely way to consider what’s happening across the division and to celebrate our innovation with new tools.
In thinking about instructional design, and more specifically a systems-derived approach such as the model created by Walter Dick, Lou Carey, and James O. Carey, I wanted to think about how we as educators improve what we do for student learning from one iteration to the next.
There’s something very personal about teaching students. Our assessment of how learning went, which of course reflects upon our teaching, is concerned with the individuals for whom we’ve designed the instruction. The same delivery, materials, and information might produce very different learning outcomes with one group than with another. That’s a variation most educators understand. Each of us is unique, and every student brings a unique set of experiences to school. That’s not to say they are all totally different, but each is unique, whether by a little or by a lot. That’s because, I believe (as a constructivist), knowledge is something we create in our brains. It’s unique to us because it’s built upon our own experiences, from our own perspectives.
In the so-called Dick and Carey model, the authors include a loop to “design and conduct formative evaluation of instruction,” at least in the fifth edition of their book (2001, Addison-Wesley). (To be fair, several editions have come out since then.) So to be clear, that’s not “do formative assessment of student learning” but rather “do formative assessment of your instruction.”
In many ways, teachers conduct this type of analysis by checking to see what stuck with students and what did not. What didn’t they remember, or conceptualize? What couldn’t they work with in the context of a project? As much as we rely upon this to correct faulty instruction, it really isn’t an assessment of the instruction as much as it is an assessment of the intended behaviors. It’s easy to see that something didn’t work. But it takes more research to find out why.
That’s why I considered involving student feedback as part of the instruction cycle in projects. In theory, finding out what students thought about a learning experience sounds like a good idea. Student preferences may be helpful in designing future instruction, but more so, I believe it would be interesting to know whether students experienced their learning in the ways we had designed it.
That said, not all student feedback mechanisms guarantee a change in instructional quality. But that doesn’t mean the act of collecting feedback is worthless; feedback must be reviewed, hopefully understood, and paired with a plan to adjust and adapt in response.
In Rankin’s model of cubic learning, he divides the components of learning into three “dimensions,” and in this example, he examines “content” in depth. I think this dimension is easy to reflect upon, especially so in terms of “delivered” to “created.”
In full disclosure, these are examples I created independently of the author, so I’m applying my own interpretation to these terms. In the first, I’m not just telling you the answers and asking you to write them down; I’m asking you to do “a little work” to see if you understand what themes are in literature. It’s more or less a comprehension check. It’s directed because I’ve already focused you on what you are responsible for knowing, or “filling in.” My direction is a type of scaffold.
To move beyond, toward “discovered,” I might say, “there are three major themes in this book. Identify the three themes and provide examples that support your answers.” It’s still black and white information, but now you’re responsible for digging it all up. It’s evaluation, from Bloom’s taxonomy.
In my “created” example, your essay (paper, thesis, etc.) is a created work. Analysis is a higher-order cognitive skill, and here you are practicing it. The result of that skill is the analysis itself, in the form of a five- or maybe seven-page essay. We might expect a student to work through the stages that precede “created” in multiple rounds of practice before they’d be ready for “creation.” (For instance, in sixth grade the concept of theme might be delivered; in seventh grade, directed; and in freshman English, you write the paper I described.)
Here’s my question: Are students experiencing the level(s) of depth in their learning that you in fact designed? Isn’t the point of the Dick and Carey “instructional formative assessment” to assess the instruction? Generating an intended behavior isn’t a bad check, but are students comfortable at the level we’ve chosen? And what should it mean if they are not? An adjustment should be made, I’d hope we’d surmise. Good feedback might be “this was a boring exercise for me, and I wasn’t challenged.” Or “I struggled with this. I was only successful because you reframed it for me and provided a scaffold.” (Yes, I know I’m betting big that students would talk to us like that, but that’s not my point. My point is the feedback can help us improve instruction, individualize it in some cases, and maybe even provide opportunities to make it personal.)
Asking for feedback on your teaching at the end of the year is too late. Many teachers already know what they think their students will say. We ask very general questions, and the highlights and rough spots are likely smoothed over in such “end of course” questionnaires.
Instead, what if we had three types of surveys?
Consider the third. (And by survey, I don’t mean these all have to be Google Forms. We can survey students by asking them just a few questions.) Things I think we should know are:
Interesting questions. And given more time, I could come up with more.
If a teacher could use feedback loops like this often, there’d be no rationale for an “end of course” survey. Students would know their teacher cares, at least in part because the teacher is responsive to their learning needs and designs instruction that addresses each student’s prior knowledge and preparedness for depth.
While I fully believe in the attempt to provide our students deeper learning experiences—meaning, ones that are relevant, have utility, and can be applied to solving real problems—I also recognize that instruction for deeper learning sometimes takes more time, requires higher cognitive engagement by the student, and in the hands of an inexperienced teacher, might just have more risk in terms of students achieving the intended standards of success.
Dick and Carey start with “assessing needs and identifying goals,” then the parallel steps of “analyzing learners” and “conducting instructional analysis.” In short, I need to know the learners: what they already know, what they need, and what type(s) of support they will require. I need to establish learning goals and then provide supporting experiences to achieve those goals. The two processes go hand-in-hand. It isn’t new. But I wonder how often we actually ask students what they know, and to what degree.
If we want more of our students to experience deeper learning (instead of frustration), we have to ask questions. And to analyze our instruction, we have to ask questions too.
Thanks for entertaining a read about my thoughts on deeper learning… I believe deeper learning helps maximize the potential of learners, but only when it is instructionally appropriate. And that’s why “measuring” the depth of learning isn’t a comment on the quality of a teacher’s performance, but rather a measure of how we are meeting students where they are and prioritizing their needs.
I firmly believe in a very qualitative approach toward describing deeper learning. We can see deeper learning through student interviews and reflections, observing the work being done by students in and outside classrooms, and through the products students produce to demonstrate or apply their learning. And, of course, a project-based approach toward learning often includes a product.
Earlier this summer, I wanted to help demystify deeper learning through a model that looked at different facets (ingredients, themes, components) that define a deeper approach toward learning. I called these Pathways Toward Deeper Learning and have lately wanted to think about how to visualize and quantify these pathways when we see learning in action.
I wanted to show depth in a visual, like going down into a cavern, or going towards the core of the Earth, or… maybe drilling down into a “mountain of knowledge.” Figuring out how to take data and map it to something like that was going to take a lot of time for what I finally admitted was a cute gimmick.
Then Bill Rankin showed me a model he was developing called cubic learning. He’d already begun to think about how to look at learning through some visual means. We liked how his three planes of learning mapped onto how I saw deeper learning.
Then the suggestion was made (by my peer Sean Campbell) to consider a radar plot. I’d thought about this too, but wasn’t sure two dimensions were adequate for showing depth. But if we stop thinking of this as a three-dimensional concept in our minds (“depth” is a play on words here, really), we can think instead about area. That’s when I thought about the paint or ink splotches you might see on a wall (or in an art classroom). Then the question becomes: how big is your paint blob?
If I then take the scores (using a four-point scale) and plot them, I get the outline of my paint blob.
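As a rough sketch of the idea (not a finished tool; the pathway labels, sample scores, and the choice of the shoelace formula for the blob’s area are my own illustration), here is how one might draw and measure a blob in Python with matplotlib:

```python
# Sketch: plot six pathway ratings as a radar "paint blob" and measure it.
# The pathway names and sample scores below are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

pathways = ["Content", "Rigor", "Skills", "Context", "Community", "Tools"]
scores = [3, 2, 3, 4, 2, 3]  # illustrative ratings on the four-point scale

# One axis per pathway, closing the loop back to the first point
angles = np.linspace(0, 2 * np.pi, len(pathways), endpoint=False).tolist()
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.3)  # the "paint blob"
ax.set_xticks(angles[:-1])
ax.set_xticklabels(pathways)
ax.set_ylim(0, 4)

# "How big is your paint blob?" -- polygon area via the shoelace formula
# applied to the Cartesian coordinates of the plotted points.
xs = [v * np.cos(a) for v, a in zip(values, angles)]
ys = [v * np.sin(a) for v, a in zip(values, angles)]
area = 0.5 * abs(sum(xs[i] * ys[i + 1] - xs[i + 1] * ys[i]
                     for i in range(len(pathways))))
print(f"blob area: {area:.2f}")
plt.show()
```

One note on measuring area this way: for the same total of ratings, a balanced blob covers more area than a spiky one, which fits the goal of matching levels across the pathways.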
There are a few goals with this. We typically want to match up our pathways, so that we’re hitting, say, a level 3 in each area across the board. But I also want to look at a lesson in a numerical sense, so beyond the aid of the radar plot, I want an overall metric for the “depth” of the observed learning.
Showing that number by itself is nonsensical unless you see it in relation to a larger scale. But to take multiple learning experiences and compare them by depth, this index value might be interesting.
To compute the index value, I group the content and Depth of Knowledge pathways together as an overall “content” value. This gets multiplied by the sum of the “context” score and the average of the three twenty-first century skills ratings (Mishra’s three groupings). Finally, we multiply that by the sum of the community score and the square root of the technology score. The resulting index ranges from 4 to 448 (see more on the formula, below). If we really wanted to visualize that, we could treat this index score as, say, an area factor for a circle (bubble) plot. More interesting to a teacher might be a series of lessons, all computed this way, with the values presented as a line chart or sparkline.
I will continue to tinker with the ideas behind how to capture depth of learning and how to communicate it. Qualitative data is also important, and most important for teachers, I believe, is to know where their design of learning fits into what they feel students need. These scores don’t speak to good instruction versus bad; they’re a way to conceptualize depth in learning so we can compare one activity, lesson, or unit to another.
Among the pathways, I see pairings. And I wanted these pairings weighted.
To compute “twenty-first century skills,” the evaluator rates, on a scale of 1 (no skills present) to 4 (mastery level), how students are exhibiting foundational skills, meta skills, and humanistic skills. We then average these three ratings to generate a “skills” score.
I balance content and depth of knowledge. The maximum score is an 8.
I unbalance twenty-first century skills with context, generating a top score of a 7.
Then in combining community and tools (think: resources), I believe the wetware outweighs the digital code and hardware by an entire power, so I used a square root. This makes the top tech score a 2, and the top social/community score a 4.
The equation looks like this:
IDS = (content + DOK) * (context + average of skills) * (community + sqrt(technology use))
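Here is a minimal sketch of that computation (the function name, argument names, and example ratings are mine, purely for illustration):

```python
# Hypothetical helper for the index of depth score (IDS); all inputs are
# assumed to be ratings on the four-point scale described above.
from math import sqrt

def ids(content, dok, context, foundational, meta, humanistic,
        community, technology):
    # Average the three twenty-first century skills ratings into one score
    skills = (foundational + meta + humanistic) / 3
    # Content pairs with depth of knowledge, context with skills, and
    # community with the square-rooted technology score
    return (content + dok) * (context + skills) * (community + sqrt(technology))

# Example: a lesson rated 3 on most pathways and 4 on technology use
print(ids(content=3, dok=3, context=3, foundational=3, meta=3,
          humanistic=3, community=3, technology=4))
# (3+3) * (3+3) * (3+2) = 180
```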
This is admittedly an evolution of my thinking, and I’m indebted to Dr. Rankin for ways we can think about modeling or seeing learning across different facets.
I’d like a tool to help us see and measure what we mean by Deeper Learning. There are a lot of definitions for deeper learning, and there are a lot of ways and models for measuring components of teaching. Measuring learning is more difficult, but there are models there too.
I’m not sure whether this is great or not, but our leadership team this summer was inspired by a presentation by Dr. Rankin (at our strategic innovation symposium) where he shared a model for learning he terms “cubic learning.” What was interesting for me was that we never really said much about the “tools.” It’s not a “how are you using your iPad or laptop” model; it’s a look at how we learn in a formalized way.
Bill’s cube helped me see learning as facets. I chose the word “pathways” because I see teachers emphasizing some facets over others. It may be deliberate, or it may be a strength. I tried to distill all these ideas into something that could tell us “how deep is it?”
I hope to start sharing this soon within Goochland and try to refine it more. In the interest of open commentary, I’d invite you to take a look.
Edutopia. (2014). Using Webb’s Depth of Knowledge to Increase Rigor. Accessed from https://www.edutopia.org/blog/webbs-depth-knowledge-increase-rigor-gerald-aungst
Mishra, P., & Mehta, R. (2017). What we educators get wrong about 21st-century learning: Results of a survey. Journal of Digital Learning in Teacher Education, 33(1), 6-19. Accessed from http://www.punyamishra.com/wp-content/uploads/2016/12/Mishra-Mehta-21stCenturyMyths-2016.pdf
Puentedura, R. (n.d.). The SAMR Model: Background and Exemplars. Accessed from http://www.hippasus.com/rrpweblog/archives/2012/08/23/SAMR_BackgroundExemplars.pdf
Rankin, W. (2016, December 9). “Formal” learning. Unfold Learning [weblog]. Accessed from https://unfoldlearning.net/2016/12/09/formal-learning/
William and Flora Hewlett Foundation. (2017, May). Decoding deeper learning in the classroom. Accessed from https://www.hewlett.org/wp-content/uploads/2017/06/DL-guide.pdf
Late in this past school year, a conversation developed with our high school tech coach, Bea Leiderman, around how we might visualize where a lesson “falls” or “sits” in terms of its proximity to a “deeper learning experience.” Could it be done? She soon reminded me that the model she’s been helping her husband, Dr. Bill Rankin, develop includes some different ways to think about the learning experience. Bill’s “cubic learning” idea resonated with me. He in turn came to keynote our Strategic Innovation Symposium. And two days later, at our admin retreat, he presented the formal cubic learning model to our team.
So the “cube” presents three faces: content, context, and community. To these, I added three additional pathways: tools, rigor, and skills. As complex as instruction and learning are as concepts, we wanted a way to talk about the different components of the learning process. With the Pathways, we have a model that can break down that complexity, especially as we move away from asking students to simply recall information and toward experiences throughout the year in which students engage content more deeply.
At this point, we have a Schoology course devoted to Pathways for high school teachers, and we introduced the model to our iPad cohort this summer. Our next steps are to develop a principal walk-through rubric for teacher consults. In addition, student surveys have been introduced to capture student reflection on how they felt they were learning along each of the six pathways.
We hope this model helps everyone see where teachers have designed student learning to be, and ways to differentiate to meet student needs.