
John Hendron

Director of Innovation and Strategy


Formative Evaluation of… Instruction

by John Hendron · Oct 26, 2017

In thinking about instructional design, and more specifically a systems-derived approach such as the model created by Walter Dick, Lou Carey, and James O. Carey, I wanted to think about how we as educators improve what we do for student learning from one iteration to the next.

[Image: “How to Eat a Lobster” placemat]

There’s something very personal about teaching students. Our assessment of how learning went, which of course reflects on our teaching, is concerned with the individuals for whom we’ve designed the instruction. The same delivery, materials, and information might produce very different learning outcomes with one group than with another. That’s a variation most educators understand. Each of us is unique, and every student brings a unique set of experiences with them to school. That’s not to say students are all totally different, but each is unique, whether by a little or a lot. That’s because, I believe (as a constructivist), knowledge is something we create in our brains. It’s unique to us because it’s built upon our own experiences, from our own perspectives.

The so-called Dick and Carey model includes a loop to “design and conduct formative evaluation of instruction,” at least in the fifth edition of their book (2001, Addison-Wesley). (To be fair, several editions have come out since then.) So to be clear, that’s not “do formative assessment of student learning” but rather “do formative assessment of your instruction.”

In many ways, teachers conduct this type of analysis by checking to see what stuck with students and what did not. What didn’t they remember, or conceptualize? What couldn’t they work with in the context of a project? As much as we rely upon this to correct faulty instruction, it really isn’t an assessment of the instruction as much as it is an assessment of the intended behaviors. It’s easy to see that something didn’t work. But it takes more research to find out why.

That’s why I considered involving student feedback as part of the instructional cycle in projects. Finding out what students thought about a learning experience sounds, in theory, like a good idea. Student preferences may be helpful in designing future instruction, but more than that, I believe it would be interesting to know whether students experienced their learning in the ways we designed it.

That said, not all student feedback mechanisms guarantee a change in instructional quality. But that doesn’t mean the act of collecting feedback is spoiled; feedback must be reviewed, hopefully understood, and met with a plan to adjust and adapt in response.

An Example

In Rankin’s model of cubic learning, he divides the components of learning into three “dimensions,” and in this example he examines “content” in depth. I think this dimension is easy to reflect upon, especially in terms of the range from “delivered” to “created.”

Compare:

  • directed. From a piece of literature, I have already identified the main themes. Through a worksheet, I ask you to qualify these themes with examples. After turning it in, or through a class discussion, you discover if your examples are correct and correlate to the themes.
  • created. You’re asked to analyze a piece of literature and compare it to another work by another author from the same period. The paper should cite similarities and differences, but no specific clues are provided.

In full disclosure, these are examples I created independent of the author, so I’m applying my own interpretation to these terms. In the first, I’m not just telling you the answers and asking you to write them down; I’m asking you to do “a little work” to see if you understand what themes are in literature. It’s more or less a comprehension check. It’s directed because I already focused you in on what you were responsible for knowing, or “filling in.” My direction is a type of scaffold.

To move beyond, toward “discovered,” I might say, “there are three major themes in this book. Identify the three themes and provide examples that support your answers.” It’s still black and white information, but now you’re responsible for digging it all up. It’s evaluation, from Bloom’s taxonomy.

In my “created” example, your essay (paper, thesis, etc.) is a created work. Analysis is a higher cognitive skill, but you are practicing a skill; the result of that skill is the analysis itself, in the form of a five- or maybe seven-page essay. We might expect a student to work through the stages that precede “created,” with multiple rounds of practice, before they’d be ready for “creation.” (For instance, in sixth grade the concept of themes might be delivered; in seventh grade, students are directed; and in freshman English, you write the paper I described.)

Here’s my question: Are students experiencing the level(s) of depth in their learning that you in fact designed? Isn’t the point of the Dick and Carey “instructional formative assessment” to assess the instruction? Generating an intended behavior isn’t a bad check, but are students comfortable at the level we’ve chosen? And what should it mean if they are not? An adjustment should be made, I’d hope we’d surmise. Good feedback might be “this was a boring exercise for me, and I wasn’t challenged.” Or “I struggled with this. I was only successful because you reframed it for me and provided a scaffold.” (Yes, I know I’m betting big that students would talk to us like that, but that’s not my point. My point is the feedback can help us improve instruction, individualize it in some cases, and maybe even provide opportunities to make it personal.)

We Have to Ask

Asking for feedback on your teaching at the end of the year is too late. Many teachers already know what they think their students will say. We ask very general questions, and the highlights and rough spots are likely smoothed over in such “end of course” questionnaires.

Instead, what if we had three types of surveys?

  1. A pre-learning survey.
  2. A mid-learning feedback exchange.
  3. A post-learning survey.

Consider the third. (And by survey, I don’t mean these all have to be a Google Form; we can survey students by asking them just a few questions.) Things I think we should know include:

  • Did I understand what I was learning and why?
  • Why did I, or didn’t I, enjoy the learning experience?
  • Which level, say, of “content” acquisition was I ready for, and did the one I experienced help me toward mastery of the skills and/or content?
  • How successful was I in using feedback to grow my skills and understanding?

Interesting questions. And given more time, I could come up with more.

All the Feedback Loops

  1. Pre-assessment. Looks at students’ connection to, familiarity with, and prior experience with the “content.”
  2. Mid-instructional assessment. A few “check-in” questions to empower the teacher to modify the instruction based upon student needs.
  3. Post-assessment of instruction. Questions that gauge student satisfaction and students’ perception of their own learning along each “pathway” of deeper learning. Ideally, this develops metacognitive skills that support lifelong learning.

If a teacher could use feedback loops like this often, there’d be no rationale for an “end of course” survey. Students would know their teacher cares, at least in part because the teacher is responsive to their learning needs and designs instruction that addresses students’ prior knowledge and readiness for depth.

Depth is Good, but When You’re Ready

While I fully believe in the attempt to provide our students deeper learning experiences—meaning, ones that are relevant, have utility, and can be applied to solving real problems—I also recognize that instruction for deeper learning sometimes takes more time, requires higher cognitive engagement by the student, and in the hands of an inexperienced teacher, might just have more risk in terms of students achieving the intended standards of success.

Dick and Carey start with “assessing needs and identifying goals,” then a parallel step of “analyzing learners” and “conducting instructional analysis.” In short, I need to know the learners: what they already know, what they need, and what type(s) of support they will require; I need to establish learning goals and then, from those, provide supporting experiences to achieve those goals. The two processes go hand in hand. It isn’t new. But I wonder: how often do we actually ask students what they know, and to what degree?

If we want more of our students to experience deeper learning (instead of frustration), we have to ask questions. And to analyze our instruction, we have to ask questions too.

Thanks for entertaining a read about my thoughts on deeper learning. I believe deeper learning helps maximize the potential of learners, but only when it is instructionally appropriate. And that’s why “measuring” the depth of learning isn’t a comment on the quality of a teacher’s performance; rather, it’s a measure of how well we are meeting students where they are and prioritizing their needs.

Filed Under: Resource of Interest Tagged With: assessment, deeper, feedback, learning, reflection

Visualizing Depth of Learning

by John Hendron · Oct 25, 2017

I firmly believe in a very qualitative approach toward describing deeper learning. We can see deeper learning through student interviews and reflections, observing the work being done by students in and outside classrooms, and through the products students produce to demonstrate or apply their learning. And, of course, a project-based approach toward learning often includes a product.

Earlier this summer, I wanted to help demystify deeper learning through a model that looked at different facets (ingredients, themes, components) that define a deeper approach toward learning. I called these Pathways Toward Deeper Learning and have lately wanted to think about how to visualize and quantify these pathways when we see learning in action.

[Image: splotches of paint]

I wanted to show depth in a visual, like going down into a cavern, or going towards the core of the Earth, or… maybe drilling down into a “mountain of knowledge.” Figuring out how to take data and map it to something like that was going to take a lot of time for what I finally admitted was a cute gimmick.

Then Bill Rankin showed me a developing model he was creating, called cubic learning. He’d already begun to think about how to look at learning through some visual means. We liked how his three planes of learning fit into how I saw deeper learning.

Then the suggestion was made (by my peer Sean Campbell) to consider a radar plot. I’d thought about this too, but wasn’t sure two dimensions were adequate for showing depth. But if we stop holding this in our minds as a three-dimensional concept (a play on words, really), we can think about area instead. That’s when I thought about the paint or ink splotches you might see on a wall (or in an art classroom). Then the question becomes: how big is your paint blob?

[Image: radar chart with six pathways]

If I then take the scores (using a four-point scale) and plot them, I get the outline of my paint blob.
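For those who want to experiment, here is a minimal sketch of that plot in Python with matplotlib. To be clear, this isn’t part of the observation tool itself; the pathway labels and sample ratings below are hypothetical stand-ins on the four-point scale.

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical pathway labels and ratings on the four-point scale.
    pathways = ["Content", "DOK", "Context", "Skills", "Community", "Technology"]
    scores = [3, 2, 3, 4, 2, 3]

    # Spread the six pathways evenly around the circle, then repeat the
    # first point so the outline closes into a blob.
    angles = np.linspace(0, 2 * np.pi, len(pathways), endpoint=False).tolist()
    angles += angles[:1]
    values = scores + scores[:1]

    fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
    ax.plot(angles, values)
    ax.fill(angles, values, alpha=0.3)  # the filled area is the "paint blob"
    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(pathways)
    ax.set_ylim(0, 4)
    plt.show()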

There are a few goals with this. We typically want our pathways to match up, so that we see, say, a level 3 in each area across the board. But I also want to look at a lesson in a numerical sense, so beyond the aid of the radar plot, I want an overall metric for the “depth” of the observed learning.

Showing that by itself is nonsensical unless you see it in relation to a larger scale. But to take multiple learning experiences and compare them by depth, this index value might be interesting.

To compute the index value, I grouped the content and Depth of Knowledge pathways together as an overall “Content” value. This gets multiplied by the sum of the “context” score and the average of the three twenty-first century skills ratings across Mishra’s three groupings. And finally, we multiply that by the sum of the community score and the square root of the technology score. The resulting range of results goes from 4 to 448 (see more on the formula, below). If we really wanted to visualize that, we could consider this index score, say, an area factor for a circle (bubble) plot. More interesting, say, to a teacher, might be a series of lessons, all computed this way, with the values presented as a line chart or sparkline.

I will continue to tinker with the ideas behind how to capture depth of learning and how to communicate it. I believe qualitative data is also important, and most important for teachers, I believe, is to know where their design of learning fits into what they feel students need. Generating these scores doesn’t speak to good instruction versus bad; it’s a way to conceptualize depth in learning so we can compare one activity, lesson, or unit to another.

Instructional Depth Index

Among the pathways, I see pairings. And I wanted these pairings weighted.

To compute “twenty-first century skills,” the evaluator rates, on a scale of 1 (no skills present) to 4 (mastery level), how students are exhibiting foundational skills, meta skills, and humanistic skills. Then we average these three ratings to generate a “skills” score.

I balance content and depth of knowledge. The maximum score is an 8.

I unbalance twenty-first century skills with context, generating a top score of 7.

Then, in combining community and tools (think: resources), I believe the wetware outweighs the digital code and hardware by an entire power, so I used a square root. This makes the top tech score a 2 and the top social/community score a 4.

The equation looks like this:

IDS = (content + DOK) * (context + average of skills) * (community + sqrt(technology use))
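As a quick check of the arithmetic, here is the same equation as a small Python function. This is a sketch under my reading above: each pathway rating (including each of Mishra’s three skills groupings) sits on the 1–4 scale, and the function and parameter names are my own, not part of the model.

    from math import sqrt

    def depth_index(content, dok, context, foundational, meta, humanistic,
                    community, technology):
        # Average Mishra's three groupings into a single "skills" score.
        skills = (foundational + meta + humanistic) / 3
        # (content + DOK) * (context + skills) * (community + sqrt(technology))
        return (content + dok) * (context + skills) * (community + sqrt(technology))

    # A hypothetical lesson rated 3 on every pathway:
    print(depth_index(3, 3, 3, 3, 3, 3, 3, 3))  # 6 * 6 * (3 + 1.73...) ≈ 170.4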

This is admittedly an evolution in my thinking, and I’m indebted to Dr. Rankin for ways we can think about modeling or seeing learning across different facets.

Filed Under: Resource of Interest Tagged With: deeper, g21, learning, pathways

Observing Deeper Learning

by John Hendron · Oct 25, 2017

I’d like a tool to help us see and measure what we mean by Deeper Learning. There are a lot of definitions for deeper learning, and there are a lot of ways and models for measuring components of teaching. Learning is more difficult, but there are models there too.

I’m not sure whether this is great or not, but our leadership team this summer was inspired by a presentation by Dr. Rankin (at our strategic innovation symposium) where he shared a model for learning he terms “cubic learning.” What was interesting for me was that we never really said much about the “tools.” It’s not a “how are you using your iPad or laptop” model; it’s a look at how we learn in a formalized way.

Bill’s cube helped me see learning as facets. I chose the word “pathways” because I see teachers emphasizing some things over others. It may be deliberate, or it may be a strength. I tried to bring all these ideas into something that could tell us “how deep is it?”

I hope to start sharing this soon within Goochland and try to refine it more. In the interest of open commentary, I’d invite you to take a look.

  • Observation Form
  • Understanding the Model

For further reading

Aungst, G. (2014). Using Webb’s Depth of Knowledge to Increase Rigor. Edutopia. Accessed from https://www.edutopia.org/blog/webbs-depth-knowledge-increase-rigor-gerald-aungst

Mishra, P., & Mehta, R. (2017). What we educators get wrong about 21st-century learning: Results of a survey. Journal of Digital Learning in Teacher Education, 33(1), 6-19. Accessed from http://www.punyamishra.com/wp-content/uploads/2016/12/Mishra-Mehta-21stCenturyMyths-2016.pdf

Puentedura, R. (n.d.). The SAMR Model: Background and Exemplars. Accessed from http://www.hippasus.com/rrpweblog/archives/2012/08/23/SAMR_BackgroundExemplars.pdf

Rankin, W. (2016, December 9). “Formal” learning. Unfold Learning [weblog]. Accessed from https://unfoldlearning.net/2016/12/09/formal-learning/

William and Flora Hewlett Foundation. (2017, May). Decoding deeper learning in the classroom. Accessed from https://www.hewlett.org/wp-content/uploads/2017/06/DL-guide.pdf

Filed Under: Resource of Interest Tagged With: cubic, deeper, g21, learning, pathways

VSTE 2015 Presentation

by John Hendron · Dec 2, 2015

I will be presenting at the 30th Annual VSTE Conference in Roanoke, Virginia. My presentation title is On the Road to Deeper Learning, and it will focus upon the vision behind our 1:1 program.

You may watch the recorded version of the presentation here.

Filed Under: Resource of Interest Tagged With: 1:1, deeper, learning, presentation, VSTE

Virginia ASCD Presentation – December 2015

by John Hendron · Dec 2, 2015

Deeper Learning through Projects, Personalization, and Play

I will be presenting at 12:45 PM on Thursday, December 3 in Williamsburg. The theme of this presentation came from the article that Dr. Gretz and I had accepted for publication in Virginia Educational Leadership this past spring.

Watch the presentation here.

Filed Under: Resource of Interest Tagged With: 1:1, deeper, ipad, learning, presentation

Homemade

by John Hendron · Mar 14, 2015

I am preparing to move soon and am going through a lot of cruft that I’ve held onto for a number of years. I’m reading a book, actually, on how to let go of some of this stuff, and, not surprisingly to those who know me, I’m taking the time to “digitize” some of the stuff I can’t stand to part with. This is one such example.

[Image: scan of the IMS cross-stitch coaster]

In 1986, I attended Ingomar Middle School in the northern suburbs of Pittsburgh, PA. I have to say, of all my years of going to school, this was the best for me. I know it was a combination of caring, awesome teachers, but also knowing a number of kids I could relate to. I did well in the 6th grade, and I hated to leave the next year when my family moved to the Cleveland, OH, area.

The object I scanned above is a coaster of some sort. We had to take a home economics class, and we learned how to cross-stitch. This was something I created, and I have not been able to throw it away since middle school. In part, that’s because I have positive feelings about this school, as I have shared. But if it were, say, a concert program, or a report card, I’d look at it, then toss it. But this is different. This was something I put my mind to, my hands to; this is something I made. The object is homemade.

My sentimentality aside, I think it’s worth noting for the sake of this blog post the importance we place on objects we create. There’s a mental distinction, I think, between a worksheet we fill out and something bigger, say, like this coaster. The apron I made later in the 7th grade in Ohio wasn’t as good as this in terms of craftsmanship (although, I am sad to admit, I still own that too). But this is something I look back on as an object representing some personal success. I learned a new skill, I tried my hand at it, and wow, it had utility beyond, well, a worksheet.

I am not sure what the magic is between an object like this and, say, the worksheet. But in this case, whether it was the color, the yarn, the texture, or the perceived utility behind it, it mattered to me. I wonder what my teacher, Mrs. Conrad, would think of me keeping this for so long. What did she intend for her students to do with these when, say, they’d go to high school? College? Toss them away, no doubt.

I think it may be time to say goodbye to this part of my past, but not before I find the value in having kept it so long. As educators, I think we have a duty to give students the opportunity to create things that resonate with them and mean something, well, personal. It won’t always be a physical object, but those are the easiest to preserve through time. It illustrates for me, again, the nuance between personalized and individualized learning. Facts are remembered and forgotten, unused. Emotions remain with us forever, even if it requires holding or touching something from our past.

Filed Under: General News Tagged With: learning

New Research Explains Teen Behaviors and Brains

by John Hendron · Jan 28, 2015

Driving back to the office from GES today, I caught a bit of a Fresh Air episode on WCVE radio. The full story, with host Terry Gross, is available for you to listen to here.

I’m always fascinated to learn more about how science, like cognitive science, supports or refutes hunches and practices about learning and teaching. It’s always refreshing to know that a successful instructional practice can be supported through research in neuroscience.

This story reinforced for us that despite the size of our high school students, their brain development and capability are different from what we see in adults. They have some advantages and disadvantages. Especially interesting was the part about new ways of educating medical students to be self-learners, which supports my preference for supporting inquiry in the classroom.

Filed Under: General News Tagged With: learning, NPR, radio


