
John Hendron

Director of Innovation and Strategy


Formative Evaluation of… Instruction

by John Hendron · Oct 26, 2017

In thinking about instructional design, and more specifically a systems-derived approach such as the model created by Walter Dick, Lou Carey, and James Carey, I wanted to think about how we as educators improve what we do to support student learning from one iteration to the next.

[Image: “How to Eat a Lobster” placemat]

There’s something very personal about teaching students. Our assessment of how learning went, which of course reflects upon our teaching, is concerned with the individuals for whom we’ve designed the instruction. The same delivery, materials, and information might have very different learning outcomes with one group than with another. That’s a variation most educators understand. Each of us is unique, and every student brings a unique set of experiences with them to school. That’s not to say students are all totally different, but each one is unique, whether by a little or a lot. That’s because, I believe (as a constructivist), knowledge is something we create in our brains. It’s unique to us because it’s built upon our own experiences, from our own perspectives.

The so-called Dick and Carey model includes a loop to “design and conduct formative evaluation of instruction,” at least in the fifth edition of their book (2001, Addison-Wesley). (To be fair, several editions have come out since then.) So to be clear, that’s not “do formative assessment of student learning” but rather “do formative assessment of your instruction.”

In many ways, teachers conduct this type of analysis by checking to see what stuck with students and what did not. What didn’t they remember, or conceptualize? What couldn’t they work with in the context of a project? As much as we rely upon this to correct faulty instruction, it really isn’t an assessment of the instruction as much as it is an assessment of the intended behaviors. It’s easy to see that something didn’t work. But it takes more research to find out why.

That’s why I considered involving student feedback as part of the instruction cycle in projects. In theory, finding out what students thought about a learning experience sounds like a good idea. Student preferences may be helpful in designing future instruction, but more than that, I believe it would be interesting to know whether students experienced their learning in the ways we had designed it.

That said, not all student feedback mechanisms guarantee a change in instructional quality. That doesn’t mean the act of collecting feedback is worthless; but feedback must be reviewed, hopefully understood, and paired with a plan to adjust and adapt instruction in response.

An Example

In Rankin’s model of cubic learning, he divides the components of learning into three “dimensions,” and in this example, he examines “content” in depth. I think this dimension is easy to reflect upon, especially in terms of the progression from “delivered” to “created.”

Compare:

  • directed. From a piece of literature, I have already identified the main themes. Through a worksheet, I ask you to qualify these themes with examples. After turning it in, or through a class discussion, you discover if your examples are correct and correlate to the themes.
  • created. You’re asked to analyze a piece of literature and compare it to another work by another author from the same period. The paper should cite similarities and differences, but no specific clues are provided.

In full disclosure, these are examples I created independently of the author, so I’m applying my own interpretation to these terms. In the first, I’m not just telling you the answers and asking you to write them down; I’m asking you to do “a little work” to see if you understand what themes are in literature. It’s more or less a comprehension check. It’s directed because I have already focused you on what you were responsible for knowing, or “filling in.” My direction is a type of scaffold.

To move beyond, toward “discovered,” I might say, “there are three major themes in this book. Identify the three themes and provide examples that support your answers.” It’s still black and white information, but now you’re responsible for digging it all up. It’s evaluation, from Bloom’s taxonomy.

In my “created” example, your essay (paper, thesis, etc.) is a created work. Analysis is a higher-order cognitive skill, and here you are practicing it. The result of that skill is the analysis itself, in the form of a five- or maybe seven-page essay. We might expect a student to work through the stages that precede “created,” with multiple rounds of practice, before they’d be ready for “creation.” (For instance, in sixth grade the concept of themes might be delivered; in seventh grade, students are directed; and in freshman English, you write the paper I described.)

Here’s my question: Are students experiencing the level(s) of depth in their learning that you in fact designed? Isn’t the point of the Dick and Carey “instructional formative assessment” to assess the instruction? Generating an intended behavior isn’t a bad check, but are students comfortable at the level we’ve chosen? And what should it mean if they are not? An adjustment should be made, I’d hope we’d surmise. Good feedback might be “this was a boring exercise for me, and I wasn’t challenged.” Or “I struggled with this. I was only successful because you reframed it for me and provided a scaffold.” (Yes, I know I’m betting big that students would talk to us like that, but that’s not my point. My point is the feedback can help us improve instruction, individualize it in some cases, and maybe even provide opportunities to make it personal.)

We Have to Ask

Asking for feedback on your teaching at the end of the year is too late. Many teachers already know what they think their students will say. We ask very general questions, and both the highlights and the rough spots are likely smoothed over in such “end of course” questionnaires.

Instead, what if we had three types of surveys?

  1. A pre-learning survey.
  2. A mid-learning feedback exchange.
  3. A post-learning survey.

Consider the third. (And by survey, I don’t mean these all have to be a Google Form. We can survey students by asking them just a few questions.) Things I think we should know are:

  • Did I understand what I was learning and why?
  • Why did I, or why did I not, enjoy the learning experiences?
  • Which level of “content” acquisition, say, was I ready for, and did the one I experienced help me toward mastery of the skills and/or content?
  • How successful was I in using feedback to grow my skills and understanding?

Interesting questions. And given more time, I could come up with more.
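To make reviewing that feedback quick, here is a minimal sketch (my own, not a finished tool) of how post-learning responses, exported to a CSV from something like a Google Form’s response sheet, could be tallied. The column names and file name are invented for illustration.

```python
# A minimal sketch: tally post-learning survey responses exported to CSV.
# The column names and file name below are hypothetical, for illustration only.
import csv
from collections import Counter


def summarize(path: str) -> None:
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))

    # Which depth of "content" did students say they were ready for?
    readiness = Counter(row.get("ready_for_level", "").strip() for row in rows)

    print(f"{len(rows)} responses")
    for level, count in readiness.most_common():
        print(f"  ready for '{level}': {count}")

    # Open-ended comments about enjoyment, for the teacher to read directly.
    for row in rows:
        comment = row.get("why_enjoyed_or_not", "").strip()
        if comment:
            print("-", comment)


if __name__ == "__main__":
    summarize("post_learning_survey.csv")
```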

All the Feedback Loops

  1. Pre-assessment. Looks at students’ connection to, familiarity with, and prior experience with the “content.”
  2. Mid-instructional assessment. A few “check in” questions that empower the teacher to modify the instruction based upon student needs.
  3. Post-assessment of instruction. Questions that gauge student satisfaction and their perception of their own learning along each “pathway” of deeper learning. Hopefully, this also develops metacognitive skills that support lifelong learning.

If a teacher could use feedback loops like this often, there’d be no rationale for an “end of course” survey. Students would know their teacher cares, at least in part because the teacher is responsive to their learning needs and designs instruction that addresses their prior knowledge and readiness for depth.

Depth is Good, but When You’re Ready

While I fully believe in the attempt to provide our students deeper learning experiences—meaning, ones that are relevant, have utility, and can be applied to solving real problems—I also recognize that instruction for deeper learning sometimes takes more time, requires higher cognitive engagement by the student, and in the hands of an inexperienced teacher, might just have more risk in terms of students achieving the intended standards of success.

Dick and Carey start with “assessing needs and identifying goals,” then the parallel steps of “analyzing learners” and “conducting instructional analysis.” In short, I need to know the learners, what they already know, what they need, and what type(s) of support they will require; I need to establish learning goals and then, from those, provide supporting experiences to achieve those goals. The two processes go hand-in-hand. It isn’t new. But I wonder: how often do we actually ask students what they know, and to what degree?

If we want more of our students to experience deeper learning (instead of frustration), we have to ask questions. And to analyze our instruction, we have to ask questions too.

Thanks for entertaining a read about my thoughts on deeper learning… For I believe deeper learning helps maximize the potential of learners, but only when it is instructionally appropriate. And that’s why “measuring” the depth of learning isn’t a comment on the quality of a teacher’s performance, but rather a measure of how well we are meeting students where they are and prioritizing their needs.

Filed Under: Resource of Interest Tagged With: assessment, deeper, feedback, learning, reflection

Could you use Google Apps to give a quiz?

by John Hendron · Mar 3, 2009

I am not sure it’s the best use of Google Forms and Spreadsheets, but this teacher will show you how to set up an auto-correcting quiz using Google Apps (10-minute screencast).
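The screencast itself relies on Google Forms and Spreadsheet formulas; as a rough stand-in for the same idea, here is a small sketch that scores a CSV export of quiz responses against an answer key. The question labels, answers, and file name are invented for the example.

```python
# A rough stand-in for the auto-correcting quiz idea: score a CSV export of
# form responses against an answer key. Every name here is hypothetical.
import csv

ANSWER_KEY = {"Q1": "B", "Q2": "D", "Q3": "A"}


def grade(path: str) -> None:
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            score = sum(
                1
                for question, correct in ANSWER_KEY.items()
                if row.get(question, "").strip().upper() == correct
            )
            print(f"{row.get('Name', 'unknown')}: {score}/{len(ANSWER_KEY)}")


if __name__ == "__main__":
    grade("quiz_responses.csv")
```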

Filed Under: Resource of Interest Tagged With: assessment, google

