Mistakes and Feedforward

Recently, I’ve come across two interesting perspectives on making mistakes.

First, the TED Radio Hour has a great episode about making mistakes. In addition to being a captivating human interest story, the broadcast has a lot to say about learning. Views on mistakes come from a physician (“most of the greatest successes in medicine come from failures”), a noted psychology researcher (“if failure is not an option, then we just have a bunch of scared people hanging around loitering on the outside of the arena”), a jazz musician (“a mistake is an opportunity that was missed”), and a corporate coach (“a mistake offers the greatest amount of insight and the largest room for improvement”). This one is really worth a listen.

Second, I found this video in the course of researching something else, and the snippet below caught my attention.

Although the golf example may not be applicable to you, I suspect that many of us are guilty of folding up too quickly in the face of failure. It’s easy to shut down and turn away. When you know your efforts have gone awry, what do you do? What should you do? Letting the scenario play out with a dispassionate eye, observing what happens, and then reflecting on events and devising a new plan are all challenging skills on their own, never mind in the face of your own mistakes.

How can we help students turn their mistakes into valuable learning opportunities? Feedback is key, of course. I’m assuming that timely and individualized feedback is already part of your teaching practice. But what about the content of this feedback? When instructors give feedback, many naturally focus on the assignment in question. While valuable, reviewing the work the student has done is retrospective feedback. Students may be at a loss about how to translate your analysis into actionable steps for the next assignment. Using your feedback becomes more complicated if the next assignment has a different topic or format. What about offering prospective feedback? That is, feedback with specific attention to the work the student will do in the future? I’d like to share with you the idea of feedforward:

Feedforward is another interesting way to think about the feedback offered students. This more future-oriented feedback responds to what the student did, but in light of what needs to be done on the next assignment. Rather than isolated comments, it distills the feedback into three or four specific suggestions that target what the student should work on to improve the next assignment. . . .

Perhaps students would make better use of our feedback if they used that feedback to develop an action plan for the next assignment. “Based on the teacher’s feedback and my own assessment of this work, here are the three things I plan to improve in the next assignment.” Maybe students don’t get the grade for the first assignment until the action plan has been submitted. And maybe that action plan then gets submitted along with the next assignment.

Turning past performances into future successes is tricky. Recognizing mistakes and devising a plan for improving upon them requires both meta-cognitive skills and content-specific knowledge. This is the real work of learning and teaching – and where feedback and coaching can play such a crucial role.

How have you helped students to constructively use your feedback? Do you have any strategies that help students draw lessons from their own (or their classmates’) mistakes?

Happy New Semester!

For me, the start of a new school year is a much more significant marker of time than a new calendar year. I like a good New Year’s party, but, in the spirit of getting down to business, let’s talk about resolutions for the new semester. (I’m a super-fun party guest, I promise!)

Do you make resolutions for the start of the school year? Do you have some this year? We’d love to hear about them in the comments!

Or are you open to the idea of mixing things up, trying something new, collecting new data, etc., but aren’t quite sure where to begin?


Bloom’s Taxonomy in the Smartphone and Tablet Era

Bloom’s Taxonomy, the prism through which many of us have been taught to evaluate learning outcomes, is changing. The general trend is to focus more on applications of knowledge, rather than knowledge for its own sake. In the updated version, the act of remembering replaces “knowledge” at the base of the triangle. Progressively more demanding higher-order thinking skills are enumerated moving upward toward the peak of the triangle, culminating with learners creating knowledge. The act of synthesis, absent in the revised taxonomy, likely has always found its way into the work of analysis, evaluation, and creation.

Of course, Bloom’s taxonomy is not the only way to graphically represent the cognitive processes associated with learning. Other revisions have represented the taxonomy as a series of interlocking gears, a blooming rose (hah!), and a feathered bird – just to name a few. Notably, Rex Heer at Iowa State’s Center for Excellence in Teaching has created a nice Flash model of the way that the cognitive processes mesh with the different levels of knowledge that learners will need to complete a task. It’s definitely worth clicking on the link in the previous sentence and then playing a little with the interactive graphic.

Recently, I’ve run across a few revisions of Bloom’s taxonomy focusing just on the apps available for tablets and smartphones. Check out the Padagogy Wheel:

I love the multitude of apps identified above – what better way to find something new that just might work for you? However, if you’re looking for something a little cleaner, here’s a nice distillation:

Of course, not all the apps are appropriate for your content, level, or course goals. You might use the above information to guide students when helping them select research, presentation, or study tools. If you’re currently using – or contemplating using – one of the apps identified above, this should give you a sense of the cognitive range of the app. Last, if you’re not yet sure how smartphones and tablets fit into the classroom and individual learning processes, hopefully these graphics have provided some food for thought.

If you’re looking for still more apps, the Koehler Center has a list of both discipline-specific and general study, writing, and document management apps. In addition, some of the web 2.0 tools identified on our website also have their own apps.

Apps on tablets and smartphones are more than just cool tools. In particular, some apps make it very easy for students to transition from sophisticated curators of knowledge to innovative creators of knowledge – that is, to ascend to the highest level in the revised Bloom’s taxonomy.

Do you have an app you love to use with your students? What impact does this app have on student learning? Alternately, is there an app that would benefit your students’ learning that you’d love to create?

Knowledge Acquisition

In thinking about the ways in which we can ask our students to do more with course content, I recently ran across the graphic below. The image illustrates the different ways in which knowledge can be acquired and subsequently processed (PKM in this context stands for personal knowledge management).

Flowchart graphic showing three main routes of knowledge acquisition: seek, sense, share.

Image credit: http://www.jarche.com/2013/05/sense-making-in-practice/
Based on content from the book You Can Do Anything by James Mangan.

My favorite method above is “walk around it.” While this may work in an experimental setting or with physical artifacts, this is a trickier approach for abstract concepts. I like to think “walk around it” in this context might mean something like “How can I think about this theory or problem differently?” or “Coming at this issue from another perspective, I find that. . .”

Seeing the options for knowledge acquisition laid out like this illustrates the wide variety of learning experiences. Student interaction with course content is richer than scribbling notes during lecture and then writing a paper or taking an exam. Of course, well-crafted writing prompts and exam questions may ask students to do some of the things in the graphic above. However, if the first time all students are being asked to actively draw upon their course knowledge is the paper or the exam, well, that may have predictable results for some of them.

The trick is to incorporate active learning experiences that reach all students long before the major paper, exams, or other grading milestones. In the abstract, we all know this: student engagement and success in the course are both likely to be higher if all students are asked to evaluate and apply course concepts along the way. In the trenches of the day-to-day class sessions, though, it’s easy to lose sight of this – especially in the context of the amount of material that has to be covered throughout the term.

In the coming weeks and months, we’ll be exploring active learning opportunities and showcasing some ways to mix up your content presentation, boost student engagement, and help you and your students get the most from peer- and small-group learning.

Evaluating Individual Contributions to Group Assignments

Instructional designer Debbie Morrison has an interesting piece discussing different strategies for how your students might evaluate one another upon the conclusion of a group project. While the article focuses on peer evaluation strategies for online learning, everything in the discussion is equally applicable to face-to-face teaching.

The author concludes that the existence of a peer evaluation is rarely a motivating factor for quality participation. However, peer evaluations do serve a purpose in providing an opportunity for group members to express their dissatisfaction with other students in the group. The piece then addresses how instructors might handle the negative comments that students might make about other group members.

Her preferred strategy for assessing individual contributions to group projects? Self-evaluations:

I believe the learner will benefit far more by completing a self evaluation (that is well crafted to include focused self reflection questions) that forces him or her, to examine how he or she contributed [or did not] to the group process. The tool also encourages the student to consider actions that he or she demonstrated to support the team and to estimate what percentage of the work he or she contributed to the project.  ‘Forcing’ the individual student to assess their own behaviour, as opposed to others is more constructive – it supports the aim of developing collaboration skills, along with the knowledge component.

What do you think? Did you use peer- or self-evaluations for group assignments this semester? Were you happy with the feedback your students provided?

End-of-Course Evaluations

If you’re TCU faculty, let this serve as your umpteenth notice to remind your students to complete their eSPOTs.

Once the students complete the eSPOTs (or whatever version of course / teaching evaluation your campus uses), then what? Well, there’s the inevitable waiting until you get the results, of course. When the results do finally come your way, this piece about making sense of student comments may be helpful. In particular, it’s useful to think about how students define particular criteria. After all, for student feedback to be part of a meaningful process of pedagogical improvement, some sense of how students might have understood the survey questions is worth considering.

If you’d like to gather more robust information from your students, you might consider an additional evaluative exercise. ProfHacker suggests that you have your current students write a letter to your future students. The comments on that blog post are also valuable, including the discussion about sharing the findings with your future students. Alternately, you could craft an exercise that provided feedback about your teaching and helped your students gain awareness of their learning habits. Not sure how to do this? The link provides some sample questions to get you started.

Anonymity can be tricky to maintain with these additional exercises. To encourage participation, you might offer a small amount of extra credit (or credit toward a specific assignment) if a predetermined portion of the class completes the exercise. In an online class, you could use an anonymous online form. We’ve discussed some of the options for online anonymous teaching surveys in an earlier post. In a face-to-face class, you could also use the online option or you could have your students type responses that they turn in to you – but stress that they are to leave all identifying information off the papers.

Getting the feedback is great, and making reasonable changes is part of the ongoing craft. But what makes professors seem wonderfully responsive? When professors close the loop and report back to students how they are using student feedback. Ideally, you’ve already done this with mid-semester evaluations in your course. If not, all is not lost. Of course, the students making the end-of-course suggestions won’t usually benefit from changes you may make in your future courses. However, for your future students, the simple act of indicating that you’ve changed a reading, activity, assignment, or policy in response to student feedback communicates that you are approachable and invested in student learning.

Do you have other course evaluation tips or practices? Please share in the comments!


A Closer Look at Multiple Choice Tests

The new semester is officially underway–students are back, campus is bustling, and classrooms are full. Of course, faculty have been preparing for classes for quite some time now–so it feels like we’ve been “back” for much longer than a few days–and the educational corner of the Internet has been full of assignment and classroom management suggestions.

The folks over at ProfHacker always have great ideas, but this guest post by Jonathan Sterne, an Associate Professor in the Department of Art History and Communication Studies at McGill University, contains some strategies that may be of particular interest to TCU faculty teaching large sections and/or using iClickers.

Sterne offers a solid strategy for developing multiple choice exams, and while he pitches the quizzes as an alternative to using clickers in large sections, I think the two methods could be easily combined. One could adapt Sterne’s test-writing methods to generate clicker polling activities for students, including the “semi-open book” technique.

What are your thoughts? If you use clickers on TCU’s campus, have you ever tried a method like the one Sterne describes? If not, what are some strategies you’ve found particularly successful?