Effective Feedback

Written by lisawang | Published 2017/10/24

Top 3 learnings this week:

  1. Be careful when striking deals. If you say “Yes, fine, if you finish the handout I will get you some tape to fix your shoelace” and then you forget to get the tape, prepare for some outrage.
  2. Middle schoolers are energetic. And if you don’t provide ways to channel that energy, you will end up with at least one overturned desk.
  3. Scarier than vindictive or overactive 8th graders is your own unconscious bias. Diagnosing a student’s lack of engagement purely based on the information at hand is difficult; doing so based on biases is much easier.

In my last post, I focused on teacher training and development. In general, I assume that getting feedback is a significant part of this. But I realized that there's an email gathering dust in my inbox, filled with feedback from a class observation. While I did read the email, I didn't act on much of the feedback, which is weird considering this whole Dora-the-Explorer adventure of mine to try to understand how to improve learning. It comes down to one main reason: the feedback is too generic to feel useful or actionable, e.g. advising me to employ the technique “Do Not Talk Over” without addressing the very real likelihood that if I stop speaking until I have the class's full attention, I may never speak again. Feedback needs to be more personalized than this.

And in an actual teacher-school setting, there's inherent difficulty in having a feedback conversation at all. There's a power dynamic in any manager-employee discussion that hinders effective communication. Teachers may feel too threatened to push back on feedback or to volunteer novel learning methods. Administrators may feel threatened by challenges to their authority and to hard-earned alignment across teachers.

There’s a whole host of other issues, as described in a study by the Carnegie Foundation:

  • Feedback is given infrequently and based on just one sliver of the day. This puts undue pressure on the teacher to “perform” and is not representative. (This OECD study highlights just how little feedback is given across countries.)
  • The expectations around the purpose of feedback are unclear. Is the purpose to judge me? Is it really to help me become a better teacher? If I disagree with your assessment, what are the consequences?
  • Notably, few teachers in the study could recall a single piece of concrete feedback that helped them improve.

Some of these are actually relatively easy to solve. A school in D.C. committed to regular (vs. infrequent) feedback and documented the results here. The methods used directly address some of the issues identified above. In-class observations were doubled from 20 to 40 times a year. Both the observer and the teacher went into the observations with jointly developed goals and areas of growth. Feedback was structured with open questions to consider and manageable, concrete suggestions to implement in the next class. By the end of the year, the school attributed a jump in proficiency on standardized tests from 28% to 60% to these changes alone.

And if you really want to turbocharge feedback, we can listen to Bill Gates and put cameras into every classroom. (I mean, to be clear, I'm not exactly a fan of this; just because it works in China doesn't mean it will work for everyone.)

But overall, I think we can agree that if you’re going to try and use feedback for professional development, it’s probably better to do so in a supportive and consistent way. The much more contentious issue is how to provide useful feedback. In a 2009 study across 15,000 teachers in 12 districts in 4 states:

  • Half the districts used a binary rating system for evaluating teachers: satisfactory or unsatisfactory. I’m fairly certain my grades in kindergarten had more nuance than this.
  • 3 out of 4 teachers went through evaluations but did not receive specific feedback on areas to improve in. More concerning, 43% of teachers in their first four years of teaching did not receive specific feedback on areas for improvement.
  • Of those who did have areas of development identified, less than half received actionable next steps to improve. Imagine feedback like “you need to exert more authority in the classroom” without any steps to do so.

Now, of course, it's possible that administrators chose to have developmental discussions off the record in order to maintain a supportive culture. Yet almost half of the teachers surveyed did not have any kind of conversation with their administrator about development or feedback. So it's not really a surprise that only around 40% of teachers believed that feedback helped them improve.

The really interesting part of all this is that innovating on “performance reviews” in companies has actually gotten quite a bit of action in the tech world. Try searching on TechCrunch and you'll see what I mean. Zugata, Reflektive, and Impraise are all platforms that try to solve the very same problems of effective performance feedback. Larger companies constantly iterate on their internal performance evaluation systems, improving them and making them less hated. Yet performance reviews in education seem far behind the corporate baseline: most reviews are done on paper with little year-over-year record, there are few mechanisms for real-time feedback, and the nuance in a review is exceedingly low. So why not apply the tools used in the corporate world to schools?

Well, there are a few reasons why there are a million apps for student-teacher communication, but I could only find one dedicated to teacher evaluations and feedback. There are three main players when it comes to evaluations: teachers, administrators, and state governments. It seems that improving student performance should be an easy goal to align on, but there's more nuance to this. Imagine a student who starts the year with a low math grade and a terrible attitude towards school, then ends the year with fewer behavior problems and a similarly low math grade.

A typical assessment is going to treat the progress of this student as equivalent to that of a student who ends the year with no change at all. But teachers know that growth is so much more than a number on a math test. So student progress is not so easily measurable, and I think that's why many systems evaluate teachers on a binary scale and over 90% of teachers receive “satisfactory” ratings. It's the easiest solution to keep all parties happy. Teachers can focus on teaching, administrators desperate to retain teachers don't risk a revolt, and state governments can fulfill the need to seem in control. This would all imply that evaluations are perfunctory most of the time, and the data on how evaluation results are actually used bears this out.

Only when it comes to suspensions or dismissals do evaluations become relevant, since dismissal requires evidence of poor teaching. For decisions like professional development, retention, and tenure, performance is not taken into account at all. This means evaluations carry a heavy negative association (even more so than in the corporate context). And since evaluations are so overwhelmingly positive right now, any change brings the possibility of teachers receiving lower ratings, which makes progress difficult.

In comparison, doing well in a performance evaluation in a corporate workplace leads to bonuses and promotions, and impact can be measured more easily. Still, I don't think the differences are so great that similar tools couldn't be extended with the right nuance. I mean, at the very least, could we get some basic digitization of feedback? And I don't mean filling out a Word document and saving it to a standard Google Drive or server folder. Perhaps there's more here that I'm not getting, so I'm hoping to gain some insight by chatting with actual administrators this week.
