Tuesday, November 19, 2013
Short- vs. Long-Term Learning
Time slipped away from me over the past 2 weeks. Between a science conference, a cultural symposium, report cards, the start of coaching, proposal deadlines, and the usual busy-ness that comes with teaching/marking/planning/conferencing, this blog (and, sadly, many of my personal extracurricular activities) has lately fallen by the wayside.
Which is too bad, because the results of that second unit test 2 weeks ago revealed some pretty interesting traits of my class.
Leading up to the test (and you can read the past few blog entries), I was seeing encouraging signs of progress. Students were working through the material better, experiencing more success, and managing their time more appropriately than they did in the first unit. I was really, really looking forward to the results of the test, hoping I’d be able to proudly announce: the system is working!
But alas, that was not to be. The test average decreased from 60% to 56%, and while I saw everyone move out of the 11%-30% range (hoorah!), there was NO increase in the number of students scoring over 70% (level 3-4 Ontario standard). In fact, I actually gained a student in the 1-10% range compared to the unit 1 test.
What is going on? Students had the advantage in this unit: they knew what to expect, knew how to navigate the material, had the benefit of learning from their struggles in unit 1, and yet there was very little noticeable improvement in their grades.
(As an aside, whether or not we should be using numerical grades to judge how much our students have learned is a completely different topic. I do so here because it is currently the way our education system works. I’ll leave its validity open for debate at another time.)
One-on-one conferencing with each student after the test shed some light on what was causing the stagnation. For the most part, those who did not do well on the test did not learn the material thoroughly the first time through, and did not spend quality time reviewing in the lead-up to the test. Both of these, to me, are huge flaws in the implementation of a BYOD course like this.
My students look at the unit and chunk out what they need to do to succeed in the quickest way possible: master the learning goal and PROVE IT on the exit slip, all in one fell swoop. For many, as soon as I complete the mini-lecture, they’re asking for the slip. “Go and practice a bit first,” I tell them. “Make sure you solidify what we just learned and make it stick!” Sometimes they do, most of the time they don’t.
This works well for demonstrating mastery immediately on the exit slip, but then when it comes to the test a few weeks later, they don’t take the time to review, assuming that since they “knew” it at one point in time, they’ll be fine for the test. But they rarely are.
What is the solution? I don’t want to go back to marking worksheet upon worksheet that proves the students are practicing. Ideally, I would love to have a peer teacher guarding the binder of exit slips, handing out slips only to those students who can show they’ve done some further practice. Or is this just a process the students have to figure out for themselves?