These are things all good teachers should be doing (as was drilled into my head at teachers' college), but it often falls by the wayside in the busyness of just keeping up with day-to-day teaching tasks. In fact, as much as I benefited from the extra tracking in my math class, I couldn't manage the same level of tracking in my other classes. But I felt I had to in the BYOD course.
The reason is simple:
I had a lot more riding on the success of this BYOD class. If I was going to go to the trouble of changing the entire course, and subjecting my students to learning in a way completely unlike anything they had experienced before, I had better be able to show that it was worth it.
One could argue that improved test scores indicate success - the better the understanding of the material, the higher students would score on tests. Fair enough. I know tests don't tell the whole story, but test results are easy enough to obtain, so let's look at that.
Here is a graph of how my students did on our four unit tests across the semester. Each line is one student; the tests go in chronological order, and do not necessarily increase in difficulty, as each test covered different topics. To avoid identifying any one student, those who joined the course after the first test or dropped it before the fourth were omitted from the data.
*What a mess!*
What I would have LOVED to see is a general upward trend from test 1 to test 4, indicating growing comfort with independent learning, and that students had, over time, found their rhythm in the course. An increase in confidence should have led to better performance on subsequent tests. Instead, I got this scattered mess (and what the heck happened on test 3??).
What else do these test results tell us? Nothing surprising, and nothing unlike what you'd see in any other math course:
- Many students found the content to be challenging the whole way through the course.
- Those who tended to do well on tests at the beginning of the course continued to do well on tests throughout.
- Those who tended to score below 50% at the beginning continued to get low results on tests throughout.
Initially, I was devastated.
As a whole, the class did not improve in their ability to succeed on tests. Was my BYOD experiment a failure? Did I do these students a huge disservice by switching to independent and proficiency-based learning? What did I do wrong?
But then I got to thinking: this doesn't mean that my students didn't get better at math over the semester (they did), or that they didn't improve their inquiry skills (many of them did), or that they were less willing to take risks (indeed, I found the opposite). It really just speaks to my students' test-taking skills, which did not improve.
BYOD is not meant to make students better test-takers.
It is meant to make students better collaborators, better problem-solvers, and better learners. My students became more comfortable with investigative tasks and communicating their discoveries. They became more resilient, figured out how they best learn, and how to best demonstrate what they learned. A summative test is not always the best demonstration (and certainly not the one my students would choose, if given the choice).
If I want my students to do better on tests, I need to teach them how to do better on tests. If I want my students to be better life-long learners and leaders in their fields, I need to teach them those skills. Test preparation is but a part of that.
Test-taking is important - many skills are still evaluated this way as students make their way into college and university - but it is definitely not the whole picture. As I prepare my new BYOD math class for its first unit test later this week, I'll be keeping this in mind.