Writing tests with laptops/tablets/phones...
After our first unit of BYOD, proficiency-based, independent learning, the students in my class wrote their first test on Friday and today. Here are the results:
That's quite the distribution. Upon first reflection, I'm not sure it's much different from any math class I've taught previously.
If you're a stat-head like me, then read on...
Out of the 26 students who wrote the test:
- 19 (73% of class) passed
- 10 (38% of class) received level 3+ (measure of success in Ontario standardized tests; equivalent to 70%+)
- Average grade: 62%
- Median grade: 62%
- Mode: 60% and 100% (2 students each)
- Standard Deviation: 25.4%
Some random thoughts:
The fact that the mean and median are identical tells us that, on the whole, the data was spread evenly above and below the mean (i.e., there's no single piece of data really dragging the central tendency up or down). The large standard deviation also reflects the wide spread in results.
I'm not sure how to interpret the success rates, though. I would expect to see a measurable proportion of the students achieving high marks, since some students came "down" to this level of math from the academic course last year. Similarly, I would expect to see a measurable proportion of the students achieving low marks, since they moved "up" from the applied course last year. We see both of these trends in the data.
I'm elated 10 of my students earned 70%+ on the test, but my attention turns to the other 16 students whom I would like to see improve. In their self-reflections, many of the students who struggled felt there was "too much" on the test (as in, too many learning goals — all 8 were covered) and it was hard for them to remember everything (see my post: Checkpoint). Others just never got to the later learning goals, and instead of trying to learn them on their own, chose to prepare only the first 5 or 6 concepts for the test.
The modal group of 51-60% bothers me the most. To me, this indicates students who get most of what we do, but either through a lack of practice, lack of review, or not making it to the end of the learning goals, do not excel on cumulative evaluations. How can I get this group engaged?
For the next unit test, I would like to raise the test pass rate from 73% to 85%, and raise the provincial standard success rate from 38% to 50%. I have a couple of strategies in mind (see, again, my post: Checkpoint), so we'll see how it goes!
I would also like to start tracking (ack! more tracking!) which device each student uses most, and see whether that correlates with their mark. Does it matter if a student prefers a laptop or a smartphone? Whether it is their own device or a school-owned device?
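A first pass at that tracking could be as simple as grouping marks by device and comparing averages per group. The records below are hypothetical, invented only to show the shape of the idea; nothing here is real student data.

```python
from collections import defaultdict
import statistics

# Hypothetical (device, mark) records -- illustrative only
records = [
    ("laptop", 85), ("laptop", 72), ("laptop", 60),
    ("tablet", 78), ("tablet", 55),
    ("phone", 90), ("phone", 48), ("phone", 62),
]

# Group marks by device, then compare the group sizes and averages
by_device = defaultdict(list)
for device, mark in records:
    by_device[device].append(mark)

for device, marks in sorted(by_device.items()):
    print(f"{device}: n={len(marks)}, mean={statistics.mean(marks):.1f}%")
```

With a class of 30, the per-device groups would be small, so any difference in averages would be suggestive at best rather than a real correlation claim.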
Afterthought - Two students, who joined the class only a few days before the test, wrote the test as a take-home assignment. Since the test, two more students have joined my class, bringing my class total to 30. I'll still try for the same percentages, though... the more, the merrier!