Tuesday, July 27, 2010

Learning Students' Names

I've said it before: the two most important words to anyone are their first and last names.

I've always struggled with learning and remembering names.  Although I've definitely improved this skill as a teacher, I still start each year with flash cards to make sure I fully connect student faces with names.

My first two years, I did this by taking pictures of students one by one as they filled out their "getting to know you" sheets on the first day.  I always gave students the option of opting out, but at my first real teaching job, the freshmen thought it was creepy that I wanted to take their pictures.  I ended up with fewer than half my students on flash cards and spent my first month getting names down.

So the next year I devised an even better strategy.  It takes more setup, but in less than a week I can learn every student's first and last name, and I know them before the first day of school.  Students know on the first day that I care enough to learn their names, and that if they try to pull anything on that first day I can call them out on it.1

Here's how you go about this if you have eSIS.  Open eSIS and make sure your classes have students in them.  Then click on the blue arrow named "Navigate" at the bottom right-hand corner of the screen.
This will bring up a small window.  Go to the "Reports" tab and double-click on "Demographics,"
then double-click on "Photo Report."
This will open a new window with a number of options.  To get a report with just the students in your class, click on "Specify Extract Criteria."
This will pull up another window.  Under the "Courses" tab, put in your name under "Teacher," then click "Run Extract."
Boom, you have a report with students' names and last year's school photos.  Some programs that read .pdf files will let you select the pictures and save them.  Acrobat and Foxit do not, so I use a free screen-shot program called Screen Hunter 5.  I select the pictures and save an individual image file for each student, with the student's name as the file name.  Of course, it's important to copy the names correctly, or else you'll end up calling Camren, your student, Carmen for the first three days of school.

I then use a program called Virtual Flashcards to create cards with the pictures and names, so I actually have to type out the complete name when I study them.  It also displays statistics on your study history after each set and can determine which cards you need to study more.  The company that created the program is no longer supporting it, so you can't upgrade from the free version, which is unfortunate because while you can make as many flashcard sets as you like, they are limited to 20 cards and there's a nag screen that comes up while you're studying.
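Since Virtual Flashcards is abandonware, here's a rough sketch in Python of the same idea: building a deck from image files named after students and tracking which names you missed so you know what to restudy. The file-naming convention (First_Last.jpg) and folder are my own invention for illustration.

```python
import os

def load_deck(folder):
    """Build (image_path, full_name) pairs from files named First_Last.jpg."""
    deck = []
    for fname in sorted(os.listdir(folder)):
        stem, ext = os.path.splitext(fname)
        if ext.lower() in (".jpg", ".png"):
            deck.append((os.path.join(folder, fname), stem.replace("_", " ")))
    return deck

def grade_round(deck, answers):
    """Compare typed answers to the correct names, ignoring case.
    Returns the cards that were missed and need more study."""
    return [(img, name) for (img, name), given in zip(deck, answers)
            if given.strip().lower() != name.lower()]
```

A real study loop would display each image and prompt for the name; the miss list from `grade_round` is the "which cards you need to study more" statistic the original program provided.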

Last year, I studied sporadically through Labor Day weekend (maybe 2 hours a day) and was good to go for the first day of school.  If you're crazy enough to try it too, hopefully this post will help.

1. I'm not sure which would be creepier to the freshmen: that I take their pictures, or that I know their names before even meeting them. Hopefully the second one.

Monday, July 19, 2010

Chappuis, Chapter 2: Where Are We Going?

In Chapter 1 of Seven Strategies of Assessment for Learning, Chappuis presents readers with her seven strategies:
  1. Provide students with a clear and understandable vision of the learning target.
  2. Use examples and models of strong and weak work.
  3. Offer regular, descriptive feedback.
  4. Teach students to self-assess and set goals.
  5. Design lessons to focus on one learning target or aspect of quality at a time.
  6. Teach students focused revision.
  7. Engage students in self-reflection, and let them keep track of and share their learning. (Chappuis, 2009, p. 12)
In Chapter 2, Chappuis begins moving students toward meaningful self-assessment by establishing clear learning goals for them to pursue. She points out that strategies three through seven rely heavily on the first two - without successful implementation of the first two strategies, the rest will not succeed either.

Provide students with a clear vision of the learning target

For the first strategy, Chappuis (2009) gives three steps (p. 22).

1. Share the learning target with students

Many teachers I know, myself included, post learning targets for their students on a whiteboard or blackboard, or display them with a digital projector toward the beginning of class. Yet we often make the mistake of using eduspeak:
Today's Learning Goals:
  • EL.08.RE.05 Match reading to purpose--location of information, full comprehension, and personal enjoyment.
As soon as the student hits EL.08.RE.05, their attention turns elsewhere. Not to mention the whole "match reading to purpose" bit; I didn't learn what that meant until college, despite the fact I'd been doing it since I started reading.

2. Use language students understand

Instead, it would be much better to say:
Today we will learn:
  • How to determine a goal or purpose for reading a text: looking for information, understanding a point of view, or for entertainment.
Chappuis has some pointers when rephrasing a learning target. She suggests stating the goal within the sentence structure "I am learning to _____________" or "we are learning to ___________." She also states that teachers should identify words or phrases that need clarification for their age group and use a dictionary definition to help decide the best way to simplify them.

3. Introduce students to the language and concepts of the rubrics you use

Chappuis encourages the use of rubrics for all the learning goals. Taken at face value, this could seem painstaking at best and insurmountable at worst. However, Chappuis stresses that rubrics should be both general and descriptive: they should be applicable to different assignments and use "language that explains characteristics of . . . performance at increasing levels of quality" (p. 40).

In the past few years in the blogosphere, at least, rubrics have gotten a bad rap. I have at least a couple of unpublished drafts that defend the rubric, but I'm forced to make their case here. Good.

The firestorm started shortly after Alfie Kohn published "The Trouble with Rubrics" in the March 2006 edition of English Journal. Kohn correctly admonishes teachers who use rubrics solely because they "make assessing student work quick and efficient, and they help teachers to justify to parents and others the grades that they assign to students" (Andrade 2000 as cited in Kohn 2006). There's plenty wrong with that statement that one can read about in Kohn's article.

He continues by stating that rubrics should make up only one piece of assessment because they spell out too exactly what students need to do in order to squeak by, stifling the pursuit of learning. This, Kohn (2006) argues, is why "all bets are off if students are given the rubrics and asked to navigate by them" (p. 13, author's italics). I agree that this is a common symptom among students in our schools, but I doubt rubrics are the sole culprit. Any grading system that tells students they need so many points to receive a given grade is asking for trouble. "Smart" students care about grades. The "dumb" ones are smart enough to know that grades don't really matter - what matters is whether what they're learning is relevant and whether they are proficient enough to survive in society.

Rubrics break down complex processes that take years to master into bite-size chunks that help students understand how they can improve. Grading writing (or anything else) holistically is acceptable only summatively, and for me that means at the end of the year.

The best, most understandable means of formative assessment for students is that presented in rubric form. As Kohn (2006) says, "it matters whether the objective is to (1) rank kids against one another, (2) provide an extrinsic inducement for them to try harder, or (3) offer feedback that will help them become more adept at, and excited about, what they’re doing" (p. 14).

It would be interesting to get Kohn and Chappuis in a room together; Kohn (2006) writes "any form of assessment that encourages students to keep asking, 'How am I doing?' is likely to change how they look at themselves and at what they’re learning, usually for the worse" (p. 14), my italics. I'll reserve final judgment until I finish Seven Strategies, but in my mind Chappuis is winning.

Her rubrics seem to deal with many of the causes of Kohn's concern. The rubric should avoid evaluative ("excellent organization of thoughts") or quantitative ("thoughts are organized into three paragraphs") language and focus on descriptive ("the writing is organized using any of the various conventions of language [punctuation, sentence fluency, paragraph breaks, headings] to make thoughts easily understandable to the reader") and diagnostic language. Evaluative and quantitative language focus on the work, while descriptive language focuses on the learning.

Use examples of strong and weak work

The creation of rubrics moves fluidly into Chappuis' second strategy.

Showing students examples of strong and weak work allows them to compare their own work to a finished product. In addition, Chappuis suggests having students choose weak and strong examples from a set and state why each is strong or weak, either with an existing rubric or as a precursor to creating a rubric together as a class.

Chappuis also emphasizes how important it is that the work shown to students is not from their own class. In the past I have shown work from students in the classroom, thinking they would not reveal their identity. Unfortunately, this was not always the case. More than once, a student's first response to seeing their work on the screen was “hey, that's mine!” Whatever lengths the teacher goes to in making the work anonymous can be unknowingly sabotaged by the very student the teacher is trying to protect.

Instead, Chappuis suggests choosing or creating at least three samples for each trait on a rubric from other classes or previous years – a strong, mid-range, and weak example. Give these samples to the students one at a time either on the overhead or, better yet, as individual copies. Have them Think-Pair-Share. First, students individually decide how they would score the product on the rubric: they decide whether the work is predominately strong or predominately weak. If strong, students start at the high end of the rubric and move down; if weak, the opposite, before settling on a score that meets most of the criteria. They then discuss with a partner, using the language in the rubric. The class then votes and gives reasons for the scores they chose before the teacher reveals the score they gave the example. The process is repeated for each remaining example or until students understand the language and concepts in the rubric, whichever comes last.

This would be an excellent way to teach 6-trait writing. I attempted it last year, but stopped when I ran out of examples before students fully understood the concept. Hopefully this year I'll have more samples to draw upon.

Works Cited

Chappuis, J. (2009). Seven strategies of assessment for learning. Portland, OR: Educational Testing Service.

Kohn, A. (2006). Speaking my mind: The trouble with rubrics. English Journal, 95(4), 12-15.

Saturday, July 10, 2010

Chappuis, Chapter 1

Last year I had the opportunity to attend a conference for teachers in their first three years. One of the breakout sessions focused on teaching students to self-assess and set learning goals.

The session was led by Jan Chappuis, who wrote Seven Strategies of Assessment for Learning. Self-assessment and goal setting is one of seven strategies Chappuis presents for helping students take ownership of the formative assessment process. Attending the session greatly affected how I taught during the second semester, particularly by providing students with clear learning goals in their own language and offering examples of strong and weak work, Chappuis' first two strategies. As a result, Seven Strategies of Assessment for Learning made my summer reading list.

In the first chapter, Chappuis takes time to emphasize the difference between formative and summative assessment. For her, formative assessment must involve feedback, while the product of summative assessment is a grade in the grade book. She also points out that these two types of assessment are not mutually exclusive. This is probably not news to many readers, but what seems innovative to me is that Chappuis places greater importance on the student's use of formative information than on the teacher's use. A teacher can provide remedial instruction, change differentiation grouping, provide individual tutoring, or make any number of other adjustments in response to formative results. Yet often these changes are met with resistance from students: “we already learned this,” “why are you putting me in the 'dumb group' now?” or “why do I have to work back here with you when everyone else is reading?”  Students who are aware of their formative results and understand what those results say about their grasp of classroom material are more likely not only to readily accept reteaching and remediation, but also to meet benchmark. Two instances stand out from my previous semester of teaching that reinforce this belief.

Tonya Arnold taught me this first trick that completely changed the way I examine test results and what information I share with my students. When I did administer a test, which was rare, the process was very summative: write test questions based on what I think students should know after a unit; review possible test questions in class; administer test; grade test; record grades; return tests and ask if students have any questions; when met with silence, begin the next unit.

Now, every summative test I give is also formative. Every test question I write is linked to a specific learning goal from the unit, and multiple questions address each learning goal. The next change comes after I grade the tests, when I use an Excel spreadsheet to tie each test question to its learning goal and tally the results, giving students and me individual and class percentages on each learning goal addressed in that unit. In class, we compare this to the same learning goals students were presented on the pre-test. Bam! Students immediately see, “wow, I scored 12% on such-and-such learning goal before this unit, but now I'm at 86%! I'm awesome” or “geez, I got the same percentage on such-and-such learning goal before and after this unit while the class average increased by 60%. Maybe I should actually do all those [not-so] stupid assignments Mr. B gave us . . .”
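For anyone curious, the spreadsheet's tally boils down to something like this Python sketch. The question-to-goal mapping and the goal names are invented for illustration; the real mapping comes from your own unit.

```python
# Hypothetical map of each test question to the learning goal it assesses.
QUESTION_GOALS = {1: "context clues", 2: "context clues",
                  3: "main idea", 4: "main idea", 5: "main idea"}

def goal_percentages(results):
    """results: {question_number: True/False} for one student.
    Returns {goal: percent of that goal's questions answered correctly}."""
    totals, correct = {}, {}
    for q, right in results.items():
        goal = QUESTION_GOALS[q]
        totals[goal] = totals.get(goal, 0) + 1
        correct[goal] = correct.get(goal, 0) + (1 if right else 0)
    return {g: round(100 * correct[g] / totals[g]) for g in totals}

def class_percentages(all_results):
    """Average each goal's percentage across every student in the class."""
    per_student = [goal_percentages(r) for r in all_results]
    goals = {g for s in per_student for g in s}
    return {g: round(sum(s[g] for s in per_student) / len(per_student))
            for g in goals}
```

Running `goal_percentages` on each student's answer key and `class_percentages` on the whole class gives exactly the individual and class numbers students compare against their pre-test.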


Admittedly, we still moved on to the next unit after students saw these results, but students had an open invitation to work with me outside of class and retake the test to bring up their scores. No one took me up on it. So I need to be more proactive in getting students to remediate on their own time and provide this feedback throughout the unit. There is room to grow, but this is a great starting place.

Here's a sample spreadsheet in Excel and OpenOffice formats if you'd like to adapt it for your own students. It completely blew my mind as well as the minds of my students – hopefully it will do the same for you.

The second instance of worthwhile feedback that came to mind was our second round of state testing.  I presented students with data from their test results, which is automatically broken down by strands.  I placed the students in differentiated groups by the strand they scored the lowest on.  Each group worked on different tasks for the week - those who had low vocabulary scores received explicit instruction in context clues; students who scored poorly on literary text got to make flash cards of different literary terms.  When we took the test, all but nine students improved their overall score, and 15 more students passed the exam, meaning they have one less hurdle to jump in order to graduate.
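The grouping step is simple enough to sketch in Python. The strand names and scores below are made up; the idea is just to assign each student to the strand where they scored lowest.

```python
def differentiated_groups(scores):
    """scores: {student: {strand: score}}.
    Assign each student to the strand where they scored lowest."""
    groups = {}
    for student, strands in scores.items():
        weakest = min(strands, key=strands.get)  # lowest-scoring strand
        groups.setdefault(weakest, []).append(student)
    return groups
```

Each resulting group then gets the week's tasks targeted at its strand, like the context-clues instruction or literary-term flash cards above.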

But the real proof is in the strand scores.  Students in differentiated groups averaged a 6 to 10 point increase on their target strand, compared to a 0 to 4 point increase in non-targeted strands.  Not only were the activities helpful, but I would argue students saw how those activities would help them pass the test - the activities were directly related to areas they struggled with.  As a result, students were more invested in the process.

Next year, I want to do this more than twice.  There is an added bonus when giving students formative assessment.  Chappuis (2009) quotes Sadler (1989):
The indispensable conditions for improvement are that the student comes to hold a concept of quality roughly similar to that held by the teacher, is able to monitor continuously the quality of what is being produced during the act of production itself, and has a repertoire of alternative moves or strategies from which to draw at any given point.
Students understand why the teacher is reteaching and how it will help them, and while learning they monitor the quality of their own work - whether it is a strong or weak example, whether it demonstrates proficiency on the learning goal, and whether they are making progress.  Sounds like a pretty good deal.

Chappuis offers seven strategies to make this happen.  I mentioned these earlier this year.  As these strategies are the topic of the remainder of Chappuis's book, I'll be posting about them in greater depth in the following weeks:

Where Am I Going?
1. Clear learning targets
2. Models of strong & weak work
Where Am I Now?
3. Offer regular, descriptive feedback
4. Teach students to self-assess & set goals
How Can I Close the Gap?
5. Design lessons that focus on one learning target at a time
6. Focused revision
7. Students track their progress & self-assess

Chappuis, J.  (2009).  Seven strategies of assessment for learning.  Portland, OR: Educational Testing Service.