Sunday, December 5, 2010
December 5 Hike
I often find that when I need to work out what I'm going to teach in the next week1, a hike in Portland's Forest Park puts some grease on the cogs and gets my wheels turning.
Today's is going to be a doozy. I just posted the paper I wrote this week on teaching reading with constructivist methods for my Theory of Instruction class at PSU. I'm a pretty sick2 individual to post a graduate paper on my blog, but you are sicker if you take the time to read it. Basically, here's the problem: at secondary schools, students are often forced into a one-size-fits-all remedial reading class if they don't meet benchmark. They lose an elective and are not happy about it. They take their anger out on the class and teacher by totally blowing both off. The teacher notices they have little control of the classroom and plans lessons around a direct instruction model because it gives them more control over the class.
Constructivist lessons would be more fun for teacher and student alike and might do a better job of teaching the skills. They increase motivation, but they require a certain critical mass of motivation to get off the ground; otherwise, students take the time to "socially create meaning" about the top ten list on Z100 rather than the reading strategies the teacher is asking them to practice.
How do I get my students to that critical mass?
On top of that, I have a colleague who gets her students to work by making their lives uncomfortable and unpleasant if they deviate from the assigned activity. I used to be of the same mind until an administrator asked me to punish only students who disrupted class; students who choose not to learn have that right. But my colleague's students all have "A"s and know their reading strategies. My students are receiving zeros.
Is it my job to make them do the work3? In the long run, what will be the greater benefit to the student?
1. Read "day." [Go back.]
2. I mean sick in the conventional rather than the contemporary meaning. [Go back.]
3. Can I really make them do anything? It would seem my colleague does by making the alternative less desirable. [Go back.]
Tags:
Classroom Management,
Constructivism,
Hikes,
Pedagogy,
PSU,
Reading,
Reflection,
Regular
Teaching Reading with Constructivist Methods
Instruction in current reading intervention classes at the secondary level leans heavily towards a direct instruction model. While direct instruction is widely used and proven, there are other theories which are pedagogically sound and have virtues of their own. Constructivism is one such theory, widely accepted in the educational community. The purpose of this paper is to examine the current prevalence of direct instruction in reading intervention programs and whether a constructivist model could be as successful or even more so.
To ask whether one proven method of instruction could replace another may seem trivial. However, there are a number of reasons why it is not. First, differentiation encourages not only teaching students at different levels of ability but also teaching them in different ways. Teaching reading through both direct instruction and constructivism would meet this requirement. Second, some researchers place direct instruction and constructivist practice at opposite ends of the spectrum. In the culture of best practice, some educators and administrators may consider one theory better than another. Third, direct instruction may appear more appealing to districts and teachers for reasons completely separate from student learning or achievement. These are valid reasons to explore the possibility of teaching reading differently.
Differentiated instruction asks that teachers meet all students at their current level. Teaching the same lesson to all students in a class and asking them all to complete the same assignments will likely produce a bell curve of learning. For some students, the lesson will be at an independent level – they could complete the task without the lesson if asked. Other students will be at an instructional level – the lesson is meaningful and helps them complete the assignment and learn new information. A final group will be at the frustration level – they will only be able to complete part of the assignment because they lack the foundational knowledge, gained by other students through different experiences, needed to fully comprehend the lesson.
In differentiated instruction, students often complete similar tasks requiring differing levels of ability. Because all students learn different subjects at different paces, in a differentiated classroom students all learn at their own instructional level. Not only do students learn at different rates, but also in different ways. Teaching an entire course using a single instructional theory will likely benefit some students while putting others at a severe disadvantage. Wormeli (2007) addresses this in his book Differentiation: “Many teachers follow Madeline Hunter's direct instruction model. It's a logical and well loved approach that can be part of a differentiated classroom. It is ineffective, however, if it becomes the only model we use” (p. 72, author's italics). Balancing instructional methods across a curriculum benefits all students more equally. This is one reason teaching reading through a constructivist model is important.
While differentiating instruction is one reason for examining the issue of instructional theory in literacy education, the fact that some consider direct instruction and constructivism on opposing sides of the educational spectrum is another. Johnson (2004) attempts to find a happy medium between these two opposing views: “ideally and ultimately, the two sides create an array of instructional possibilities, a series of dynamic tensions, which result in balance, and order, and enhanced curricular alternatives” (p. 83).
Some teachers and administrators may favor direct instruction not out of philosophical preference, but because of accessibility or control rather than student learning or achievement. The apparent abundance of packaged reading curricula following a direct instruction model suggests it is either the best way to teach reading or the easiest way to produce and sell reading curricula. There may also be a tendency for teachers to gravitate toward direct instruction because of the greater teacher control it offers in classrooms filled entirely with at-risk students.
Language arts classes at the high school level consist primarily of literature analysis and composition. Literacy is generally seen as a skill that should have already been acquired. But in an attempt to meet increasing demands for adequate yearly progress on state test scores as required under No Child Left Behind (NCLB), many secondary schools are having to do what they have not in the past: teach reading. The general population of secondary educators is not prepared for this.
For this reason, many companies in the business of offering boxed curriculum packages are selling an array of reading programs. And according to Brooks and Brooks (1999), many districts were buying even before NCLB was passed. “To increase the percentages of students passing state assessments – and to keep schools off the states' lists of failing schools – local district spending on student remediation, student test-taking skills, and faculty preparation for the new assessments increases” (Brooks & Brooks, 1999, p. 20). In the interest of making the transition from literature teacher to literacy teacher easy, the majority of these programs contain some scripting, lessons and units that encourage some degree of the transmission model, and formative and summative assessments that predict how the student will perform on the state test.
Yet there are questions whether packaged curricula and more spending on remediation really improve student learning. “Despite rising test scores in subsequent years, there is little or no evidence of increased student learning. A recent study by Kentucky's Office of Educational Accountability (Hambleton et al., 1995) suggests that test-score gains in that state are a function of students' increasing skills as test takers rather than evidence of increased learning” (Brooks & Brooks, 1999, p. 20). This raises the question: is direct instruction a better way to teach remediation, or is it just easier and more reproducible than constructivism or other theoretical instructional alternatives?
At the secondary level, response to intervention programs typically gravitate towards a standard protocol model in which students requiring remediation outside the general education classroom are all placed in an additional course rather than receiving individualized instruction specific to their literacy strengths and weaknesses. This practice creates classrooms with generally two (stereo)types of students: those who lack the skills independent readers have because they did not acquire them implicitly and require explicit instruction, and those who either did not try on the screening exam or lack the skills because they lack motivation in traditional educational settings. Such classrooms can be much more challenging to manage behaviorally, because a small handful of students want to learn and improve their skills while the rest feed off each other's energy until they spiral out of control. While a constructivist model encourages more social interaction and shares control of the learning almost equally between students and teacher, a direct instruction model grants the teacher greater control of the lesson. This greater control may be attractive from a classroom management standpoint, but does it increase student learning?
A number of researchers argue for more teaching from a constructivist standpoint in reading intervention. One reason is that social constructivism can produce more reliable and actionable assessment results in the screening and diagnostic stages of reading intervention. Another is that educators who already lean towards constructivism naturally encourage it in remediation. Even Johnson (2004), whose diplomatic close to her paper is quoted previously, seems to encourage more constructivism in the teaching of reading. Another researcher finds that social interaction promotes the construction of knowledge in her developmental readers. And finally, one researcher has found that students in a constructivist-modeled classroom have greater motivation and “more fun” than the same students do in a class modeled on direct instruction.
One criticism of the current institutionalized model for reading remediation is its reliance on norm-based testing – white, middle-class students often outperform their peers from diverse backgrounds. Macrine and Sabbatino (2008) address these concerns by proposing a dynamic assessment and remediation approach (DARA). “Dynamic assessment is a procedure that determines whether substantive changes occur in examinee behavior if feedback is provided across an array of increasingly complex or challenging tasks” (Macrine & Sabbatino, 2008, p. 61). Macrine and Sabbatino (2008) argue that this approach is fairer for minority students (and perhaps all students) because the teacher provides support as learners build meaning from a piece of text. In this model of assessment, the teacher gets a clear view of the specific reading strategies a student possesses rather than a black-and-white determination of whether they understand a text. Certainly the teacher has more information when a constructivist perspective is applied to the initial screening and diagnostic assessments.
Some educators believe broad-based institutional reform is required to bring about constructivist teaching. For Brooks and Brooks (1999), “serious educational reform targets cognitive changes in students' thinking. Perceived educational reform targets numerical changes in students' test scores” (p. 23). They argue that constructivist modeled classrooms increase student learning, though maybe not always test scores.
Johnson (2004), too, seems biased towards the constructivist approach. She goes so far as to equate direct instruction with the transmission model of student learning, in which teachers are like pitchers of information filling their cup-like students with knowledge. Few teachers today accept the transmission model. Yet Johnson (2004) finds it “ugly but effective” (Schug et al., 2001, as cited in Johnson, 2004, p. 77). Johnson (2004) attempts to find middle ground between direct instruction and constructivism: “Teachers, as well as students, are drawn to instructional approaches that focus on active student involvement and meaningful learning [like constructivism] . . . And yet, the evaluative outcome research clearly establishes the benefits of [direct instruction]” (Johnson, 2004, p. 77). Is this mix of the two the furthest a remedial teacher can get towards the constructivist end of the spectrum?
Apparently not. Kaiden (1998) found that “the transformation of passive readers into active readers and learners is clearly enhanced through the dynamics of social interaction with peers” (p. 479). In her experience, students' reading was enhanced when they controlled the classroom and the direction of the discussion.
Likewise, Donalson (2008) found students' motivation was higher when taught from the constructivist theory. Yet students didn't see this as learning. “When they were constructing learning through inquiry and engagement, they viewed the activity as fun; however, they equated learning with a transmission model” (Donalson, 2008, p. 213). Hands-on activities were found to provide intrinsic motivation for reading (Guthrie et al., 2006, as cited in Donalson, 2008, p. 212) though students viewed the completion of worksheets and direct instruction as learning activities. Not only this, but Donalson (2008) found that the curriculum taught using constructivist theory “was much more aligned with the reading research in regards to the instructional recommendations for readers who struggle” (p. 206), while the purchased curriculum later used was less aligned.
This overview of a few studies is certainly not meta-analytical; in fact, it is superficial. However, some implications can be drawn. Foremost, it would appear reading intervention can be taught from a constructivist perspective. Still, the hesitation of some teachers to implement these practices in intervention classrooms may be warranted. In a significant number of these studies, the participants “were motivated, strategic, knowledgeable, and social interactive” (Kaiden, 1998, p. 477). Brooks and Brooks (1999) admit that “organizing a constructivist classroom is difficult work for the teacher and requires the rigorous intellectual commitment and perseverance of students” (p. 22). Teachers who shy away from constructivist models because of low student motivation and increased behavioral problems in intervention classes may need to mix transmission and constructivist models merely to maintain order, or even do away with constructivism completely. Yet some research suggests there is a critical mass of motivation; if students view the learning as fun, as Donalson (2008) found, a constructivist classroom may both improve learning and manage itself.
References
Brooks, M. G., & Brooks, J. G. (1999). The courage to be constructivist. Educational Leadership, 57(3), 18-24.
Donalson, K. (2008). Opportunities gained and lost: Perceptions and experiences of sixth grade students enrolled in a Title I reading class (Doctoral dissertation). Retrieved from ERIC. (ED502310)
Johnson, G. (2004). Constructivist remediation: Correction in context. International Journal of Special Education, 19(1), 72-88.
Kaiden, E. (1998). Engaging developmental readers in the social construction of meaning. Journal of Adolescent and Adult Literacy, 41, 477-49.
Macrine, S. L., & Sabbatino, E. D. (2008). Dynamic assessment and remediation approach: Using the DARA approach to assist struggling readers. Reading and Writing Quarterly, 24(1), 52-76.
Wormeli, R. (2007). Differentiation. Portland, ME: Stenhouse Publishers.
Tags:
Pedagogy,
PSU,
Reading,
Regular,
St. Helens Middle School
Sunday, October 24, 2010
New Blogger
Take the time to check out my colleague Danielle Speiser's new blog at http://daniellespeiser.blogspot.com. Her bite-sized posts are a quick read, yet full of wisdom.
Tags:
Regular,
St. Helens Middle School
Friday, October 8, 2010
Welcome OCTE Conference Colleagues . . .
. . . and future edubloggers!
Here are a few of the links I spoke about briefly:
Dan's Concept Checklist
How Dan Assesses
My Reading Skills Sheet
I also referenced a couple of posts about why I blog and an example post from when I had a question rather than a great idea to share. Feel free to read these if they're helpful.
Below are links from the handout. Feel free to continue the conversation we started today in the comments, and please let me know when you start blogging - I want to steal your ideas!
Great Edubloggers to Follow
Kylene Beers, Language Arts
Dina, Language Arts
Dana Huff, Language Arts
Bud Hunt, Language Arts / Technology
Clay Burell, General Education
Dan Meyer, Math
Kate Nowak, Math
Karl Fisch, Math / General Education / Technology
An added bonus for blog visitors: to the left of this post you can find my aggregator and recent posts I've read and enjoyed.
Put Together a Reading List
Step 1. Choose a reader.
Google Reader is a popular web based reader that also has some social functions built in.
FeedReader I haven't used; I actually just found it with a Google search. You download it to your computer, connect to the internet, download your feeds, and then you can read offline.
Firefox, the web browser, also has a feed reader built in if you're not ready to dive into a separate account or program.
Step 2. Find a blog.
Some good ones are up top. You can also go to those blogs and see who they follow (look for a sidebar titled “Aggregator” or “Who I Read”).
Step 3. Find the RSS feed.
Usually marked by an orange icon that looks like the one at the top left hand corner of this page labeled "Posts."
Right click on this icon and select “Copy link address.”
Paste this into your feed reader.
RSS feeds can be a pain sometimes. With many feed readers, if you can't find the RSS feed, you can just enter the blog address. If you're using Google Reader, you can also look for an "Add to Google" button.
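If you're curious what your feed reader is actually doing behind the scenes, here's a rough sketch in Python. It uses the third-party feedparser package, and the feed address below is just this blog's Blogger feed as an example; any blog's feed address will work. Completely optional, since the reader does all of this for you.

```python
# Rough sketch: fetch a blog's feed and list its most recent posts.
# Uses the third-party feedparser package (pip install feedparser),
# which handles both RSS and Atom feeds.
import feedparser

# Blogger publishes each blog's feed at /feeds/posts/default;
# this URL is just an example. Swap in any blog's feed address.
FEED_URL = "http://pedagogypractice.blogspot.com/feeds/posts/default"

def latest_posts(feed_url, limit=5):
    """Return (title, link) pairs for the newest entries in a feed."""
    feed = feedparser.parse(feed_url)
    return [(entry.title, entry.link) for entry in feed.entries[:limit]]

if __name__ == "__main__":
    for title, link in latest_posts(FEED_URL):
        print(title)
        print("  " + link)
```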
Step 4. Read regularly and repeat.
You don't have to read everything a blogger posts. You won't be able to once you build up your reading list. Just browse for the titles and authors that catch your attention.
Starting Your Own Blog
Step 1. Choose a blogging service.
Blogger is a completely free blogging service also run by Google.
Edublogs has a free account option, or you can pay for more features.
WordPress is a free blogging platform, but you have to find hosting for it. This offers more features, but the average individual will have to pay for a web hosting service.
LiveJournal hosts your blog but puts ads on your site.
Step 2. Sign-up for a blog.
Step 3. Write your first post. You could just polish up the one you wrote today.
Step 4. Post a comment about it here so others from the conference can read your posts.
Tags:
Blogging,
Ed-Tech,
OCTE,
Professional Development,
Regular
Tuesday, September 21, 2010
Setting the Tone Recap
Before school started, I was a little apprehensive about how to set the tone for the year.
After writing that post, I thought Marzano (2003) might have something in Classroom Management that Works. Turns out there's a whole chapter on "getting off to a good start."
While there hasn't been enough research on setting the tone for Marzano to do a meta-analysis, the chapter did confirm the direction I was already leaning towards: thaw slowly. Hit them hard at the beginning of class and lighten up towards the end as warranted.
Turns out that's totally the way to go, at least compared to what I've done in the past1. Overall, I've had the best week of my career, including classroom management. I think it's partially this slowly thawing attitude I've adopted and partially the techniques I've taken from my summer reading of Teach Like a Champion2.
The rules I outlined in my previous post are posted prominently, but the procedures are posted in this PBIS matrix suggested at an in-service last spring.
Calendars are still a big part of classroom management, building rapport, and utilizing exit slips. My students now get 10 points a day for them, and calendars are 50% of their grade.
I've been most surprised by students' willingness to adopt some more prescriptive, acronymic, draconian measures3. I have a poster that shows students how their desks should look (Lemov, 2010, p 159). I frequently use SLAT, which stands for sit up straight, listen, ask and answer questions, and track the speaker (Lemov, 2010, p 158). I am surprised by how quickly 7th graders are willing to sit up straight when I ask them to; I was expecting a lot more pushback. My 8th graders make fun of it, but do it all the same. My students seem more alert and, with tracking the speaker, more attentive to each other. If I call on a student to answer a question, they can't refuse to try, because the expectation is established that they ask and answer questions.
I'm more energetic and more dedicated to selling my content (Lemov, 2010, p 51), and it rubs off on the students. My calendars have a daily rating for whether students felt they learned something new and whether they were bored. I haven't put the data together in a spreadsheet4, but by eyeballing it, students feel they're learning more and are less bored than last year. This is partially my energy and confidence, but I think also the amount of content we hit in a day. To paraphrase Lemov, just as it seems you're moving faster when flying in a plane close to the ground, in class the numerous reference points students see in a day create the illusion that we are moving faster (Lemov, 2010, p 226).
Works Cited
Lemov, D. (2010). Teach Like a Champion. San Francisco: Jossey-Bass.
Marzano, R. J. (2003). Classroom Management That Works. Alexandria, VA: ASCD.
1. Oooh, big surprise. Go back
2. Great New York Times magazine article on this book. Some might complain that it boils teaching down to 49 techniques, but I do think great teachers are made not born. There is some value here. Go back
3. I'm not sure what word describes SLAT and the desk poster. It feels weird to me. Go back
4. I tried this for one week last year. It takes long enough to grade the calendars each day alone - when am I going to analyze data? If I ever figure it out, it's a post of its own with some awesome citations of K. Anders Ericsson. Go back
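A postscript to footnote 4: if the calendar ratings ever do make it out of the gradebook and into a spreadsheet, the tallying itself would be the quick part. Here's a rough sketch of what it could look like; the file name and columns are made up, since nothing is actually exported yet.

```python
# Rough sketch: average the daily calendar ratings by class period.
# Everything here is hypothetical; it assumes the calendars were exported
# to a CSV with columns date, period, learned, bored (ratings as numbers).
import csv
from collections import defaultdict

def summarize(path="calendar_ratings.csv"):
    totals = defaultdict(lambda: {"learned": 0.0, "bored": 0.0, "n": 0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            t = totals[row["period"]]
            t["learned"] += float(row["learned"])
            t["bored"] += float(row["bored"])
            t["n"] += 1
    # Print the average "learned" and "bored" ratings for each period.
    for period, t in sorted(totals.items()):
        print(f"Period {period}: learned {t['learned'] / t['n']:.2f}, "
              f"bored {t['bored'] / t['n']:.2f} (n={t['n']})")

if __name__ == "__main__":
    summarize()
```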
Saturday, September 11, 2010
Revisiting the Pirate Lesson
So the majority of voters in the poll went for doing this lesson on Talk Like a Pirate Day. I was not planning on going in that direction. I was going to hit it when I teach character.
I would love to hear some explanations in the comments.
Tags:
Lesson Planning,
Pedagogy,
Reading,
Regular
Monday, September 6, 2010
Setting the Tone
The first day of school is tomorrow here in the Pacific Northwest. This will be my third year of teaching, albeit half-time. I'm excited to get the year started.
By the end of last year, I felt like I really had a reliable method for classroom management. Teachers implemented PBIS school-wide, I had immediately tangible consequences for good and bad behavior, and clear rules and procedures were posted in the classroom. My administrator described the difference as night and day. More importantly, my students were more attentive, more productive, and learning more.
To start with these tools all in place at the beginning of the year is a nice change. Every year before, I've negotiated rules and expectations with students a week or two into the year. While this hands ownership to the students, I currently lack either the presence or the experience to maintain decorum for those first weeks. Consequently, students aren't mindful of what their rules should make the classroom look like. Maybe in a few years I can move back in that direction.
I have four rules this year. Directly from my syllabus:
- Treat others as you would like to be treated. Enough said.
- Do your best. It is impossible to succeed without trying.
- Accept that you will not be treated the same as everyone else1. The teacher cannot watch everyone at the same time, and sometimes will catch one person breaking the rules and not another person. When a student is given a consequence, “so and so did it too” is not an acceptable response. Likewise, students are at all different levels and therefore some students will need harder or easier work. Students should come to class ready to learn more for themselves, not in comparison to their peers.
- Bring and use the right tool for the right job. Students can't complete class work without the proper materials. Likewise, using certain materials at the wrong time (like listening to an iPod during a discussion) impedes learning. There may be a time when students can use cell phones and iPods in class, but they should be turned off and out of sight until the teacher asks for them.
In addition, I have five sets of procedures. Again, from the syllabus:
When reading:
- Reading is thinking. It is easier to think when it is silent and there are few distractions. Do not talk when reading.
- When you do need to talk to the teacher or a partner, whisper.
- Fully focus on reading during silent reading time.
When writing:
- Writing is thinking. It is easier to think when it is silent and there are few distractions. Do not talk when writing.
- When you do need to talk to the teacher or a partner, whisper.
- Use the entire writing time working on writing.
When entering the classroom:
- Get your materials from the filing cabinet in the room.
- Be seated in your desk before the bell rings.
- When the bell rings, immediately begin the warm-up exercise.
When leaving the classroom:
- Mr. B will dismiss you, not the bell.
- Leave your workspace cleaner than you found it and return all materials to the right place.
- When your workspace is clean, sit in your chair and wait to be dismissed.
- When dismissed, push your chair in.
- Turn in calendars and any other assignments on your way out the door.
During class:
- Follow SLAT – Sit up straight, Listen, Ask and answer questions, and Track the speaker.
- If you need to use the bathroom, quietly get out one of your bathroom passes and hold it up. I will either sign it or ask you to wait until a better time.
- If you need to sharpen a pencil, hold your pencil up in the air and I will trade you for a fresh one.
Rather than have students sit listening to me drone on about all the expectations, I've taken a page from Dan's post on his first day of class and give students this:
Throw in some role play, some enthusiasm, and the first day jitters and I hope to have their attention for the period. The first week at least will be consumed by setting up and practicing these and other classroom processes.
All this still leaves me with one nagging question. Do I have enough faith in my already established [overly-bureaucratic?] classroom management that it alone can get me through the first day of school?
On the first day, apparently I've always presented myself as too nice, because every year thus far I end up backpedaling, coming down harder on students the next day or the next week because I didn't get a good enough handle on them in the first hour we met.
I know teachers who intentionally throw a student out of class on the first day just to show they mean business. That doesn't seem like the answer to me, especially since even with a beard I'm not intimidating to anyone over the age of 10. I don't want students to fear me; I want them to fear my justice. I don't want them walking into class on tiptoe, but I do want my expectations clear, my consequences just, quick, and utterly devastating.
Tomorrow, how do I set the tone?
1. All credit for this rule goes to Tom Fuller, who should probably be blogging. Go back
Tags:
Classroom Management,
Regular
Friday, September 3, 2010
Talk Like a Pirate Day
Note to readers: this post involves me thinking aloud. This has been proven in the state of California to cause dizziness and fainting. Consider yourselves warned.
So I have this great idea. On International Talk Like a Pirate Day, I'm going to dress up like a pirate and do a lesson on how dialogue can give us information about characters; we can make inferences about the character by the way they speak. Splendid idea, yes?
Problem is, my first unit isn't on character. No sir. It's on fix-up strategies. Inferences are fix-up strategies. But they are very complex fix-up strategies that should probably be taught after a student has mastered questioning, clarifying (of which inferences are a pseudo-subset), and predicting. And teaching a random lesson that has nothing to do with your unit assessment is poor backwards planning (you know, when you plan the assessment first, then the lessons - whatever that's called).
So I am left with the following options:
- Dress up like a pirate on the day I teach dialogue as a way to analyze a character.
- Dress up like a pirate on the day I teach inferences.
- Dress up like a pirate on International Talk Like a Pirate Day and do it anyway; this is going to be one memorable lesson and I can refer back to it when we touch on inferences and character.
- Give a boring lecture on character/inferences.
Please vote using the poll at the top right hand corner of the blog1. If you're reading this on Facebook, you'll have to go to the blog: http://pedagogypractice.blogspot.com. The poll closes in one week.
1. Yeah, I probably already know the answer. Humor me. Go back
Tuesday, August 24, 2010
Readicide Book Review
In addition to Seven Strategies of Assessment for Learning and Teach Like a Champion, the third book on my summer reading list was Readicide by Kelly Gallagher. Gallagher teaches high school English in Anaheim, California, has written a few professional development books and stars in a professional development DVD I viewed last year.
Gallagher (2009) defines readicide as "the systematic killing of the love of reading, often exacerbated by the inane, mind-numbing practices found in schools" (p 2). Gallagher argues that readicide is running rampant and offers methods for stopping it.
Ever since I started my college courses in education, I've been concerned that I am too easily convinced by pedagogy books. Many of my classmates would find flaws with the books we were assigned while I saw them predominantly as invaluable resources. I've been worried I haven't been critically reading anything that has to do with teaching. Readicide quelled those concerns.
Readicide is a good book. It has good ideas that I will touch on later in this post. But it also has a number of either/or and slippery-slope logical fallacies. Gallagher's chapters address four causes of readicide:
- No Child Left Behind
- Not providing students with good books and time to read them in school
- Over-teaching or under-teaching novels
- Using too many reading tools (sticky notes, double entry diaries)
In his first chapter, Gallagher suggests that because of NCLB, teachers must choose either to teach meaningful curriculum or to teach to the test. In a bulleted list of the cycle started by NCLB, Gallagher (2009) states:
- Because the 'worth' of teachers and administrators is largely perceived by how well students do on these shallow exams, educators narrow the curriculum in an all-out attempt to raise reading scores.
- Workbooks replace novels. Reading becomes another worksheet activity. Students are taught that the reason they should become readers is to pass a test.
- Reluctant readers drown in test preparation, ensuring any chance they may have had of developing a lifelong reading habit is lost. (p 17)
To be fair, my school's test scores are alright. I don't teach in a major suburban district, but in a rural area from which many workers commute to the city; still, over 80% of my students are White. Gallagher teaches in the Los Angeles Unified School District, and the majority of his students are Black and Latino. Placed in context, these bullet points also point out that minority schools are often the ones performing poorly on state tests, are poorly funded to begin with, and are threatened with budget cuts if they don't improve their scores1.
But Gallagher seems to suggest that these are the necessary results of NCLB, when my school has not dropped novels from the curriculum. In fact, for our classes aimed at under-performing readers, we've drastically increased the number of books they have access to and the amount of freedom they have in choosing their books. My colleagues and I don't teach to the test; we teach to standards that the test assesses, and we do it with good books, not with worksheets.
After his first chapter, however, I found areas of agreement, or at least statements that make more sense to me. His second chapter suggests improvements to the books schools offer students and the amount of time they have to read them in school. Sustained silent reading (SSR) deserves a spot in the daily schedule; so many students don't have access to books or the time to read them at home. Gallagher's third chapter rightly warns against over- or under-teaching books. Many novels can be taught to the point of complete boredom when the teacher searches for thematic meaning on each page. Likewise, teachers can't just hand students a book and expect them to figure it out themselves.
Gallagher's fourth chapter was the most interesting to me. He proposes the concept of a "reading flow," citing Csikszentmihalyi (1990): "the state in which people are so involved in an activity that nothing else seems to matter; the experience itself is so enjoyable that people will do it even at great cost, for the sheer sake of doing it" (p 4 as cited in Gallagher, 2009 p 61). Gallagher argues that asking students to make sticky notes or do a double entry diary while reading interrupts the reading flow.
I'm not sure where to stand on this. On one hand, I chose to use sticky notes while reading this book - in part to help me when writing this post and in part to help me think deeper about what I was reading. The amount of rereading and searching for quotations was greatly reduced because of it. On the other hand, I had a number of students last semester who were so engrossed in their books - in the reading flow - that no amount of cajoling2 from me could get them to do a reading response journal, earning them a D in the class. But I know they actually read, understood, and enjoyed the book. One student told me it was the first book he'd read since fifth grade. Isn't that more important than the number of sticky notes one writes over the course of 20 pages? But if it's an intervention reading class, isn't part of it learning and demonstrating the skills of thinking (writing) aloud (on a sticky note)?
Gallagher (2009) instead asks students to do a close reading of one page of text from their books, annotating it (p 101-2). They read the assigned text and the next day have a copy of the page waiting for them. I think I may drop the number of sticky notes I require, telling students, "In this book I want you to have a sticky note with one good question, one good prediction, and one good inference," the qualities of which they'll know from a rubric. If they're meeting in a book club, they'll need to fill out a separate role sheet. Then I'll give them a page to annotate so I can get a sense of their skills over a whole book (where you're not necessarily going to make a prediction every single page) and a snapshot (where you can interact with one page of text with more than one or two talkbacks).
Overall, though I think he was a little melodramatic in regards to NCLB and I don't feel that my colleagues and I drastically overteach or underteach our material, I'm glad I read Readicide. It challenged enough of my current ideas to send me in new directions, and that's what summer reading is for.
Gallagher (2009) defines readicide as "the systematic killing of the love of reading, often exacerbated by the inane, mind-numbing practices found in schools" (p 2). Gallagher's premise argues that readicide is currently running rampant and offers methods for stopping it.
Ever since I started my college courses in education, I've been concerned that I am too easily convinced by pedagogy books. Many of my classmates would find flaws with the books we were assigned while I saw them predominately as a invaluable resources. I've been worried I haven't been critically reading anything that has to do with teaching. Readicide quelled those concerns.
Readicide is a good book. It has good ideas that I will touch on later in this post. But it also has a number of either/or and slippery slope logical fallacies. Gallagher's chapters address four causes of readicide:
- No Child Left Behind
- Not providing students with good books and time to read them in school
- Over-teaching or under-teaching novels
- Using too many reading tools (sticky notes, double entry diaries)
In his first chapter, Gallagher suggests that because of NCLB, teachers must choose to either teach meaningful curriculum, or teach to the test. In a bulleted list of the cycle started by NCLB, Gallagher (2009) states:
- Because the 'worth' of teachers and administrators is largely perceived by how well students do on these shallow exams, educators narrow the curriculum in an all-out attempt to raise reading scores.
- Workbooks replace novels. Reading becomes another worksheet activity. Students are taught that the reason they should become readers is to pass a test.
- Reluctant readers drown in test preparation, ensuring any chance they may have had of developing a lifelong reading habit is lost. (p 17)
To be fair, my school's test scores are alright. I don't teach in a major suburban district, but in a rural area from which many workers commute to the city; still, over 80% of my students are White. Gallagher teaches in the Los Angeles Unified School District, and the majority of his students are Black and Latino. Placed in context, these bullet points also point out that minority schools are often the ones performing poorly on state tests, are poorly funded to begin with, and are threatened with budget cuts if they don't improve their scores1.
But Gallagher seems to suggest that these are the necessary results of NCLB when my school has not dropped novels from the curriculum. In fact, for our classes aimed at under-performing readers, we've drastically increased the number of books they have access to and the amount of freedom they have in choosing their books. My colleagues and I don't teach to the test; we teach to standards that the test assesses, and we do it with good books, not with worksheets.
After his first chapter, however, I found areas of agreement, or statements that make more sense to me. His second chapter suggests improvements to the books schools offer students and the amount of time they have to read them in school. Sustained silent reading (SSR) deserves a spot in the daily schedule. So many students don't have access to books or the time to read them at home. Gallagher's third chapter rightly warns against over- or under-teaching books. Many novels can be taught to the point of complete boredom when the teacher searches for thematic meaning on each page. Likewise, teachers can't just hand students a book and expect them to figure it out themselves.
Gallagher's fourth chapter was the most interesting to me. He proposes the concept of a "reading flow," citing Csikszentmihalyi (1990): "the state in which people are so involved in an activity that nothing else seems to matter; the experience itself is so enjoyable that people will do it even at great cost, for the sheer sake of doing it" (p 4 as cited in Gallagher, 2009 p 61). Gallagher argues that asking students to make sticky notes or do a double entry diary while reading interrupts the reading flow.
I'm not sure where to stand on this. On one hand, I chose to use sticky notes while reading this book - in part to help me when writing this post and in part to help me think deeper about what I was reading. The amount of rereading and searching for quotations was greatly reduced because of it. On the other hand, I had a number of students last semester who were so engrossed in their books - in the reading flow - that no amount of cajoling2 from me could get them to do a reading response journal, earning them a D in the class. But I know they actually read, understood, and enjoyed the book. One student told me it was the first book he'd read since fifth grade. Isn't that more important than the number of sticky notes one writes over the course of 20 pages? But if it's an intervention reading class, isn't part of it learning and demonstrating the skills of thinking (writing) aloud (on a sticky note)?
Gallagher (2009) instead asks students to do a close reading of one page of text from their books, annotating it (p 101-2). They read the assigned text and the next day have a copy of the page waiting for them. I think I may drop the number of sticky notes I require, telling students, "In this book I want you to have a sticky note with one good question, one good prediction, and one good inference," the qualities of which they'll know from a rubric. If they're meeting in a book club, they'll need to fill out a separate role sheet. Then I'll give them a page to annotate so I can get both a sense of their skills over a whole book (where you're not necessarily going to make a prediction every single page) and a snapshot (where you can interact with one page of text with more than one or two talkbacks).
Overall, though I think he was a little melodramatic about NCLB and I don't feel that my colleagues and I drastically overteach or underteach our material, I'm glad I read Readicide. It challenged enough of my current ideas to send me in new directions, and that's what summer reading is for.
Works Cited
Gallagher, K. (2009). Readicide: How schools are killing reading and what you can do about it. Portland, ME: Stenhouse Publishers.
1. Which is probably, in my opinion, the worst part of NCLB: let's take underfunded schools that are struggling to begin with and threaten to take away more money. That's not counterintuitive, is it? If only sarcasm translated more easily in writing . . . Go back
Tags:
NCLB,
Pedagogy,
Reading,
Regular,
Student Teaching,
Summer Reading
Thursday, August 19, 2010
Chappuis, Chapter 3: Effective Feedback
In Chapter 2, Chappuis writes on how to help students get a clear picture of what they should be learning and how that learning should be demonstrated. In Chapter 3, Chappuis offers five characteristics of effective feedback to keep students on the right path.
1. Effective feedback directs attention to the intended learning
There are two types of feedback for Chappuis: success and intervention. These are straightforward. Success feedback points out what a student has done well while intervention feedback tells them what needs to be corrected or improved.
There are common comments, however, that do not accomplish the goal of effective feedback. Writing "incomplete" or "re-do" or really any grade does not encourage a student to succeed or give them a clear direction to go in. As Chappuis (2009) states, "Providing feedback can be a labor-intensive proposition. If we put all that time in we want to make sure that (1) we're doing it right, and (2) students will use it." How many more students would have corrected an assignment if I had given them one-on-one feedback instead of writing "incomplete" at the top of their paper? Even "see me," I would argue, has such a negative connotation that it is ineffective. How many of your students actually do see you afterwards?
Often it is easy to praise students. "You're so smart!" However, Chappuis notes, this directs the feedback towards the learner rather than towards the work, and what does that make the student who receives the intervention feedback? A study found that "look how hard you tried" is a much more effective comment because students see effort as something within their control, while "look how smart you are" is seen as a personal attribute and unchangeable (Blackwell, Trzesniewski, and Dweck, 2007, as cited in Chappuis, 2009).
Giving grades as part of feedback can be damaging as well. At CSU, Louann Reed taught that there are three terms used for assessment that shouldn't be interchangeable. There's "assessment," the broad term; "feedback," which is written or verbal suggestions given to a student to help them improve their work or let them know they're on the right track; then there are "grades," which contain a value judgment of the work. The reason many students probably go to the last page of their essays for the grade and ignore the comments is that they see the feedback as unhelpful - why take the extra effort to make revisions when the grade's already been given? Even if we say "you can make edits and turn this in for a better grade," how often does that actually happen? Better to give the feedback, have them make changes, then assign a grade.
2. Feedback occurs during learning
Chappuis points out that we teachers often tell students "it's okay to make mistakes," because that's often how we learn. By making a mistake, we know what not to do. But sometimes our grading policies encourage the opposite viewpoint. When quizzes and participation points are graded leading up to the big test, how fair is it to grade both the practice and the test? It's double jeopardy.
Louann's continuum also should go in order. Feedback should always come before grading. Feedback should happen during learning.
3. Feedback addresses partial understanding
Feedback should be given when it can help a student move forward. Giving feedback to a student who needs further instruction would be more frustrating to both student and teacher than helpful. Only when there is partial understanding is feedback helpful.
4. Effective feedback does not do the thinking for the student
The best example I have for this is correcting conventions in an essay. If one has taught sentence fragments, comma splices, and the use of semi-colons, rather than correcting each fragment in completed essays, mark the line where the error occurs and have the student correct it. Students have a task and incentive to use the teacher's comments/feedback to move their learning forward.
5. Effective feedback limits corrective feedback to what students can act on
Differentiate feedback to what an individual student can handle at one point in time.
Methods to offer feedback
Chappuis (2009) suggests using picture and symbol clues (p. 75) like stars and stairs to make comments on student work. Students could also offer their own stars and stairs - what they think they do well and what they need to improve - before the teacher adds their comments. Two-color highlighting (p. 82) is another way for students to self-assess. They highlight the sections of the rubric they believe they meet; then the teacher, using a different color, highlights their own assessment. By using two primary colors, the overlap easily shows the student where they and the teacher are in agreement.
Tuesday, July 27, 2010
Learning Students' Names
I've said it before: the two most important words to anyone are their first and last names.
I've always struggled some with learning and remembering names. Although I've definitely improved this skill as a teacher, I try to start off each year with flash cards to ensure I fully connect student faces with names.
My first two years, I did this by taking pictures of students one by one as they filled out their "getting to know you" sheets on the first day. I always gave students the option of opting out, but at my first real teaching job, the freshmen thought it was creepy I wanted to take their pictures. I ended up with less than half my students on flash cards and spent my first month getting names down.
So the next year I devised an even better strategy. It takes more setup, but in less than one week I can learn every student's first and last names, and I know them before the first day of school. Students know on the first day that I care enough to learn their names, and that if they try to pull anything on that first day I can call them out on it1.
Here's how you go about this if you have eSIS. Open up your eSIS and make sure your classes have students in them. Then click on the blue arrow at the bottom right hand corner of the screen named "Navigate."
This will bring up a small window. Go to the "Reports" tab and double-click on "Demographics,"
then double-click on "Photo Report."
This will open a new window with a number of options. To get a report with just the students in your class, click on "Specify Extract Criteria." This will pull up another window. Under the "Courses" tab, put in your name under "Teacher," then click "Run Extract."
Boom, you have a report with students' names and last year's school photos. Some programs that read .pdf files will let you select the pictures and save them. Acrobat and Foxit do not, so I use a free screenshot program called Screen Hunter 5. I select the pictures and save individual image files of each student with their name as the file name. Of course, it's important to copy the names correctly, or else you'll end up calling Camren, your student, Carmen for the first three days of school.
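If you'd rather not screenshot each photo by hand, a little scripting can pull the images straight out of the saved PDF. Here's a minimal sketch using the PyMuPDF library; the file name, the output folder, and the assumption that the report's embedded images are the student photos are all mine, not anything eSIS guarantees, so treat it as a starting point.

```python
# Sketch: pull the embedded photos out of a saved photo report PDF.
# Assumes the report was saved as "photo_report.pdf"; the output files are
# only numbered, so you still rename each one with the student's name.
import os
import fitz  # PyMuPDF: pip install pymupdf

REPORT = "photo_report.pdf"   # hypothetical file name
OUT_DIR = "student_photos"
os.makedirs(OUT_DIR, exist_ok=True)

doc = fitz.open(REPORT)
count = 0
for page in doc:
    for img in page.get_images(full=True):
        xref = img[0]                      # reference to the embedded image
        pix = fitz.Pixmap(doc, xref)
        if pix.n - pix.alpha > 3:          # convert CMYK and friends to RGB
            pix = fitz.Pixmap(fitz.csRGB, pix)
        count += 1
        pix.save(os.path.join(OUT_DIR, f"student_{count:03d}.png"))

print(f"Saved {count} photos to {OUT_DIR}/")
```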
I then use a program called Virtual Flashcards to create cards with the pictures and names, so I actually have to type out the complete name when I study them. It also displays statistics on your study history after each set and can determine which cards you need to study more. The company that created the program is no longer supporting it, so you can't upgrade from the free version, which is unfortunate because while you can make as many flashcard sets as you like, they are limited to 20 cards and there's a nag screen that comes up while you're studying.
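And if Virtual Flashcards ever stops running entirely, the drill itself is simple enough to script. This is a minimal sketch, assuming a folder of photos named "First Last.png" (my own file-naming convention from above, not something any program requires): it opens each photo, makes you type the full name, and cycles missed cards back into the deck.

```python
# Sketch of a typed-answer flashcard drill over a folder of student photos.
# Assumes each file is named "First Last.png" or ".jpg"; missed cards go
# back to the end of the deck until every name is typed correctly.
import random
from pathlib import Path
from PIL import Image  # pip install pillow

PHOTO_DIR = Path("student_photos")  # hypothetical folder name

cards = [p for p in PHOTO_DIR.iterdir() if p.suffix.lower() in (".png", ".jpg")]
random.shuffle(cards)
misses = 0

while cards:
    card = cards.pop(0)
    answer = card.stem                 # "First Last"
    Image.open(card).show()            # opens the photo in the default viewer
    guess = input("Who is this? ").strip()
    if guess.lower() == answer.lower():
        print("Got it.\n")
    else:
        print(f"Nope - that's {answer}. Back in the deck it goes.\n")
        misses += 1
        cards.append(card)

print(f"Done. Total misses this round: {misses}")
```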
Last year, I studied sporadically through Labor Day weekend (maybe 2 hours a day) and was good to go for the first day of school. If you're crazy enough to try it too, hopefully this post will help.
1. I'm not sure which would be creepier to the freshmen: that I take their pictures, or that I know their names before even meeting them. Hopefully the second one. Go back
Tags:
Classroom Community,
Regular
Monday, July 19, 2010
Chappuis, Chapter 2: Where Are We Going?
In Chapter 1 of Seven Strategies of Assessment for Learning, Chappuis presents readers with her seven strategies:
- Provide students with a clear and understandable vision of the learning target.
- Use examples and models of strong and weak work.
- Offer regular, descriptive feedback.
- Teach students to self-assess and set goals.
- Design lessons to focus on one learning target or aspect of quality at a time.
- Teach students focused revision.
- Engage students in self-reflection, and let them keep track of and share their learning. (Chappuis, 2009, p. 12)
Provide students with a clear vision of the learning target
For the first strategy, Chappuis (2009) gives three steps (p. 22).
1. Share the learning target with students
Many teachers I know, including myself, post learning targets for their students on a whiteboard/blackboard or towards the beginning of class with a digital projector. Yet often we make the mistake (myself included) of using eduspeak:
Today's Learning Goals:
- EL.08.RE.05 Match reading to purpose--location of information, full comprehension, and personal enjoyment.
As soon as the student hits EL.08.RE.05, their attention turns elsewhere. Not to mention the whole "match reading to purpose" bit; I didn't learn what that meant until college, despite the fact I'd been doing it since I started reading.
2. Use language students understand
Instead, it would be much better to say:
Today we will learn:
- How to determine a goal or purpose for reading a text: looking for information, understanding a point of view, or for entertainment.
Chappuis has some pointers when rephrasing a learning target. She suggests stating the goal within the sentence structure "I am learning to _____________" or "we are learning to ___________." She also states that teachers should identify words or phrases that need clarification for their age group and use a dictionary definition to help decide the best way to simplify them.
3. Introduce students to the language and concepts of the rubrics you use
Chappuis encourages the use of rubrics for all the learning goals. Taken at face value, this could seem painstaking at best and insurmountable at worst. However, Chappuis stresses that rubrics should be both general and descriptive; they should be applicable to different assignments and use "language that explains characteristics of . . . performance at increasing levels of quality" (p. 40).
In the past few years in the blogosphere, at least, rubrics have gotten a bad rap. I have at least a couple unpublished drafts that defend the rubric, but I'm forced to make their case here. Good.
The firestorm started shortly after Alfie Kohn published "The Trouble with Rubrics" in the March 2006 edition of English Journal. Kohn correctly admonishes teachers who use rubrics solely because they "make assessing student work quick and efficient, and they help teachers to justify to parents and others the grades that they assign to students" (Andrade 2000 as cited in Kohn 2006). There's plenty wrong with that statement, and one can read about it in Kohn's article.
He continues by stating that rubrics should only make up one piece of assessment because they too exactly spell out just what students need to do in order to squeak by, stifling the pursuit of learning. This, Kohn (2006) argues, is why "all bets are off if students are given the rubrics and asked to navigate by them" (p. 13), author's italics. I agree that this is a common symptom of students in our schools, but I doubt rubrics are the sole culprit. Any form of grading system that tells students they need to receive so many points in order to receive a given grade is asking for trouble. "Smart" students care about grades. The "dumb" ones are smart enough to know that grades don't really matter - what matters is whether what they're learning is relevant enough and they are proficient enough to survive in society.
Rubrics break down complex processes that take years to master into bite-size chunks that help students understand how they can improve. Grading writing (or anything) holistically is acceptable only summatively, and for me that means at the end of the year.
The best, most understandable means of formative assessment for students is that presented in rubric form. As Kohn (2006) says, "it matters whether the objective is to (1) rank kids against one another, (2) provide an extrinsic inducement for them to try harder, or (3) offer feedback that will help them become more adept at, and excited about, what they’re doing" (p. 14).
It would be interesting to get Kohn and Chappuis in a room together; Kohn (2006) writes "any form of assessment that encourages students to keep asking, 'How am I doing?' is likely to change how they look at themselves and at what they’re learning, usually for the worse" (p. 14), my italics. I'll reserve final judgment until I finish Seven Strategies, but in my mind Chappuis is winning.
Her rubrics seem to deal with many of the causes of Kohn's concern. The rubric should avoid evaluative ("excellent organization of thoughts") or quantitative ("thoughts are organized into three paragraphs") language and focus on descriptive ("the writing is organized using any of the various conventions of language [punctuation, sentence fluency, paragraph breaks, headings] to make thoughts easily understandable to the reader") and diagnostic language. Evaluative and quantitative language focus on the work, while descriptive language focuses on the learning.
Use examples of strong and weak work
The creation of rubrics moves fluidly into Chappuis' second strategy.
Showing students examples of strong and weak work allows them to compare their own work to a final product. In addition, Chappuis suggests having students choose weak and strong examples from a number of examples and state why one is strong or weak, either with a set rubric or as a precursor to creating a rubric together as a class.
Chappuis also emphasizes how important it is that the work shown to students is not from their class. In the past I have shown work from students in the classroom, thinking they would not reveal their identity. Unfortunately, this was not always the case. More than once, a student's first response to seeing their work on the screen has been "hey, that's mine!" Whatever lengths the teacher may go to toward making the work anonymous can be unknowingly sabotaged by the very student the teacher is trying to protect.
Instead, Chappuis suggests choosing or creating at least three samples for each trait on a rubric from other classes or previous years - a strong, mid-range, and weak example. Give these samples to the students one at a time, either on the overhead or, better yet, as individual copies. Have them Think-Pair-Share. First, students individually decide how they would score the product on the rubric: they decide if the work is predominantly strong or predominantly weak. If strong, they start on the high end of the rubric and move down; if weak, the opposite; then they settle on the score that meets most of the criteria. Next, they discuss with a partner, using the language in the rubric. The class then votes and gives reasons why they chose the scores they did before the teacher reveals the score they gave the example. The process is then repeated for each remaining example or until students understand the language and concepts in the rubric, whichever comes last.
This would be an excellent way to teach 6-trait writing. I attempted it last year, but stopped when I ran out of examples but before students fully understood the concept. Hopefully this year I have more samples to draw upon.
Works Cited
Chappuis, J. (2009). Seven strategies of assessment for learning. Portland, OR: Educational Testing Service.
Kohn, A. (2006). Speaking my mind: The trouble with rubrics. English Journal, 95(4), 12-15.
Saturday, July 10, 2010
Chappuis, Chapter 1
Last year I had the opportunity to attend a conference for teachers in their first three years. One of the breakout sessions focused on teaching students to self-assess and set learning goals.
The session was led by Jan Chappuis, who wrote Seven Strategies of Assessment for Learning. Self-assessment and goal setting is one of seven strategies Chappuis presents for helping students take ownership of the formative assessment process. Attending the session greatly affected how I taught during the second semester, particularly by providing students with clear learning goals in their own language and offering examples of strong and weak work, Chappuis' first two strategies. As a result, Seven Strategies of Assessment for Learning made my summer reading list.
In the first chapter, Chappuis takes time to emphasize the difference between formative and summative assessment. For her, formative assessment must involve feedback, while the product of summative assessment is a grade in the grade book. She also points out that these two types of assessment are not exclusive of each other. This is probably not news to many readers, but what seems innovative to me is that Chappuis places greater importance on student use of the formative information than on the teacher's use. A teacher can provide remedial instruction, change differentiation grouping, provide individual tutoring, or make any number of other adjustments in response to formative results. Yet often these changes are met with resistance from students: "we already learned this," "why are you putting me in the 'dumb group' now?" or "why do I have to work back here with you when everyone else is reading?" Students who are aware of their formative results and understand what the results say about their understanding of classroom material are more likely not only to readily accept reteaching and remediation but also to meet benchmark. Two instances stand out to me from my previous semester teaching that reinforce this belief.
Tonya Arnold taught me this first trick that completely changed the way I examine test results and what information I share with my students. When I did administer a test, which was rare, the process was very summative: write test questions based on what I think students should know after a unit; review possible test questions in class; administer test; grade test; record grades; return tests and ask if students have any questions; when met with silence, begin the next unit.
Now, every summative test I give is also formative. Every test question I write is individually linked with a specific learning goal from the unit, and there are multiple questions that address each learning goal. The next change comes after I grade the tests, when I use an Excel spreadsheet to tie each and every test question to its learning goal and tally it up to give students and me an individual and class percentage of their performance on each learning goal addressed in that unit. In class, we compare this to the same learning goals students were presented on the pre-test. Bam! Students immediately see, "wow, I scored 12% on such and such learning goal before this unit, but now I'm at 86%! I'm awesome" or "geez, I got the same percentage on such and such learning goal before and after this unit while the class average increased by 60%. Maybe I should actually do all those [not-so] stupid assignments Mr. B gave us . . ."
Admittedly, we still moved on to the next unit after students saw these results, but students had an open invitation to work with me outside of class and retake the test to bring up their scores. No one took me up on it. So I need to be more proactive in getting students to remediate on their own time and provide this feedback throughout the unit. There is room to grow, but this is a great starting place.
Here's a sample spreadsheet in Excel and OpenOffice formats if you'd like to adapt it for your own students. It completely blew my mind as well as the minds of my students - hopefully it will do the same for you.
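If a spreadsheet isn't your thing, the same tally is only a few lines of code. Here's a minimal sketch of the idea; the learning goals, question numbers, and scores below are invented for illustration - the point is just that every question is keyed to a goal, and the individual and class percentages fall out of a couple of loops.

```python
# Sketch: tie each test question to a learning goal and report individual
# and class percentages per goal. All names and scores here are invented.
from collections import defaultdict

# Which learning goal each question assesses (several questions per goal).
question_goals = {1: "make predictions", 2: "make predictions",
                  3: "use context clues", 4: "use context clues", 5: "use context clues"}

# 1 = correct, 0 = incorrect, per student per question.
scores = {
    "Student A": {1: 1, 2: 1, 3: 0, 4: 1, 5: 0},
    "Student B": {1: 0, 2: 1, 3: 1, 4: 1, 5: 1},
}

def goal_percentages(answers):
    """Percent correct on each learning goal for one student's answers."""
    right, total = defaultdict(int), defaultdict(int)
    for question, goal in question_goals.items():
        total[goal] += 1
        right[goal] += answers.get(question, 0)
    return {goal: 100 * right[goal] / total[goal] for goal in total}

# Individual reports
for student, answers in scores.items():
    print(student, goal_percentages(answers))

# Class average per goal
class_results = defaultdict(list)
for answers in scores.values():
    for goal, pct in goal_percentages(answers).items():
        class_results[goal].append(pct)
for goal, pcts in class_results.items():
    print(f"Class average, {goal}: {sum(pcts) / len(pcts):.0f}%")
```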
The second instance of worthwhile feedback that came to mind was our second round of state testing. I presented students with data from their test results, which is automatically broken down by strands. I placed the students in differentiated groups by the strand they scored the lowest on. Each group worked on different tasks for the week - those who had low vocabulary scores received explicit instruction in context clues; students who scored poorly on literary text got to make flash cards of different literary terms. When we took the test, all but nine students improved their overall score, and 15 more students passed the exam, meaning they have one less hurdle to jump in order to graduate.
But the real proof is in the strand scores. Students in differentiated groups averaged a 6 to 10 point increase on their target strand, compared to a 0 to 4 point increase in non-targeted strands. Not only were the activities helpful, but I would argue students saw how those activities would help them pass the test - the activities were directly related to areas they struggled with. As a result, students were more invested in the process.
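For what it's worth, the grouping step is just as scriptable. A minimal sketch with invented strand scores (the state report supplies the real ones): each student lands in the group for whichever strand they scored lowest on.

```python
# Sketch: group students by their lowest-scoring strand. Scores are invented.
strand_scores = {
    "Student A": {"vocabulary": 221, "literary text": 230, "informational text": 228},
    "Student B": {"vocabulary": 233, "literary text": 224, "informational text": 231},
    "Student C": {"vocabulary": 226, "literary text": 229, "informational text": 219},
}

groups = {}
for student, strands in strand_scores.items():
    weakest = min(strands, key=strands.get)   # strand with the lowest score
    groups.setdefault(weakest, []).append(student)

for strand, students in groups.items():
    print(f"{strand} group: {', '.join(students)}")
```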
Next year, I want to do this more than twice. There is an added bonus to giving students their formative assessment results. Chappuis (2009) quotes Sadler (1989):
The indispensable conditions for improvement are that the student comes to hold a concept of quality roughly similar to that held by the teacher, is able to monitor continuously the quality of what is being produced during the act of production itself, and has a repertoire of alternative moves or strategies from which to draw at any given point.
Students understand why the teacher is reteaching and how it will help them, and while learning they monitor the quality of their work - whether it is a strong or weak example, whether it demonstrates proficiency of the learning goal, and whether it helps them make progress. Sounds like a pretty good deal.
Chappuis offers seven strategies to make this happen. I mentioned these earlier this year. As these strategies are the topic of the remainder of Chappuis's book, I'll be posting about them in greater depth in the following weeks:
Where Am I Going?
1. Clear learning targets
2. Models of strong & weak work
Where Am I Now?
3. Offer regular, descriptive feedback
4. Teach students to self-assess & set goals
How Can I Close the Gap?
5. Design lessons that focus on one learning target at a time
6. Revision is focused
7. Students track their progress and self assess
Chappuis, J. (2009). Seven strategies of assessment for learning. Portland, OR: Educational Testing Service.
Wednesday, May 5, 2010
Enthusiasm
My 9th grade students just finished taking the 10th grade Oregon Assessment of Knowledge and Skills1 (OAKS) test. There are a few students still finishing up, but these young scholars kicked the test's posterior clear to the moon. On average, students increased their score by 3 points, but what is really impressive is that 13 of these 54 students passed the 10th grade test and now have one less hurdle to jump before they can graduate high school. I am so proud of them and their hard work.
But tonight I'm wondering how much of a role enthusiasm/motivation/confidence played a part. Is there a sort of placebo effect that helped out my students?
I have this thing that I do before I take students to the computer lab to take the test each day. I don't think it's something I do that affects their performance, necessarily, though the intent is to get them pumped up. But it is definitely a gauge for their level of this-class/test-is-such-a-joke-and-I-don't-care-what-I-get-on-it factor.
"I say 'fired up,' you say 'ready to go!' FIRED UP?"So we yell that, disrupt the classes on either side of us as much as possible (sorry, Dusty), and then we go kill some tests. With a vengeance. Just like John McClain.
"READY TO GO!"
"FIRED UP?"
"READY TO GO!"
"FIRED UP?"
"READY TO GO!"2
But I have three periods. And second period doesn't get so fired up. They don't seem so ready to go. I yell; they mumble. I yell louder; they tell me how "gay" this is. I ask them if they really used "gay" as synonymous with stupid; they say "sorry, sorry, we mean 'dumb.'" Fantastic.
But second period has the lowest average test score for all three periods. Only three students passed in that class, while first period, which most definitely gets fired up and ready to go, boasts eight students who passed the test and an average test score 5 points higher.
Both classes have their negative Nellies, but in first period they aren't negative about getting fired up. Second period has more students who just guessed and blew off the test, but that could be a result of failing to get fired up. I'm just trying to figure out why there's such a big difference when both classes received the same instruction. And if it's something as simple as confidence, how do I give it to all my students, who happen to be sarcastic teenagers? (And just so we're clear, I teach them because of it - read "sarcastic teenagers" as a term of endearment.)
1. As if a test can assess something as broad as knowledge . . . (back)
2. Yes, I did steal that from the 2008 Obama campaign. Not because Obama's awesome, just because it's a really great cheer. I am being completely politically neutral. (back)
Tags:
Assessment,
Reflection,
Regular
Sunday, May 2, 2010
A Conversation With My Wife
Yesterday my partner Jennie and I were walking home from the thrift store. We'd been looking for a pizza slicer. We've been cutting pizza with a knife for almost two years now. They didn't have one. Or a cutting board or a rolling pin - we've been using old wine bottles. But that's not the point of the story.
"So what do you still have to work on?" Jennie asked me. We were trying to figure out when to go to Target to liberate (as in buy) our pizza slicer.
"Well, I need to decide if I'm going to record my students' writing this weekend or not," I said. This solicited a slight internal groan from my partner. We live in a 436 sq ft apartment, and I can either record a reading in the living room or the bedroom. Both locations are audible at different levels throughout the apartment. While my students are producing some great stuff this semester, there's only so many persuasive essays a spouse is willing to listen to in a weekend when she has her own work to complete for grad school. Jennie is always completely supportive and totally tolerate of this process - she gave me the idea of responding to students writing this way after all - but I can imagine it would get old. Still, I persisted.
"I can't decide if I should do it for their rough drafts or later on. I want them to complete some meaningful revision, and I want them to do a peer revision, but they just tell each other that everything in their writing is great."
Jennie nodded sympathetically. I continued.
"It's like I need to teach them to peer revise . . . oh."
Needless to say, I didn't record their writing this weekend.
"So what do you still have to work on?" Jennie asked me. We were trying to figure out when to go to Target to liberate (as in buy) our pizza slicer.
"Well, I need to decide if I'm going to record my students' writing this weekend or not," I said. This solicited a slight internal groan from my partner. We live in a 436 sq ft apartment, and I can either record a reading in the living room or the bedroom. Both locations are audible at different levels throughout the apartment. While my students are producing some great stuff this semester, there's only so many persuasive essays a spouse is willing to listen to in a weekend when she has her own work to complete for grad school. Jennie is always completely supportive and totally tolerate of this process - she gave me the idea of responding to students writing this way after all - but I can imagine it would get old. Still, I persisted.
"I can't decide if I should do it for their rough drafts or later on. I want them to complete some meaningful revision, and I want them to do a peer revision, but they just tell each other that everything in their writing is great."
Jennie nodded sympathetically. I continued.
"It's like I need to teach them to peer revise . . . oh."
Needless to say, I didn't record their writing this weekend.
Tags:
Portfolios,
Reflection,
Regular,
Writing,
Writing Workshop
Wednesday, April 21, 2010
Two of My Colleagues for Your PLN
Please welcome my colleague and mentor Tonya Arnold to the blogosphere. It sounds like she won't be writing exclusively about education, but readers will luckily find it in the mix. From having worked with her, I can recommend those posts unreservedly. (I'm sure the others will make for good reading too.)
I also have yet to mention Dusty Humphrey, who teaches across the hall from me. Daily he invites students into lively, spirited, and enlightening discussion and debates on the literature they read. Be forewarned: if you work for The Man, reading his posts could result in your head exploding.
Tags:
Regular,
St. Helens High School
Thursday, April 1, 2010
Mantra 3
Be less helpful. Don't give your own opinion. If a student asks "is that right?", throw the question back at them. Students should figure stuff out by the conversation they create together, without any sort of pitcher/cup (teacher/student), water/filter (knowledge/teacher) metaphor.
And good metaphors shouldn't have to be explained by the author. Whoops.
See Dan's post for more.
Tags:
Critical Thinking,
mantra,
Pedagogy,
Regular
Friday, March 26, 2010
One Trick Pony
One of the better lessons I've taught was the first in my middle school practicum.
It was a summer course, and so we were assigned to the district's summer school program. It was a group of about 30 students repeating 8th grade. Another student and I were assigned to the two teachers who co-taught the class. Although some of the students were going into 8th grade and wanted a head-start, the majority were not interested in listening to me teach grammar, or anything else. Nonetheless, I was asked to teach my first lesson on sentence fragments and run-ons.
It was the summer. I had some free time, and I was eager to please. So I spent a lot of time planning. I made a pretty decent graphic organizer. I asked my fellow undergrad if she could play a small part in the lesson. I prepared a formal assessment. For my first lesson, I was doing alright. Then I created a superhero alter ego to keep the students' attention.
The day for my lesson came around and we listened to Skee-Lo's version of "The Tale of Mr. Morton" before I presented a few sentences and we identified their subjects and predicates. It was then that my colleague, AKA the Preposition Punk, wearing a clever construction paper mask (she was in drama, if I remember right) snuck up to the board and added "When" to the beginning of a sentence:
When Mr. Morton walked to the store.
I read the complete sentence, now transformed into a dependent clause, to the class and proclaimed, "this looks like a job for CAPTAIN COMMA!" while simultaneously ripping open my button-up shirt (which was no longer button-up; I'd replaced the buttons with velcro) to reveal a t-shirt emblazoned with this shield:
The students were dumbfounded, but I definitely had their attention. I heroically slammed a comma after the dependent clause and added an independent clause to complete the sentence and save the day.
When Mr. Morton walked to the store, he bought a gallon of milk.
I handed out the graphic organizer and had them take a crack at it.
Fast-forward 5 years. I still teach that lesson maybe once a year. But it's the only lesson I teach that has that injection of edutainment right to the jugular. The only one? What's up with that? Not that every lesson needs to have that element of theater to it. It needs all the pedagogy my other lessons have, including things I missed that first time like a preassessment and differentiation. But one lesson a year that keeps students' attention like that one, that creates murmurs throughout the hallways in between periods, that gets students anticipating my class, is nowhere near enough.
Monday, March 15, 2010
Mantra 2
About three weeks ago, I began presenting my students with a learning goal at the beginning of each class period. "Make predictions when reading a text." "Identify and discuss settings, character, conflict and plot in selected micro fiction." "Describe the concept of 'exploding a moment.'"
I attended a workshop at the end of February and one of the sessions was Student Self-Assessment and Goal Setting. The related book, Seven Strategies of Assessment for Learning, has been added to my summer reading list.
The seven strategies are grouped into three overarching concepts:
Where Am I Going?
1. Clear learning targets
2. Models of strong & weak work
Where Am I Now?
3. Offer regular, descriptive feedback
4. Teach students to self-assess & set goals
How Can I Close the Gap?
5. Design lessons that focus on one learning target at a time
6. Revision is focused
7. Students track their progress and self assess (see Dan Meyer and his updated concept checklist)
It was emphasized that the last five won't work without the first two already implemented. But I could do better about telling students exactly what I want them to "get."
So, numbers one and two, check and check. This week's mantra starts on number four:
At the end of each class, revisit the goals and ask students to write on their exit slips how they feel they're doing on each one.
Tags:
Assessment,
mantra,
Regular