One of the hallmarks of middle level education is the process of understanding students and responding to their needs. In recent years, researchers have stressed the role of assessment in this process. As expectations for accountability have increased, suggestions for improving assessment have proliferated to the point that many educators feel overwhelmed. To help focus attention on central issues, authors of a recent Association for Middle Level Education (AMLE) research summary reviewed investigations of the uses of assessment in middle level education (Capraro et al., 2011). Their report emphasized the essential role of formative assessment in providing data that teachers can use to guide and manage student learning. They concluded that four aspects of formative assessment were fundamental: (a) questioning, (b) feedback, (c) peer assessment, and (d) self-assessment. This research summary extends that general analysis of assessment by examining ways that successful teachers incorporate formative assessment with feedback into their instructional practices.
Connecting Assessment and Instruction
Tenets of This We Believe addressed:
- Varied and ongoing assessments advance learning as well as measure it.
Comprehensive reviews of research on formative assessment have emphasized that it occurs continuously in the flow of instruction (Ruiz-Primo, 2011; Shavelson et al., 2008). Assessments range from spontaneous and informal to comprehensive and formal. “Where a particular formative assessment practice falls on the continuum depends on the amount of planning involved, its formality, the nature and quality of the data sought, and the nature of the feedback given to students by the teacher” (Shavelson et al., 2008, p. 300).
Shepard, Hammerness, Darling-Hammond, and Rust (2005) defined formative assessment as “assessment carried out during the instructional process for the purpose of improving teaching or learning” (p. 275). The key to formative assessment effectiveness is the extent to which teachers can use insights from assessments to guide instruction and provide feedback. Shepard and her colleagues identified the essential elements of the formative process by drawing on three basic questions from Atkin, Black, and Coffey (2001): “(a) where are you trying to go? (b) where are you now? and (c) how can you get there?” (p. 278).
In an extensive review of the literature on formative assessment and learning processes, Black and Wiliam (2009) identified five strategies essential to the integration of assessment with instruction:
- Clarifying and sharing learning intentions and criteria for success.
- Engineering effective classroom discussions and other learning tasks that elicit evidence of student understanding.
- Providing feedback that moves learners forward.
- Activating students as instructional resources for one another.
- Activating students as the owners of their own learning. (p. 8)
While successful practices may vary by subject and teaching style, good formative assessment requires a clear sense of what the lesson is trying to accomplish and an accurate interpretation of students’ responses that reveals what they know at that moment. The other essential requirement is feedback.
Assessing Understanding and Providing Feedback
Shepard and her colleagues (2005) noted, “One of the oldest findings in psychological research (Thorndike, 1931/1968) is that feedback facilitates learning” (p. 287). As evidence of the importance of feedback, they referred to a comprehensive meta-analysis by Kluger and DeNisi (1996) showing that the average effect size of feedback on achievement was .40 (as cited in Shepard et al., 2005). Successful teachers understand the importance of feedback and use teachable moments in lessons as opportunities to guide learning. As Black and Wiliam (2009) emphasized:
In formulating effective feedback, the teacher has to make decisions on numerous occasions, often with little time for reflective analysis before making a commitment. The two steps involved, the diagnostic in interpreting the student contribution in terms of what it reveals about the student’s thinking and motivation, and the prognostic in choosing the optimum response: both involve complex decisions, often to be taken with only a few seconds available. (p. 13)
Shepard and colleagues stressed the importance of creating a supportive climate for learning in which students trust the teacher and each other to provide constructive feedback.
This means that feedback must occur strategically throughout the learning process (not at the end when teaching on that topic is finished); teacher and students must have a shared understanding that the purpose of feedback is to facilitate learning; and it may mean that grading should be suspended during the formative stage. Given that teachers cannot frequently meet one-on-one with each student, classroom practices must allow for students to display their thinking so the teacher will be aware of it, and for students to become increasingly effective critics of their own and each other’s work. (p. 288)
Types of Feedback
In their instructional practices, successful teachers create “moments of contingency,” in which they can find out what students understand and do not understand about specific concepts (Black & Wiliam, 2009, p. 10). A study by Heritage, Kim, Vendlinski, and Herman (2009) demonstrated how complex the formative process is, even for veteran teachers. Experienced participants in an intensive professional development course on teaching mathematics could readily identify the elements of mathematical problem solving involved and the gaps in students’ understanding. Almost all of the participants struggled to identify options for intervention, however.
Written feedback. The impact of written feedback varies considerably, depending on the instructional context (Bruno & Santos, 2010; Werderich, 2006). To examine the specific characteristics of written feedback that promoted better learning, Bruno and Santos (2010) analyzed content-specific interactions in eighth grade physics and chemistry classes in a Portuguese middle school. The researchers identified three categories of written feedback: corrections, explanations, and complements. Corrections included comments such as “You wrote that a solid appeared. Is there a formation of a solid in every chemical reaction? Try to give a general definition of chemical reaction” (p. 114). Feedback asking for explanations included comments such as “Why do you think water is a good solvent?” and “How do we know if the velocity of a chemical reaction is greater or lower?” (p. 114). An example of feedback asking students to complement an answer is “Analyze the formulated hypotheses and the experiments done by the other groups and try to identify other factors that have been maintained constant” (p. 114).
Bruno and Santos (2010) found that written feedback that asked for explanations had the least impact on improving student learning. The timing of the feedback and the consistency of feedback were crucial to students’ use of it to improve on their experiments. The researchers concluded, “The success of feedback depends on the teacher’s knowledge of the difficulties, skills, and personality of each student regarding a particular situation” (p. 119).
Assessment conversations. Although most teachers use group discussions to learn more about students’ understanding, the most successful teachers do so with clearer focus and more sophisticated reflection. Ruiz-Primo (2011) studied the nature of assessment conversations extensively in a series of reports. She concluded that successful teachers develop assessment conversations using their knowledge of subject matter and of students. More specifically, successful assessment conversations incorporate two central elements:
- Clarifying learning goals. The learning goal or target that is in the teacher’s mind has the potential to be included as part of an assessment conversation through the following actions:
—Reminding the students about the learning goal or learning target during the conversation
—Reminding students about the purpose of an activity
—Connecting the discussion (conversation) to the learning goal
- Collecting/eliciting information: questioning. The strategies used to collect information that makes students’ thinking evident to the teacher are critical in determining the nature of the dialogue. Questioning is a strategy that can easily be used to start and continue an assessment conversation. For assessment conversation purposes, teachers’ questions should be:
—Open-ended
—Tapping into diverse types of knowledge. (Ruiz-Primo, 2011)
A report by Ruiz-Primo and Furtak (2006) provided several detailed examples of assessment conversations. In one illustration, the teacher was leading a discussion with students about their interpretations of data related to displaced volume. When a student noted that objects floated better in saltwater, the teacher began a demonstration using beakers and hardboiled eggs. After she asked students to observe and repeat the experiment in different ways, students agreed that eggs float in saltwater and sink in tap water. A student then shared an example of floating in the ocean at a beach, and the discussion continued:
Teacher: What does salt have to do with it?
Student: Salt makes it more dense.
Teacher: What does that mean, salt makes it more dense?
Student: Thicker.
Teacher: Thick is kind of, not a science word but, it’s a description. Thicker. Let’s go back to what Sandy was saying. She said something about, it has more what in it?
Student: Matter. (Ruiz-Primo & Furtak, 2006, p. 224)
As Ruiz-Primo and Furtak noted, this teacher anticipated the example of saltwater, used students’ comments to show their thoughts, and provided a prompt that led to a working explanation of density. (See the annotated reference for more information.)
Conclusion
With so much information available on topics related to assessment, middle level educators may turn to research conducted with successful teachers to identify the most important elements of the process. This research summary draws from comprehensive reports and empirical research to provide a sense of direction. Although this summary synthesizes only a small portion of the research, priorities for practice seem clear. When teachers want to integrate assessments with lessons most effectively, they need to begin by sharpening their answers to the three central questions: Where are you trying to go? Where are you now? How can you get there? With a more detailed developmental progression of concepts in mind, teachers can plan for moments of contingency in their lessons, generating questions they can use to elicit students’ understanding more precisely. Stronger content analyses and more focused questions can then frame more effective feedback, especially when the classroom climate is supportive.
With these three questions clearly in mind, almost any classroom activity can provide an opportunity for informal assessment. The types of assessment conversations that Ruiz-Primo (2011) has chronicled clearly require advance preparation. Less formal conversations can also provide glimpses of students’ thinking and context for feedback when teachers have internalized progressions of concepts and the questions that accompany them. Informal written assessments such as short essays or tickets out the door can serve as prompts for brief conversations. Work samples can provide context for debriefing discussions in which students articulate their ideas and teachers offer feedback. Understanding these key elements may thus provide a working framework for connecting assessment and instruction in teachers’ classroom practices.
References
Atkin, J. M., Black, P., & Coffey, J. (2001). Classroom assessment and the National Science Education Standards. Washington, DC: National Academies Press.
Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21, 5–31.
Bruno, I., & Santos, L. (2010). Written comments as a form of feedback. Studies in Educational Evaluation, 36(3), 111–120.
Capraro, R. M., Roe, M. F., Caskey, M. M., Strahan, D., Bishop, P. A., Weiss, C. C., & Swanson, K. W. (2011). Research summary: Assessment. Retrieved from http://www.amle.org/Research/ResearchSummaries/Assessment/tabid/2580/Default.aspx
Heritage, M., Kim, J., Vendlinski, T., & Herman, J. (2009). From evidence to action: A seamless process in formative assessment? Educational Measurement: Issues and Practice, 28(3), 24–31.
Ruiz-Primo, M. A. (2011). Informal formative assessment: The role of instructional dialogues in assessing students’ learning. Studies in Educational Evaluation, 37(1), 15–24.
Ruiz-Primo, M. A., & Furtak, E. M. (2006). Informal formative assessment and scientific inquiry: Exploring teachers’ practices and student learning. Educational Assessment, 11(3 & 4), 205–235.
Shavelson, R. J., Young, D. B., Brandon, P. R., Furtak, E. M., Ruiz-Primo, M. A., Tomita, M. K., & Yin, Y. (2008). On the impact of curriculum-embedded formative assessment on learning: Collaboration between curriculum and assessment developers. Applied Measurement in Education, 21, 295–314.
Shepard, L., Hammerness, K., Darling-Hammond, L., & Rust, F. (2005). Assessment. In L. Darling-Hammond & J. Bransford (Eds.), Preparing teachers for a changing world (pp. 275–326). San Francisco, CA: Jossey-Bass.
Werderich, D. E. (2006). The teacher’s response process in dialogue journals. Reading Horizons, 47(1), 47–74.
Annotated References
Ruiz-Primo, M. A., & Furtak, E. M. (2006). Informal formative assessment and scientific inquiry: Exploring teachers’ practices and student learning. Educational Assessment, 11(3 & 4), 205–235.
Researchers reported the results of a case study conducted with four middle school science teachers using assessment conversations. Teachers interpreted students’ evolving understanding of concepts related to mass, volume, and density and used these insights to modify instruction. Assessment conversations were structured as four-step cycles: the teacher elicits information with a question, the student responds, the teacher recognizes the student’s response, and then the teacher uses the information to guide student learning (ESRU). As part of a larger quasi-experimental study of matched pairs of teachers addressing the same content with differing approaches, the researchers videotaped 49 lessons across four connected units to evaluate teachers’ use of the ESRU cycle in lesson conversations. Pre/posttests of conceptual understanding and embedded assessments provided measures of achievement. The most sophisticated uses of the ESRU cycle featured metacognitive questions such as “Why do you think so?” as well as questions that elicited comparisons of explanations with feedback about thought processes. Analysis of gain scores across the units showed that students made the highest gains with teachers who enacted assessment conversations more consistently.
Note: This article includes specific examples of conversational dialogue for readers who want to understand assessment conversations in more detail.
Chin, C., & Teou, L. (2009). Using concept cartoons in formative assessment: Scaffolding students’ argumentation. International Journal of Science Education, 31(10), 1307–1332.
Created by Keogh and Naylor (1999), concept cartoons are a strategy for initiating student discussion and debate: cartoon characters engage in dialogue about specific scientific topics. Visually engaging cartoons can represent the scientific knowledge and common misperceptions associated with a particular concept. In this study, the researchers examined students’ dialogue and written responses to four different concept cartoons. Students aligned themselves with one of the characters and explained why they did so, either verbally or in writing. The researchers recommended the strategy as a way of scaffolding assessment conversations in which students justify their positions verbally. In addition, the researchers found that the discussion templates created for this study gave structure and focus to the scientific conversations among students.
Note: This study gives specific examples of concept cartoons, discussion templates, students’ drawings, whole-class teacher questioning, and other tools used to scaffold the small-group discussions.
Reference
Keogh, B., & Naylor, S. (1999). Concept cartoons, teaching and learning in science: An evaluation. International Journal of Science Education, 21(4), 431–446.
Young, V. M., & Kim, D. H. (2010). Using assessments for instructional improvement: A literature review. Education Policy Analysis Archives, 18(19). Retrieved from http://epaa.asu.edu/ojs/article/view/809
Young and Kim reviewed more than 100 empirical studies, conducted between 1980 and 2008, of teachers’ use of assessments. Results underscored the complexity of assessment processes. Over time, teachers have gained greater access to data sources, and administrators have placed greater emphasis on using data to guide instruction. How well teachers accomplish this goal varies considerably, however. Key factors are individual (teachers’ understanding of student learning and subject matter, teachers’ capacity for interpreting and using information) as well as contextual (the nature of data systems, access to expertise, time for studying data, the relevancy of the data itself). Evidence suggested that, in spite of increasing emphasis on assessment, teachers develop assessment practices primarily through trial and error. The researchers concluded, “Whether and how data inform instruction depends on teachers’ assessment practices, the data that are relevant and useful to them, the data they typically have access to, and their content and pedagogical knowledge” (p. 1).
Recommended Resources
Assessment and Accountability Comprehensive Center. (n.d.). Formative assessment. Retrieved from http://www.aacompcenter.org/cs/aacc/print/htdocs/aacc/formative_assessment.htm
Dodge, J. (2009). 25 quick formative assessments for a differentiated classroom. New York, NY: Scholastic.
Gregory, G. H., & Chapman, C. (2007). Differentiated instructional strategies: One size doesn’t fit all. Thousand Oaks, CA: Corwin Press.
Heritage, M. (2010). Formative assessment: Making it happen in the classroom. Thousand Oaks, CA: Corwin Press.
Tomlinson, C. A. (2003). Fulfilling the promise of the differentiated classroom. Alexandria, VA: Association for Supervision and Curriculum Development.
Authors
David Strahan is the Taft B. Botner Distinguished Professor in the School of Teaching and Learning at Western Carolina University. He is a member of AMLE’s Research Advisory Committee and Executive Advisor for the AERA Middle Level Education Research Special Interest Group. His major research interests are responsive instructional practices and teachers’ professional growth.
Carrie Rogers is an Assistant Professor in the School of Teaching and Learning at Western Carolina University. Her interests are examining the learning trajectories of teacher candidates and the development of agency and leadership capacity of novice teachers.
Citation
Strahan, D., & Rogers, C. (2012). Research summary: Formative assessment practices in successful middle level classrooms. Retrieved from http://www.amle.org/BrowsebyTopic/Research/Article/TabId/198/ArtMID/696/ArticleID/108/Formative-Assessment-Practices.aspx