
Data Analysis

Pre & Post Benchmark

The Fountas and Pinnell (F & P) benchmarking system is a complete literacy set with select texts, ranging from reading level A to Z, that assess a student's reading accuracy, self-correction, fluency, and comprehension.  The benchmark scores are then converted, using a provided scale, to determine whether a text is independent, instructional, or hard for that student. It is important to instruct students at their instructional level. If a student scores independent at a level, you move to the next level and have them read again.  Level A is the lowest level in the system and uses simple, repetitive phrases and words; as the levels increase, the texts become more difficult. Students need to be at level D by the completion of kindergarten to be considered on level.


I benchmarked my students in October, January, and March.  I used the F & P benchmark system because it is required by the district and because it provided an accurate reading level for each student, which allowed me to monitor their progress toward the end-of-kindergarten goal.  I used the reading levels to place students into four reading groups of like performance, which allowed me to differentiate my teaching points based on their F & P level. The benchmark also provided data I used to identify each student's areas of strength and weakness and what needed to be addressed during small group time.  The F & P benchmark provided valuable information for planning, but it was time-consuming. Each benchmark was completed one on one, and the texts got increasingly longer as students moved up levels. Some students needed to be benchmarked at multiple levels in one session, so I blocked out an entire week to benchmark all 18 students in my classroom. Even though I only benchmarked once each quarter, I was continually evaluating students' progress through running records and moving students to the next level as needed.

[Graph: F & P benchmark levels for each student in October, January, and March]

In the graphs above, there were three data points for each student.  The three columns represented the student's Fountas & Pinnell benchmark level in October, January, and March.  The numbers along the y-axis corresponded to students' F & P text levels: 1 was equivalent to Pre-A, or nonreader, 2 was level A, 3 was level B, and so on.  The data showed that all students grew at least one level from the beginning of the year in October, including students who scored below grade level as well as above. Students C, D, H, J, and R were all benchmarked at the Pre-A level at the beginning of the year; they were still learning letter names and sounds. They all moved to level A or higher. After I began action research, at the second benchmark (the yellow bar), 66% of my students showed growth. I believed this growth reflected both the natural progression of reading development and the purposeful planning implemented during my study.  At the beginning of action research, I used the benchmark to place students into groups of common guided reading levels and planned teaching points based on errors found within the benchmark. For example, my first focus for all groups was sight words and consonant-vowel-consonant (CVC) words. The only students I was concerned about reaching grade level by the end of the year were students C, D, H, and R.  Two of these students received special education services, and the other two received support from the building's reading specialist, so they were receiving guided reading instruction twice a day. I also wanted to be sure to differentiate instruction for all students, even those reading on level. For example, students A, B, K, and M benchmarked on level prior to my action research; my goal for them was to build on their decoding skills and improve their expression when reading.
Using the benchmark data, I recognized that 66% of my students had already met the level D benchmark for the end of kindergarten. The chart below showed the number of students below, on, and above grade level at the end of the action research study.
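The y-axis encoding and the at-goal percentage described above can be sketched in a few lines of code. This is a hypothetical illustration, not the actual class data: the `march_levels` list and both function names are invented for the example, but the 1 = Pre-A, 2 = A, 3 = B numeric scale and the level D goal come from the text.

```python
import string

def level_to_number(level: str) -> int:
    """Convert an F & P text level to the graph's numeric y-axis scale:
    1 = Pre-A (nonreader), 2 = level A, 3 = level B, and so on."""
    if level == "Pre-A":
        return 1
    return string.ascii_uppercase.index(level) + 2  # A -> 2, B -> 3, ...

def percent_at_or_above(levels, target="D"):
    """Percent of students at or above the target level
    (level D is the end-of-kindergarten goal)."""
    goal = level_to_number(target)
    at_or_above = sum(1 for lv in levels if level_to_number(lv) >= goal)
    return int(100 * at_or_above / len(levels))

# Illustrative levels only -- 12 of 18 students at or above level D.
march_levels = ["D", "E", "A", "B", "F", "D", "G", "B", "E", "D",
                "C", "E", "D", "E", "B", "F", "D", "A"]
print(percent_at_or_above(march_levels))  # -> 66
```

With 18 students, 12 at or above level D works out to 66%, matching the share reported above.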

[Chart: number of students below, on, and above grade level at the end of the study]

Running Records

During my action research I collected weekly running records for each student and used them to plan teaching points for the following lessons.  I differentiated for each reading group because the skills they needed to learn and practice differed, as did the texts they were reading.  The graph below displayed the number of sight word errors for each student per week, with each color representing a different student in the class.  A bar was only visible if the student made one or more sight word errors in a given week; the data showed a zero if a student made no errors or was absent and did not have a running record that week. For example, week 5 showed that zero students made sight word errors, because the students had progressed and made none. On the other hand, in week 2, students F, G, I, and R were not represented because they were absent. In week 1, 66% of my students made one or more sight word errors.  After explicit teaching and review of sight words each week, the errors decreased; in week 4, only 22% of my students made sight word errors.  In weeks 4 and 6, I saw a spike for students C, H, and R. These students were in my lowest-level reading group and practiced reading and writing sight words at the beginning of each lesson.  They were able to identify the sight words in isolation, but not while reading text. I also noticed that these students had trouble tracking and often skipped a word or said a different word that fit contextually, causing a sight word error. This data showed that by implementing sight word practice within my guided reading groups, students were making fewer sight word errors during reading.
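The weekly tally described above can be sketched as follows. This is an assumed illustration, not the real running record data: the `week1` list and the function name are invented, but the rule comes from the text, where a student counts toward the percentage only with one or more sight word errors, and absences (shown here as `None`) appear as zeros on the graph.

```python
CLASS_SIZE = 18  # number of students in the classroom, per the text

def percent_with_errors(weekly_errors):
    """Percent of the class with at least one sight word error that week.
    An entry of None marks a student who was absent (no running record),
    which shows as a zero on the graph, just like an error-free week."""
    with_errors = sum(1 for e in weekly_errors if e is not None and e >= 1)
    return int(100 * with_errors / CLASS_SIZE)

# Illustrative counts only -- 12 of 18 students with errors, as in week 1.
week1 = [1, 2, 0, 1, 3, 1, 0, 2, 1, 0, 0, 2, 1, 0, 1, 2, 0, 1]
print(percent_with_errors(week1))  # -> 66
```

A week in which every student made zero errors, like week 5 above, would simply return 0.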

[Graph: sight word errors per student per week]

BLUE GROUP (LEVEL A/B)

At the beginning of the year, this group was unable to identify letter names or sounds.  During week 1 of action research, we worked explicitly on letter sounds and sight words, using a variety of hands-on manipulatives to learn the letter sounds.  For example, each child had a pan of magnetic letters; I would say a sound, and they would select the letter and move it to the other pan.  When finished, they had to identify each letter and match it to the correct spot. We also used dry erase boards, tiles, and interactive games. Because of purposeful planning and practice with letter sounds, students made zero letter sound errors when decoding CVC words.

PURPLE GROUP (B/C)

With the purple reading group, I noticed that students were using picture clues as a strategy when decoding and not paying attention to the letters in the words.  After week 1, we spent a lot of time teaching the strategy "lips the fish": when students got to a word they did not know, they said the first letter sound to get started.  In week 1, prior to this strategy being taught, each student in this group made at least two beginning sound errors when reading. The graph showed a significant decrease by week 2, when no student made more than one beginning sound error.  In week 3, student C showed an increase from zero errors to one and then back down to zero; that error was a b/d reversal, and I provided the student with an alphabet chart to check the letter and fix the sound.  Students A and D had a break in the data because they were absent for an entire week with illness. When they returned, I saw a spike in the data in week 5. This could be because they missed a week of instruction, or because we switched from a level B to a level C text in week 4, while they were absent. Overall, the data showed a decrease in beginning sound errors because students were using the "lips the fish" strategy they had been taught.

[Graph: beginning sound errors for the purple group]

YELLOW GROUP (D/E)

[Graph: yellow group data]

GREEN GROUP (F/G)

With the green group, the students were reading more than three text levels above grade level.  They were able to decode words using the strategies taught at the beginning of the year, so I wanted to work with these students beyond decoding.  When completing running records in week 1, I noticed that the students read at a very fast pace and did not pay attention to dialogue or punctuation marks.  In week 2, I introduced how readers should change the way they read a sentence based on the punctuation at the end; I modeled this for students and then had them try. By week 3, I had modeled how readers could change their voice for different characters in the text.  For example, in a text we read called Lost Cat, the students changed their voices when mom and dad were speaking.  This continued from text to text, and by the end of my data collection all four members of the group were noted to be reading with improved expression.

The yellow group was reading just above grade level, and the words used in their texts were more difficult.  I taught and modeled a variety of strategies to help them decode new words, introducing a new strategy each week during the action research study: eagle eye, lips the fish, chunky monkey, and stretchy snake.  From the running records, I noted that when students reached a word they did not know, they used one of these strategies 80% of the time.  For the other 20% of errors, students guessed or said a completely different word that did not come from any of the strategies. The data suggested that students had a library of strategies they could draw on when decoding new text as they progressed through the reading levels.

[Graph: decoding strategy use, yellow group]

CLASSROOM CLIMATE SURVEY

[Circle chart: student responses on whether students behaved in class]

The circle chart above showed that not all students agreed that students behaved in class.  In my classroom, there were students with identified behavioral plans. This data told me that students recognized that other students' behaviors were not appropriate and impacted their learning.  When I analyzed this data point in relation to my action research, I examined how student behavior impacted my guided reading instruction. When students displayed behaviors that did not meet expectations during small group, it took away from instruction time, because I had to stop instruction to address the behavior.  When the behavior came from outside my small group, from a student working independently, I stopped my teaching and provided my group with fluency passages to continue practicing while I was away addressing the behavior.  I was fortunate that my students could be flexible in the classroom, but they were not receiving intentional, dedicated instruction during these occurrences. On the other hand, when the behavior occurred within my small group, it affected not only the other students' ability to focus on instruction but also the misbehaving student's own learning.  For example, for student C in the data shown above, we did not see as much growth or as consistent data; the data is skewed and inaccurate due to disruptions and behaviors displayed by the student. Being aware of student behavior allowed me to build a supply of resources the students were familiar with and could practice independently to continue learning while I stepped away.  I also worked with other professionals to set expectations for how these behaviors should be handled when they occurred. By being prepared for the possibility of misbehavior, I was able to make up for the time it cost.

TRIANGULATION OF DATA

During my research, I used the data collected from the pre-benchmark to select my teaching points for the first week of instruction.  I then collected running records weekly to monitor student progress and make decisions about text levels for each group.  I noticed variations in the data collected based on the teaching points being used in each group.  The classroom survey showed that students noticed misbehavior in my classroom. In a running record collected in week 6 for a specific student in the blue reading group, the data was inaccurate because the student's behavior caused breaks in the reading and ultimately resulted in ending the text early.  The climate survey helped explain the F & P levels of specific students whose reading was impeded by behavior: this student had trouble attending to the text when reading and got frustrated when decoding, which explained his below-grade-level benchmark.  By using the running records from week to week, I was able to gauge where my students would benchmark for an instructional level. The post-benchmark allowed me to reevaluate groups, make changes as needed based on instructional text level, and decide what the next area of focus should be for each group to continue increasing reading achievement. The weekly running records enriched the data collected from the F & P benchmarks by providing specific details on each student's reading behavior and by showing growth throughout the research, which supported the students' overall growth at the end. The post F & P benchmark confirmed the data I collected throughout action research: the instructional level each group had been working on, based on running records, was confirmed or exceeded by the benchmarks.


REMAINING QUESTIONS

During my action research, I noticed variations in student data.  For example, looking at the graph for sight word errors, student C showed a spike in errors in week 4.  I looked at the running records for each week, and in week 4 student C made the same error six times in a repetitive text.  I did not correct the error for him because he did not address it himself.  This student had a strong visual memory and had memorized sight words in isolation, but was still progressing toward reading them consistently in context.  Another question that arose during the research was why dips occurred in students' running record scores.  Looking specifically at the beginning sound graph, student C showed a spike in errors from week 3 to week 5. I believed this could be for one of two reasons: we moved up in reading levels in week 4 while this student was absent, or the student was not consistently checking the beginnings of words.  After I analyzed the data collected during action research, I noticed that many students were able to use the strategies taught as a first step in decoding unknown words, but they were still making errors.  Going forward, I will teach students to look at both the beginning and ending sounds to make sure the word is correct, and that the word has to make sense. If the word is dog and the student says dive after seeing the first sound, I will encourage them to look at the picture and read the sentence as a whole to check for meaning.
