Progress Monitoring

Discussion in 'General Education' started by NHteacher16, May 29, 2012.

  1. NHteacher16 (New Member), May 29, 2012

    I am a graduate student working towards my degree in elementary education. Quick question for all the teachers out there. There has been a lot of talk about using programs such as AimsWeb, NWEA, Renaissance Learning and Symphony Learning to do progress monitoring and RTI testing. What are your opinions on these programs? Out of the information/data you get back, what is the most useful to you as a teacher? Thank you!
     
  3. jwteacher (Cohort), May 29, 2012

    My district uses AimsWeb. I think it does a decent job of measuring a student's fluency and comprehension progress over time (when students are benchmarked weekly); I just wish it were a more intuitive program.
     
  4. giraffe326 (Virtuoso), May 29, 2012

    We use AIMSweb and mClass (Reading 3D and DIBELS Next). I don't think maze/Daze tests are a true test of comprehension. Beyond that, I have no complaints about the programs themselves; my real complaint is that my kids don't take them very seriously.
     
  5. knitter63 (Groupie), May 29, 2012

    We use Aimsweb, and I do not think it gives an accurate picture of what the kids can or cannot do. For example, I had a student this year who did not like being pulled into the hallway to read with complete strangers (other teachers from the district she was not familiar with). She scored poorly, and the administrators immediately put her into tutoring to help her pass the state test.
    In 4th grade, she scored at the advanced level in reading with NO tutoring whatsoever. On all the other benchmark testing we give, she scored at the advanced level.
    This poor girl missed recess 3 times a week for tutoring she did not need. And guess what? When the tutor exit-tested her, she said, "Hmmm, K is reading at a level 70 (2 years above grade level). I didn't know she read so well!"
    I had no words for her.
     
  6. waterfall (Maven), May 29, 2012

    I agree...the MAZE is such a simple assessment; all they have to do is pick the correct word! To me, that is vocabulary knowledge, not reading comprehension. They could have absolutely no clue what the story is about and still get a perfect score because they know which word sounds right in the sentence. However, we have been unable to find a quick, standardized measure that actually does measure comprehension. For some kids we've had to resort to having them read a story and answer 10 comprehension questions, but that is a ton of extra work for the teacher, since she has to make everything herself, and it isn't standardized since each teacher is making up her own. There are other standardized measures, but they're longer and more involved - not something that can be done quickly as progress monitoring.

    Besides the MAZE factor, I like aimsweb as a system a lot. Once you understand the program, it's very easy to track a student's progress since it does most of the work for you - you just have to plug in scores, and it will make a graph that shows their rate of progress. I like the math assessments a lot. This is great, easy evidence to collect for RtI. Unfortunately, many teachers in my building find the program overwhelming and won't really use it to its full capacity (they won't plug in the scores to create the graph). Once you know how to do it, it's really easy, but it can look like a lot at first.

    I have my students keep track of their own progress on sticker charts as well, so I've never had any problems with motivation to complete the test or do well on it - believe it or not, they LOVE progress monitoring days. They'll do anything for a sticker! This year I had planned to do something else for 4th-5th grade, thinking they wouldn't buy into the sticker thing, but when they saw my charts set up for the younger kids they were all saying, "Where are my stickers???" We stopped collecting data last week since we had to turn everything in, and they were really mad that they didn't get to do their progress monitoring tests on Friday like we usually do.
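
    (For anyone wondering what the "rate of progress" on those graphs actually is: it's typically just the slope of a trend line fit through the weekly scores. Below is a minimal Python sketch - illustration only, not AIMSweb's actual code, and the numbers are made up.)

        # Illustration only -- not AIMSweb's actual method. A progress-monitoring
        # "rate of improvement" is commonly the least-squares slope of the
        # weekly scores (here, words correct per minute). The data are made up.

        def rate_of_improvement(weeks, scores):
            """Return the least-squares slope: points gained per week."""
            n = len(weeks)
            mean_w = sum(weeks) / n
            mean_s = sum(scores) / n
            num = sum((w - mean_w) * (s - mean_s) for w, s in zip(weeks, scores))
            den = sum((w - mean_w) ** 2 for w in weeks)
            return num / den

        weeks = [1, 2, 3, 4, 5, 6, 7, 8]          # weekly probes
        wcpm = [42, 45, 44, 49, 51, 50, 55, 58]   # words correct per minute

        slope = rate_of_improvement(weeks, wcpm)
        print(f"Rate of improvement: {slope:.2f} WCPM per week")
        # A common RtI decision rule compares this slope to the slope needed
        # to reach the end-of-year benchmark (the "aimline").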
     
  7. giraffe326 (Virtuoso), May 29, 2012

    We don't have a choice- we HAVE to input scores. I got a nasty e-mail from our Title I teacher once because I was 2 weeks behind entering the pm scores. She apparently needed to know the child's progress RIGHT THEN.
    The TRC through mCLASS's Reading 3D program is a bit better for comprehension, but it can take f.o.r.e.v.e.r. to test each kid.
     
  8. EdEd (Aficionado), May 30, 2012

    I'm not sure this is a problem with AIMSweb so much as a problem with administration. Any test or assessment can be invalidated if it's given under the wrong conditions, or if some other variable invalidates the results. That doesn't mean the assessment is bad, just that it wasn't used the right way. It's like saying a car is a bad car because someone drove it on a dirt road and got a flat tire.
     
  9. EdEd (Aficionado), May 30, 2012

    The topic of maze has come up before, and I think it's important to take it for what it's worth. It's definitely not an assessment that will find a child's "ceiling" in terms of comprehension, and it's very possible to score well on it and still have significant difficulty with comprehension. However, it does an okay job of assessing whether a child has at least a minimal amount of comprehension skill.

    Part of the inherent problem with measuring comprehension is that it isn't as "tight" a construct as reading fluency. While both have component skills, reading fluency can be expressed with a behavioral description (e.g., x words correct per minute) and involves subcomponents that build up to reading fluency as a broader skill. Comprehension, on the other hand, is a category of varied skills that aren't dependent on one another for success. In other words, you could be great at summarizing a paragraph but bad at identifying cause and effect, and vice versa. In addition, there are skills involved that aren't directly related to reading but are more general thinking/analytical skills. For example, cause and effect isn't really a skill that's only relevant to reading; it's a critical thinking skill that exists independent of reading (i.e., you could ask a child to identify cause and effect in an orally administered passage).

    All that is to say that we'll likely never see a singular assessment for comprehension like we do for oral reading fluency, so we have to use a variety of assessments to get different pictures of what's going on. Each one of those assessments will necessarily be limited and not provide a full picture, but taken together each can provide useful information.

    Also, to address the comment about maze assessing vocab rather than comprehension, a few thoughts: first, the "big 5" areas of reading aren't completely separate, so part of comprehending a passage involves vocabulary. To the extent that vocab is or isn't understood, comprehension will be affected. Because they're so integrated, vocab will most likely always be a factor contributing to a child's score on any measure of comprehension. That maze reflects this is not a limitation of AIMSweb or maze passages, but a feature of almost any comprehension assessment.

    More specifically, on the vocab vs. comprehension issue in maze: identifying the correct word to fit in a passage does measure comprehension, in that a child must accurately comprehend at least the sentence or phrase containing the word in order to identify it. A properly constructed maze passage should include at least one distractor that fits the passage grammatically but not semantically, which reduces the chance that the child can select a word simply because it sounds grammatically right.
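
    (To make that concrete, here is a rough Python sketch of how maze-style items could be represented and scored as a simple count of correct selections. The items are borrowed from examples that come up later in this thread; the data structure and scoring rule are illustrative assumptions, not AIMSweb's published procedure.)

        # Illustrative sketch of maze-style items and scoring. The simple
        # "count of correct selections" rule is an assumption for discussion,
        # not AIMSweb's published scoring procedure.

        # Each item: a sentence frame, three choices, and the correct word.
        # In the second item, "raced" fits grammatically but not semantically,
        # which is what makes the item a comprehension check rather than a
        # pure grammar check.
        items = [
            {"frame": "Then he ___ the children go off to school.",
             "choices": ["finished", "fishbowl", "watched"], "answer": "watched"},
            {"frame": "The green car ___ at the red light.",
             "choices": ["stopped", "raced", "fast"], "answer": "stopped"},
        ]

        def score_maze(items, selections):
            """Count correct selections (one selection per attempted item)."""
            return sum(1 for item, pick in zip(items, selections)
                       if pick == item["answer"])

        print(score_maze(items, ["watched", "raced"]))  # -> 1 correct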
     
  10. waterfall (Maven), May 30, 2012

    I've never had a kid who could read fluently score poorly on the MAZE. Of course, kids who can't really read don't do well, but that's obviously because they can't read it. Here's a sentence from one of our 3rd grade probes: "Then he (finished, fishbowl, watched) the children go off to school." I think being able to tell that "Then he finished the children go off to school" or "Then he fishbowl the children go off to school" doesn't sound right is an extremely low-level skill, not even touching the level of what they'd be asked to do in class.
    I think having to read a passage and answer questions would be a much better measure of their ability, but we haven't been able to find anyone who has published a standardized version of that. For kids we want to identify for reading comprehension, we end up having to make our own measures and progress monitor with those to show lack of growth, since they all do great on MAZE. I just got one kid in by having him read Reading A-Z stories and answer the comprehension questions that follow - he was above grade level on MAZE but couldn't do the Reading A-Z tests at all (never got above a 3/10), even at 2 grade levels below his actual grade level. I made my own graph of his scores, and it took him 30-40 minutes to complete this each week, which was frustrating for everyone involved. It was the only thing we could think of, though, that was somewhat standardized (not teacher created) and would actually show his lack of skills. Had we just used MAZE, he wouldn't have qualified for a sped evaluation based on his data. We just did his meeting last week, and he tested in the 6th percentile for reading comprehension.
     
  11. Lisabobisa (Companion), May 30, 2012

    I have had students who read fluently but score poorly on the MAZE. One student reads at the 90th percentile for fluency but is at the 10th percentile for MAZE comprehension. Granted, I'm not advocating the MAZE as the end-all, be-all of rating fluency. I saw a test being given (DIBELS possibly, but don't quote me on that) that had the student read a passage and then retell what they read. The teacher marked how many words correctly retold the story. I liked that version of testing comprehension... too bad I can't remember what it was.
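
    (If it helps to picture that kind of retell scoring, here's a rough Python sketch: count the words in the student's retell that tie back to the passage. This is an approximation for illustration only; the actual DIBELS retell measure is scored live by the examiner, with judgment calls that simple string matching can't make.)

        # Rough illustration of retell-style scoring: count words in the
        # student's retell that also appear in the passage (ignoring very
        # common words). Approximation for discussion only; actual DIBELS
        # retell scoring is done live by an examiner, not by string matching.

        STOP_WORDS = {"the", "a", "an", "and", "to", "of", "it", "was", "is",
                      "in", "on", "he", "she", "they", "then", "some"}

        def tokenize(text):
            return [w.strip(".,!?\"'").lower() for w in text.split()]

        def retell_score(passage, retell):
            passage_words = set(tokenize(passage)) - STOP_WORDS
            return sum(1 for w in tokenize(retell) if w in passage_words)

        passage = "Then he watched the children go off to school. The bus was late."
        retell = "A man watched some children go to school and the bus came late."
        print(retell_score(passage, retell))  # -> 6 words tied to the passage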

    I really like the math portion of AIMSWeb. This is the type of progress monitoring that I am most familiar with.
     
  12. EdEd (Aficionado), May 30, 2012

    That's largely because reading fluency is highly correlated with comprehension as well, at least on basic levels.

    I agree - that's a pretty awful question. A more helpful question might be:

    "The green car [stopped, raced, fast] at the red light."

    In that sentence, a child who couldn't comprehend the sentence but read fluently and understood the vocab could select either "stopped" or "raced," as either would be grammatically correct (though "raced" might be slightly awkward).

    I agree - I think a more comprehensive assessment involving more skills would be more helpful, and yes - there isn't a good published progress monitoring system I've seen for reading comprehension. Doesn't mean maze doesn't provide some helpful info, but if you had a better system that is more comprehensive, you could probably get rid of maze :).

    That's actually a great idea! Part of the problem with those questions is that they may not be standardized in terms of question type, question order, difficulty level, etc., so there may be more variability across administrations that is due to the assessment itself (as opposed to variation in the child's skill level), but it still sounds better than a lot of what happens.

    Yeah, and I wouldn't ever use maze as a standalone assessment of comprehension, but again - its lack of comprehensiveness isn't necessarily evidence of its complete lack of worth. I get your point, though, and the larger problem here is that there just aren't other good PM measures for comprehension. Maybe you could create one, publish it, get rich, then send us all a check because you like us :).
     
  13. EdEd (Aficionado), May 30, 2012

    Yeah, it's the "oral retell" portion of the "oral reading fluency" test on DIBELS. Other assessments may use it as well.

    There is some statistical inadequacy with retell, probably some construct validity issues as well, and an issue with confounding variables (e.g., expressive language), but it's not completely worthless.
     
  14. a2z (Maven), May 30, 2012

    :thumb:
     
  15. leighbball (Virtuoso), May 30, 2012

    We use RtI and aimsweb in my district, but this year we only used the R-CBM tests (for 2nd grade, that's just the oral reading fluency). I progress monitor 5 students weekly who tested as below or well below grade level, and I benchmark all kids 3 times a year...I'm in the midst of doing our spring benchmarks now!

    I love how aimsweb tracks the data for me, and it's helped me prove when I feel students need to be evaluated for services. However, it's frustrating to me that not many teachers in my building use the data for anything beyond aimsweb itself. I have some kids who test well in RtI but do poorly on the DRA2 and in groups because of comprehension, not fluency, issues. One of our building goals next year is to work on analyzing data to help guide instruction.
     
  16. pwhatley (Maven), May 30, 2012

    We are required to use Aimsweb for math and writing (at the first grade level), and DIBELS Next for reading. This year, I felt like I spent so much time benchmark testing and progress monitoring (without any help, except for the DIBELS Next benchmark day) that I was spending most of my time TESTING instead of TEACHING. Please remember that all of the beginning-of-first-grade tests for Aimsweb math are given one-on-one, so if you have 25 students, that's a lot of testing time for each child while the other 24 are doing "something relevant." It's an interesting exercise in classroom management, for sure! It would have helped immensely, though, if we had been notified BEFORE the school year started that we would be doing universal screening (our former principal didn't have us doing it), so that we could work it into our scheduling. I hope to do better next year, but it's still A LOT!!
     
  17. EdEd (Aficionado), May 31, 2012

    It's interesting that you're doing math 1:1 - it's definitely possible to do it whole group. Is that a building policy? Reading, of course, needs to be administered individually, but math doesn't.

    Also, are you all progress monitoring all kids, or just targeted kids who are at risk for or already experiencing difficulty?
     
  18. knitter63 (Groupie), May 31, 2012

    I agree that it is an administration problem. The original question was our thoughts on it, and I gave mine. I have no problem with Aimsweb other than the MAZE test. However, it does not give an accurate picture of what our students are able to do. We do not progress monitor them weekly, or even monthly. We give several (3-4) formative assessments every month, and our kids are simply tested out. They couldn't care less about CBMs (Aimsweb) when we give them.
    If we simply used Aimsweb to progress monitor our students, I think we would have better results.
     
  19. EdEd (Aficionado), May 31, 2012

    That's definitely a good point - would indeed be nice to cut out assessments that weren't as useful!
     
  20. NHteacher16 (New Member), May 31, 2012

    Thank you for all the responses! Are there any pieces of data on the reports that would help you understand them better? Or ones that make them more difficult to understand? For example: a graph comparing your students to each other or to the state? Or the standard score? Grade equivalent? Etc...
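
    (For what it's worth, here is a quick, generic illustration of how a couple of those report numbers relate to each other, assuming the common convention of standard scores with a mean of 100 and a standard deviation of 15. This is not any particular vendor's report format, and the norm values are made up.)

        # Generic illustration of how a standard score and a percentile rank
        # relate, assuming the common convention of mean 100 and SD 15.
        # Not taken from any specific vendor's report; norms below are made up.

        from math import erf, sqrt

        def standard_score(raw, norm_mean, norm_sd):
            """Convert a raw score to a standard score (mean 100, SD 15)."""
            z = (raw - norm_mean) / norm_sd
            return 100 + 15 * z

        def percentile_rank(standard):
            """Percentile rank for a standard score, assuming a normal curve."""
            z = (standard - 100) / 15
            return 100 * 0.5 * (1 + erf(z / sqrt(2)))

        # Hypothetical example: a raw score of 62 WCPM where the grade-level
        # norm is a mean of 72 with an SD of 20.
        ss = standard_score(62, norm_mean=72, norm_sd=20)
        print(f"standard score ~ {ss:.0f}, percentile ~ {percentile_rank(ss):.0f}")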
     
