Not ready to let go of RtI for SLD (part 1)
- ConnectedMTSS
- Dec 22, 2020
- 7 min read
Several years ago, Wisconsin transitioned from the discrepancy model for identification of Specific Learning Disabilities (SLD) to identification through a Response to Intervention (RtI) model. The use of RtI for identification of SLD was not limited to reading; it applied to all areas of SLD (even oral expression and listening comprehension, ripping the band-aid right off!). Based on discussions and anecdotes from that time, the transition from a test-and-place model to RtI for SLD identification was likely not easy for any district.
During the transition to RtI for SLD, I served as a school psychologist and then as a Multi-Tier System of Supports (MTSS) Coordinator. As we navigated the change in the district I served, there were many questions, and answers were often based on speculation about what should work because we could not be sure how a proposed answer would play out. Thankfully, the majority of referrals for evaluation or intervention were based on reading difficulties. Providing RtI for reading was better defined, whereas math and writing proved more challenging; on the other hand, more students struggled with reading skill acquisition. Most importantly, students were benefiting from intervention in their area of weakness prior to evaluation, compared to the previous test-and-place model.
The SLD Rule using RtI never required districts to adopt a systemwide MTSS/RTI framework. If a district wanted to identify students in the area of SLD, the minimum requirement was two interventions per area and valid and reliable progress monitoring. However, as RtI/MTSS became more widespread, only using RtI to identify Specific Learning Disabilities seemed inefficient and most districts developed and deployed system-level RtI and later MTSS practices. Evolving from RtI for SLD to RtI to MTSS for prevention appeared to be more effective and efficient.
Last year, I relocated to a state where RtI/MTSS is aspirational, not required, and SLD is identified through either RtI or Processing Strengths and Weaknesses (PSW). Having been part of evaluations to identify students with SLD through the discrepancy, RtI, and now PSW models, I am even more supportive of RtI, warts and all. As a practitioner, I would argue RtI is more accurate than PSW, and research exists to support the claim (Miciak et al., 2018). RtI is not perfect, but I would struggle to find evidence that would support PSW over RtI when considering the benefit to the student.
After several years of using the RtI method of SLD identification, I recently learned there are professionals lobbying for a return to diagnostic methods of identifying SLD. Although I moved out of Wisconsin over a year ago, I was somewhat surprised to hear there would be momentum to move to a less evidence-supported and less beneficial method of evaluation for students. I would counter that the argument that RtI denies students a comprehensive evaluation is not completely true; evaluators can still conduct a thorough evaluation, just with less reliance on cognitive processing measures. Having recently experienced the PSW method, I would encourage refinement and expansion of RtI for SLD evaluations and MTSS development and deployment in all districts. I would gladly go through the growing pains of RtI once again.
3 primary differences between RtI and PSW evaluations
Intervene before or after the evaluation?
In the PSW assessment model for SLD identification, teams can use a wide array of achievement tests. If a reading delay is suspected, one of the team members administers a standardized achievement test (KTEA-3, WIAT-2) and possibly a narrow-band assessment (e.g., GORT-V). If enough assessments are administered, evaluators can often find an area of weakness even with the best of intentions (confirmation bias?). Even when several assessments are conducted, the window of administration is only a few sessions over a few weeks. Although the assessments are reliable and valid, the student’s performance can be altered by environmental factors, motivation, and quite possibly, a global pandemic.
During an RtI evaluation, standardized achievement tests are administered after two interventions, or two phases of an intervention, have been delivered to the student. By that point, the student has received at least 10 weeks of intervention and there are concurrent progress monitoring data. As a school psychologist, I had greater confidence in the achievement test data when I could also reference a progress monitoring graph with 10-12 weeks of data; the longer trail of CBM data gave me more confidence in the achievement test results when the two pointed to similar outcomes. When the CBM and achievement test outcomes were in conflict, more problem-solving was needed.
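To make that comparison concrete, here is a minimal sketch of the kind of trendline check a team might run against roughly 12 weeks of CBM data. The scores, the goal rate, and the use of a simple least-squares slope are hypothetical illustrations, not a procedure any district is required to use.

```python
# A minimal, hypothetical sketch: comparing a CBM trendline to an aimline
# over a 12-week intervention. Scores and the goal rate are invented.
import statistics

weeks = list(range(1, 13))                                  # 12 weekly probe sessions
wcpm = [38, 40, 39, 43, 45, 44, 48, 50, 49, 53, 55, 57]     # hypothetical words correct per minute

# Ordinary least-squares slope: WCPM gained per week of intervention
mean_x, mean_y = statistics.fmean(weeks), statistics.fmean(wcpm)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, wcpm)) / \
        sum((x - mean_x) ** 2 for x in weeks)

goal_rate = 1.5  # hypothetical aimline: 1.5 WCPM of growth per week
print(f"observed growth: {slope:.2f} WCPM/week vs. goal {goal_rate} WCPM/week")
print("adequate response" if slope >= goal_rate else "inadequate response")
```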
During the RtI process of identifying SLDs, teams were required to narrow to 1-2 specific skills and monitor progress. Narrowing in on specific skills often required teams to consider a progression of skills or stages of learning. Identifying the skill that best fit the student also required the use of the problem-solving process, including problem analysis. Even if this process occurred imperfectly, discussing hypotheses and reviewing data to focus on 1-2 skills was often more precise and beneficial than using standardized achievement tests to develop a hypothesis and qualify a student for special education.
Services are generally provided only when a student qualifies for SLD in a PSW model. In the RtI evaluation model, a student receives interventions during the evaluation. If a student scores above the qualification cutoff in the Wisconsin RtI model (a student must score below a standard score of 81 after two interventions to meet the criterion) but the team cannot continue the intensity of services in a general education setting, the team can still identify the student with a disability because of the intensity of services required to maintain performance. In a PSW model, if the student is close but lacks sufficient evidence of strengths and weaknesses, the team could determine that a disability is not present and the student would not be provided intensive intervention.
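The decision logic might look something like the sketch below. The below-81 cutoff after two interventions is the Wisconsin criterion described above; the function name, the intensity flag, and the reduction of a team decision to three branches are my own illustrative simplifications, not the rule text.

```python
# Illustrative decision logic only; the standard-score cutoff of 81 after two
# interventions reflects the Wisconsin criterion described above. The intensity
# flag and the three-branch structure are hypothetical simplifications.
def sld_rti_decision(standard_score: float,
                     interventions_completed: int,
                     needs_special_ed_intensity: bool) -> str:
    if interventions_completed < 2:
        return "continue intervention: two interventions required before a decision"
    if standard_score < 81:
        return "inadequate achievement criterion met: team considers SLD eligibility"
    if needs_special_ed_intensity:
        return ("above cutoff, but the intensity of services cannot be sustained in "
                "general education: team may still identify a disability")
    return "adequate response: continue or fade supports in general education"

print(sld_rti_decision(standard_score=79, interventions_completed=2,
                       needs_special_ed_intensity=False))
```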
Reliable AND Valid FREQUENT Monitoring
The SLD Rule in Wisconsin specified that reliable and valid measures were to be used to monitor progress. Reading between the lines, this meant that the use of Informal Reading Inventories or running records was out. Curriculum-Based Measures (CBMs) were the only measures that fit the requirements for monitoring within the SLD Rule. Between narrowing on specific skills and monitoring progress using reliable and valid assessments, students were already more likely to receive intensive instruction accurately aligned with their skill deficits. In my experience and opinion, the requirement of reliable and valid monitoring is one of the linchpins, if not the linchpin, for the development and deployment of academic RtI.
The “rule” also contains language requiring evidence-based or research-based interventions. As an MTSS coordinator, aligning the district’s interventions to meet the “research-based” or “evidence-based” requirements was more of a challenge. Districts are under local control in Wisconsin and can choose the products and services they provide to students. This allows a broad interpretation of “research-based,” and districts can select interventions that are loosely based on research but likely lack evidence of validation. Quality control over intervention selection is difficult, and fidelity of delivery can be wide-ranging. When reliable and valid monitoring is conducted, however, trendlines and student progress are often clear with visual analysis.
As a district coordinator, I did not always have the authority to approve, purchase new, or discontinue interventions that were used or preferred in the district. There were times when student responses to interventions were not what I expected or would have predicted. Interventions with suspect research sometimes produced strong effects for our students; on the other hand, interventions with a strong evidence base sometimes produced less positive effects. Because we used reliable and valid curriculum-based measures, the trendlines and student performance were transparent to teams, administrators, and me.
Efficient Program Evaluation Evolution
As our screening and monitoring practices were established in our MTSS framework, it was logical to apply and generalize those methods to improve efficiency and effectiveness across the system. Rather than monitoring progress only for students being evaluated, we developed a continuum of monitoring based on intensity of need. All students who demonstrated a need for more intensive instruction were administered CBMs. For students with benchmarks below a level that indicated significant need, weekly monitoring was conducted. For students who were below target but above the intensive intervention thresholds, monitoring was conducted every other week.
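As a rough illustration, that continuum can be expressed as a simple rule like the one below. The benchmark metric and percentile thresholds are hypothetical placeholders; districts set their own measures and cut points.

```python
# A sketch of the monitoring continuum described above. The percentile
# thresholds (10th and 25th) are hypothetical, not the district's actual cut points.
def monitoring_frequency(benchmark_percentile: float) -> str:
    if benchmark_percentile < 10:   # significant need: intensive intervention
        return "weekly CBM progress monitoring"
    if benchmark_percentile < 25:   # below target, above the intensive threshold
        return "CBM progress monitoring every other week"
    return "seasonal benchmark screening only"

for pct in (5, 18, 60):
    print(pct, "->", monitoring_frequency(pct))
```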
Trendlines were monitored by grade-level problem-solving teams, and benchmark assessments were reviewed three times a year. Because the data obtained were similar for all students provided interventions, a method emerged for analyzing outcomes and conducting program evaluation. After each benchmark season, results could be compared between groups, and informal updates on intervention performance were possible. At the end of each year, seasonal benchmarks could be used to compare grade-level students to those who received interventions. District leaders, teachers, specialists, and interventionists could review outcomes and make inferences or suggestions on how to improve outcomes in the next school year.
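A bare-bones version of that seasonal comparison might look like the sketch below, which assumes each record holds a student's intervention group and fall and spring benchmark scores. The data and group labels are invented for illustration.

```python
# A minimal program-evaluation sketch: mean fall-to-spring benchmark growth
# by group. Records and scores are invented for illustration.
from collections import defaultdict
from statistics import fmean

records = [
    {"group": "grade level",    "fall": 52, "spring": 78},
    {"group": "grade level",    "fall": 60, "spring": 85},
    {"group": "intervention A", "fall": 28, "spring": 61},
    {"group": "intervention A", "fall": 31, "spring": 58},
    {"group": "intervention B", "fall": 30, "spring": 44},
]

growth_by_group = defaultdict(list)
for r in records:
    growth_by_group[r["group"]].append(r["spring"] - r["fall"])

for group, gains in sorted(growth_by_group.items()):
    print(f"{group}: mean fall-to-spring growth = {fmean(gains):.1f}")
```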
The consistency of measurement led to increased data literacy among district academic coaches and, eventually, classroom teachers. As assessment data were split by group and performance was summarized by group, stakeholders could examine which groups over- and underperformed relative to the grade level. At first, program evaluation was threatening, and results could have been viewed as simply good or bad. However, the data were only an indicator, and once specialists saw patterns in outcomes, the conversation turned to which variables to change, increase, or eliminate rather than to blaming or accusing others. I would support requirements for districts to disaggregate and publicly display outcomes by demographic group.
Summary
As a school psychologist and district MTSS coordinator who has provided system support and evaluated and identified students with Specific Learning Disabilities under the discrepancy, Response to Intervention, and Processing Strengths and Weaknesses models, I support wider use of the RtI model. Change is difficult, and changing the method of identifying Specific Learning Disabilities across a state is incredibly difficult. However, when the method pairs evaluation with intervention for students, the change and the difficulty appear justified. I am not aware of a perfect method of identifying disabilities, and those who have served on evaluation teams know that decisions can be difficult and emotional. Using ongoing progress monitoring and achievement test results at the end of two interventions at least provides students with the necessary instruction and two data points to use when making a very high-stakes decision. The SLD rule served as an efficient start for districts to build defensible procedures to provide intensive intervention to students in need of acceleration.
Part 2: Nothing is perfect, the challenges of implementing RtI for SLD and scaling RtI to MTSS across a system.
Reference
Miciak, J., Taylor, W. P., Stuebing, K. K., & Fletcher, J. M. (2018). Simulation of LD Identification Accuracy Using a Pattern of Processing Strengths and Weaknesses Method With Multiple Measures. Journal of Psychoeducational Assessment, 36(1), 21–33.