Speaker 1: Hello everyone. Thank you for joining us for today’s webinar, Ensuring Equity & Excellence for Students with Disabilities. We appreciate you taking time out of your busy schedules to participate today. We also appreciate the sponsor of today’s webinar, CORE, the Consortium on Reaching Excellence in Education. Before we begin, I’m going to review just a few quick housekeeping items. We welcome your questions, and we encourage you to send them throughout the presentation for the Q&A period that we’ll hold at the end. To send a question, use the questions feature in your control panel: type your question into the top box and click send. I’ll receive your question and put it into the queue to be answered.
Speaker 1: And I apologize, we’re just having a little bit of technical difficulty today, so thank you for bearing with us. If you have any technical difficulties, as I just was, go ahead and send a question in to me and I’ll do my best to resolve the problem for you. We will be sharing a recording of the webinar with you as well as the slide deck, so keep an eye on your email tomorrow for details on how to access those materials. Now, let’s get started. I’m pleased to introduce today’s speakers, Dr. Michelle Hosp and Dr. Arun Ramanathan. Dr. Michelle Hosp is associate professor of special education at the University of Massachusetts, Amherst, where her research focuses on reading and data-based decision making within multi-tiered systems of support. She is a nationally known trainer and speaker on problem solving and the use of progress monitoring data.
Speaker 1: Michelle has served as director of the Iowa Reading Research Center and as a trainer with the National Center on Progress Monitoring and the National Center on Response to Intervention, and is currently on the Technical Review Committee for the National Center on Intensive Intervention. She has published numerous articles, book chapters and books. Michelle is joined today by Dr. Arun Ramanathan, who is CEO of Pivot Learning, the largest nonprofit provider of technical assistance in the areas of leadership development, teaching and learning, and education finance to school districts in California.
Speaker 1: Prior to joining Pivot, Arun served as the executive director of the Education Trust West and as chief student services officer in the San Diego Unified School District, where he was charged with leading multiple district departments, including special education, mental health, nursing and counseling. Arun has published opinion editorials on a range of education topics and has been featured on NPR, local radio and television. He has also presented, given keynote speeches and served on expert panels at dozens of meetings across the country.
Speaker 1: We’re excited to have both Michelle and Arun with us today to share their insights. And as you can see here on our agenda, we’ve got lots to cover today, starting with a review of the Endrew F. v. Douglas County School District Supreme Court ruling that came down last year; common challenges faced by students, their parents and school districts in complying with Endrew; and then segueing into the importance of ambitious IEP goals under Endrew, how to use CBM data to write those robust goals and objectives, strategies for evaluating IEP goals and ensuring their rigor, and tools for using research-based instructional practices to make sure our students are achieving the goals we set for them.
Speaker 1: So before we get started, we would like to hear from you, a little bit about the impact that Endrew may or may not have had on your state and district. So we’re going to just launch a quick poll question here for you and give you just a second to put in your answers and we’ll see what impact is being had across the nation. And we’ll give you just a few more seconds. It looks like a few of you are still voting.
Speaker 1: All right. Let’s see what we have here. So it looks like we’ve got about 50% who have not seen an impact at the state level, and about 46% who have not seen an impact at the district level. So leaving us 29% with an impact at the state level and 21% at the district level. So, Arun, I know you’re going to dive a little deeper into Endrew and its impact. So I will go ahead and turn things over to you.
Dr. Arun R.: Thank you. Can you hear me?
Speaker 1: Yes, you sound great.
Dr. Arun R.: All right. So I’ll be quite quick. What I’m going to do is turn this over to Dr. Hosp to talk more in depth about Endrew and how it should be impacting both the way IEPs are written and the way services are provided to students with disabilities. What I’m going to do is [inaudible 00:05:09] some context on the front end as to why we are working with you, with researchers like Dr. Hosp, and with districts and schools around the nation to ensure equity and excellence for students with disabilities. Briefly, Pivot is a technical assistance organization based in Oakland, California. We work with a lot of school districts both in California and nationally, and we are now merged with CORE. We did the merger last year, in 2017, and now are serving school districts both in terms of providing support at a systems level through Pivot and down at the classroom and teacher level through CORE, in California and all across the nation.
Dr. Arun R.: As noted in my background, I have a history in special ed, so if we go to the next slide, I have worked all the way from being a paraprofessional to running a very large district department in California. And if you’ve spent time in special ed, this data is not unfamiliar. There have always been wide and persistent achievement gaps for students with disabilities. The data is the same in California as it is nationally. You do see variation from school to school, but the latest data from California, I think, is particularly depressing. Only about 14% of students with disabilities are meeting or exceeding standards on our statewide assessment in ELA, and about 11% in math. So we have a long way to go, and this sets the context, as you go through the next few slides, for the challenge that all of us, educators, parents and service providers, have to confront together.
Dr. Arun R.: How do we change this? This is the NAEP data at the fourth grade level. This is the NAEP data at the eighth grade level: same big gaps, perhaps not as bad as California, but pretty depressing, and the same at the 12th grade level. What we’re going to talk about in this webinar, and what Dr. Hosp is going to cover, is how Endrew in many ways could be a game changer in relation to changing this data, and how the IEP itself can be a tool for making that change happen. So what I’m going to do is turn it over to Dr. Hosp, and she’s going to talk about how to make that happen.
Dr. Michelle H.: Thank you so much. What you can’t see me doing is, I’m shaking my head. I’m like, yes. Yes. It’s the game changer. So, thank you all for joining us, and I’m really excited to talk specifically about the opportunities that the Supreme Court has given us in the ruling on Endrew. Can we go to the next slide please? So what I want to do is just do a quick side by side, because up until Endrew, the default was really looking at Rowley, and that case was actually about a student who was high performing. The student was progressing year to year, but the parents insisted that she was still not working up to her potential. So they wanted more. And what the Supreme Court said in Rowley is that the IEP just has to set out an educational program that’s reasonably calculated to ensure that the child receives some type of educational benefit.
Dr. Michelle H.: What has happened over the years and across lots of cases is that educational benefit has been interpreted as just squeaking by. So merely more than de minimis: having some type of benefit, demonstrating that everyone is doing their due diligence and the student is receiving appropriate services and getting by, oftentimes with the bare minimum. So then enter Endrew. Endrew is actually a case about a student who was diagnosed with autism spectrum disorder and was exhibiting some delays in behavior as well as academics. And what his parents found is that year after year, the school district in Colorado was basically reconstituting his IEP without changing much about the goals and objectives he was achieving. So the parents became frustrated with this and said, he’s really not receiving educational benefit, because how could he be benefiting if his goals are staying the same year to year to year?
Dr. Michelle H.: So based on that, the Supreme Court actually redefined what it means to have a really thoughtful IEP for individuals with disabilities who are being served. So, some of the nuances and differences here are that the IEP still requires an educational program, same as Rowley said, that’s reasonably calculated, same as Rowley said, but now to enable the child to make progress. Progress was not mentioned in Rowley. Progress that is appropriate in light of the child’s circumstances, which is also a new addition, and the chance to meet challenging objectives. So there’s a little bit more to what the Supreme Court envisions as the purpose of IEPs for individuals. It can’t just be that the student is receiving some benefit and you can demonstrate that any way you’d like. Now they’re saying that students actually have to make progress in light of their circumstances and have the opportunity to meet challenging objectives. So the Court actually rejected the de minimis standard and said that you have to show some gain in order to say that the student is benefiting. Next.
Dr. Michelle H.: Here are some things that relate directly to progress. When we think about progress, the Supreme Court really thought about that as moving forward. You actually have to have some demonstrated movement beyond de minimis, or the student can’t be regressing rapidly, as might be predicted in some type of progressive degenerative condition, like muscular dystrophy. If a student is receiving services and we know that they’re actually having muscle atrophy, are we doing everything we can to support them so that it’s not impacting them at a greater rate than we would expect? So there are a few things that need to be considered when we think about progress. What is the student’s placement? And this is where Rowley and Endrew really separate themselves. Because remember, Rowley was based on a case of a student who was receiving services in the general education classroom and was making grade level advancements from year to year, was getting passing grades, and therefore the Court said that is a good enough standard, and Endrew continues to support that.
Dr. Michelle H.: So if the student is receiving services, and they are fully included in the general education classroom, and they achieve passing marks and are advancing from grade to grade, under Endrew as well as under Rowley, the Court would say that is more than de minimis. You are showing good progress forward and that would be appropriate. The difference is that Endrew was a student who wasn’t always fully included. So now we talk about kids who may not be receiving all of their services in the general education setting.
Dr. Michelle H.: So for those students, progress has to be appropriately ambitious, remember, in light of their circumstances, just as grade-to-grade advancement is for those students who are fully included in the regular classroom. So some things to consider: what is the curriculum that’s delivered, how is it delivered, and by whom? So the program. Remember, programs need to be ambitious for the student and they need to contain challenging objectives. And then the child’s circumstances: again, the IEP is supposed to be individualized so that the instruction we are offering for that specific student is really specially designed to meet that student’s unique needs. Next.
Dr. Michelle H.: So what does this mean for practice? What are some things that IEP teams can take away when they’re starting to think about, okay, we really have to be thoughtful about what standards we are applying to our students? How are we going to monitor that? How are we going to make sure that they have the opportunity to grow and that we are actually able to measure and demonstrate that? Because we have to be looking at progress, this includes ongoing assessments for monitoring students’ progress toward their goals, and those assessments need to inform instructional decisions. If we’re going to be collecting data for kids who are on IEPs, that data should be formative, informing what we are going to do to meet that student’s needs every day in the classroom. Some assessments are better designed to do this than others.
Dr. Michelle H.: And one assessment that has been designed specifically for this purpose is curriculum based measurement, or CBM. Over 30-plus years of research supports it. The reason why Stan Deno and Phyllis Mirkin at the University of Minnesota first designed it is that they wanted to make sure that for kids who were receiving specially designed instruction, teachers could quickly identify whether or not the student was making progress and getting better given the instruction they were receiving, and that they had thoughtful ways to look at that data and make decisions much more quickly than they could with other assessment practices in order to meet kids’ needs.
Dr. Michelle H.: The great thing about CBM measures is that they can be used for multiple purposes. You can actually use the data, and we’ll talk about this specifically, to set really robust IEP goals that meet a higher standard. You can use the data to monitor the student’s progress toward those goals. And you can use that data to reflect and, as an educator, think: are we doing enough? Is what we are providing the student getting them to their goal in the most efficient way, or do we need to do something different? Do we need to change our instruction, or do we actually need to go back and get additional data because we haven’t quite figured out the right instructional practices for this individual student? Next slide.
Dr. Michelle H.: So, one of my former jobs, besides being a professor and a director of a reading research center, is that I also worked at a state department of education. And we actually did really thoughtful reviews and monitoring of IEPs in the state, because we wanted to see whether the goals being written were appropriate. It also allowed us at a statewide level to think about what professional development opportunities and supports the state could be providing for its districts. So whether you’re at a state level, at a district level, or a teacher in a classroom, how are you currently monitoring how IEPs are being written and carried out in your classroom or your building or your district? Do you have a way to do that?
Speaker 1: I’m going to launch the poll and we’ll see what folks have to say.
Speaker 1: We’ll get just a few more seconds. All right. Let’s see.
Dr. Michelle H.: So others, some have rubrics, some [inaudible 00:18:24]. Yeah. And I apologize, we should have put up an option for “there is no current way to monitor IEPs.” Okay. So that’s helpful to know, because I think it speaks to, how do we know whether what we’re doing is effective? So let’s go to the next slide.
Dr. Michelle H.: When you think about your practices and really being able to get a handle on, are we doing our due diligence? Are we writing really good IEP goals for all of our kids? Is there a practice you can leverage, one that you’re already using, to evaluate how you’re doing at writing IEP goals?
Speaker 1: And if folks, if you want to put the answer into the chat box, into your questions box, we’ll take a look at what folks have to say and read it out. So just take a minute to put your thoughts there. Basically, for those of you that answered other, we want to hear how you’re doing it.
Speaker 1: We use a checklist. We have data meetings, teacher feedback. Let’s see, we have a data analytics tool at the district level that provides us with reports and monitoring.
Dr. Michelle H.: Oh, nice.
Speaker 1: It looks like we’ve got a couple more people writing in. Lots more of, we just meet as a group. And another person that looks like they have a system that allows them to do that. Yeah. So it looks like we’ve got a variety of ways.
Dr. Michelle H.: Okay. Because one of the things to think about is, we want to be thoughtful, particularly when you think about the data that was shared at the beginning of this webinar and the percentage of kids with disabilities in California who are meeting statewide standards: that was 14% in ELA and, I believe, 11% in math. We have a lot of work to do. And so I think really thinking about the IEP as the document, our contract with kids and families, of how we are meeting their needs. So I think this is great to think about other ways that we can be thoughtfully gathering this data and really leveraging some things that you are already doing. And then hopefully, by the end, you’ll have some ideas of other ways that you might be able to review IEPs. Okay, next.
Dr. Michelle H.: So one of the things that I want to touch on, and it’s interesting, is that the data that was shared, the 14% in English language arts and the 11% in math proficiency, that was against a standard. I believe you use Smarter Balanced in California. So on Smarter Balanced, that is the state standard. That is the criterion that you’re holding all kids accountable to. So when you look at your kids that are receiving special education services and have IEPs versus those that don’t, we see those big discrepancies. And again, that is national. That’s not a California issue, that’s an everybody issue. We really need to think about this: standards are supposed to be meaningful. Standards are supposed to give us some idea of what it means for a child to be educated.
Dr. Michelle H.: What are those core skills, those basic levels of proficiency, that we want all students to have going through our educational system? It should be no different as we think about students who are receiving special education services, because we know the majority of students are in a mild-to-moderate category and should be able to achieve at those same levels as their grade and age peers. So the standard should be the same. One of the things I like about a standards-based IEP approach is that it says we really expect all kids at certain points in their trajectory through our school system to hit basic levels of proficiency. So, that’s what we mean by a standards-based IEP.
Dr. Michelle H.: So the first thing to consider is, what is the grade level content for the grade the student is enrolled in, or based on their age? Then we need to also think about what is happening in the classroom: what type of student data do we need to determine where the student is functioning in relation to those standards? If we know we have a standard, we need to understand what the gap is for that individual between where they’re currently performing and the standard we hope they’re able to achieve. Then we also need to develop the present level of their academic and functional performance. We need to actually put that down in very specific terms. Next slide.
Dr. Michelle H.: Then we need to look at how we are going to measure those annual goals that align with that grade level academic content standard. And again, this goes back to Endrew. What are those goals, and how are we going to measure that student against that standard, against those goals? Then, step five, how are we going to assess and report that student’s progress throughout the year? Step six is how we identify the specially designed instruction, including the accommodations and modifications that are needed to access and progress in the general education curriculum. Then we need to determine the most appropriate assessment option for that student given what their goals are and the skills they’re working on. Next slide.
Dr. Michelle H.: One of the things that I want to talk about is what it would look like to set a standard using CBM data. One of the things that makes CBM data work so nicely for setting your goal, but also for monitoring progress toward that goal, is that we know it is aligned very tightly with what is being taught. So if the student is working on improving in reading, the assessment looks like something that we would expect a typical second, third, fourth, whatever grader to be able to do in reading. The same would be true for math or writing.
Dr. Michelle H.: The thing that is really nice about using CBM data as well is that it is highly predictive of those high-stakes outcome assessments. So, that’s the standard. That’s the Smarter Balanced. That is how those benchmarks are determined, and we’ll talk more about this in a couple of slides: what is the level on the assessment that the student has to achieve that will predict that they will also achieve at that level on some other outcome standard or state standard that you have for all kids.
Dr. Michelle H.: The other thing that’s really unique about CBM measures is that they are extremely sensitive to capturing students’ growth. This is because there are lots of forms that are equivalent, meaning the difficulty doesn’t change: whether the student receives form one or form twenty, it is of the same difficulty. So what we’re really capturing is the student’s skill and not error in the assessment. It allows us to really thoughtfully monitor that student’s progress toward our goals. The other thing that makes it unique and really wonderful is that it’s super efficient. It only takes about one to five minutes. When I work with teams and teachers and they run through all of the lists of the assessments they give, I ask them, what question does this answer for you, or what are you using this data to do to help improve your teaching?
Dr. Michelle H.: And if they can’t answer it, I say don’t test, just teach. Because we know that the best way for educators to spend their time is actually teaching students, not testing them. So CBM measures allow you to spend more time teaching and less time testing. They’re also easy to administer, score and interpret because the data is graphed. We have used it with students such that they can actually graph their own data and take it home and share it with their parents. So it also becomes a vehicle for that collaboration between school and home and for sharing student progress. The other thing is that it has over 30 years of research behind it, demonstrating it is highly reliable and valid. So as a package of assessment tools, it is one of the best out there for writing IEP goals. Next slide.
Dr. Michelle H.: What does it look like to use CBM measures to write an IEP goal? There are really seven criteria that we would say should be in it. And again, this could be used in an evaluative way to see if our schools are really adhering to these things. So of course we have to have the time: what is the timeframe that we’re writing this goal for? Typically that’s a year, but it might be different depending on the student. The learner: who is this goal being written for? The behavior: what is the specific skill that we want the student to demonstrate, the one we’re actually going to be using as our gauge to determine whether or not they’re making progress? The level: what is the standard that we are measuring that kid against? Is it a second grade standard? Is it a third grade standard? And then the content: what is the actual area that we are having the student learn about?
Dr. Michelle H.: Then really describing the material is helpful, so that everyone understands: we know the timeframe, we know the student this is for, we know what the student is going to be doing when we collect data, what the standard is, and what content area we’re focusing on, but then, what are the materials that we are going to be using to accomplish that? And then the criterion: what is the goal? If we’re going to be doing these things, what is the actual level of expected performance? That should include a timed fluency piece but also an accuracy piece. Those two things have to happen together in order for us to really understand whether the student is benefiting. Next slide.
Dr. Michelle H.: So this is what an example would look like if you were using CBM data to write a goal in early reading, or in reading from first grade on up if you were using oral passage reading, often referred to as oral reading fluency. And again, I chose reading because we know that regardless of the 13 disability categories that we can identify students under, the majority of students have goals in the area of reading. So I tend to focus heavily on that, because we know it tends to be the greatest area of need for our students in special education.
Dr. Michelle H.: So in early reading, it would read something like: in one year, Lindsey will produce letter sounds from a kindergarten sheet of random letters from letter sound CBM progress monitoring material, at 35 letter sounds correct in one minute with greater than 95% accuracy. For oral passage reading, it would sound something like: in one year, Jose will read aloud a second grade reading passage from oral passage reading CBM progress monitoring material, at 90 words correct in one minute with greater than 95% accuracy. So again, you’re getting the level of performance, you’re getting the context of the timeframe, and you’re also including how accurate that student needs to be in order for us to feel confident that they’ve met their goal and hit some level of proficiency. Next slide.
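To make the seven components concrete in another form, here is a minimal sketch, not from the webinar, of how a CBM-based goal statement could be assembled from those pieces. The class name, field names, and wording are illustrative assumptions, not a prescribed format.

```python
# Illustrative only: assembling a CBM-based IEP goal statement from the
# seven components (time, learner, behavior, level, content, material,
# criterion with fluency and accuracy). Names/wording are examples.
from dataclasses import dataclass

@dataclass
class CbmGoal:
    timeframe: str   # e.g., "one year"
    learner: str     # who the goal is written for
    behavior: str    # observable skill, e.g., "read aloud"
    level: str       # grade-level standard, e.g., "second grade"
    content: str     # content/material type, e.g., "reading passage"
    material: str    # the progress monitoring material used
    fluency: int     # expected count correct per minute
    unit: str        # what is counted, e.g., "words"
    accuracy: float  # expected accuracy, e.g., 0.95

    def statement(self) -> str:
        return (f"In {self.timeframe}, {self.learner} will {self.behavior} "
                f"a {self.level} {self.content} from {self.material} "
                f"at {self.fluency} {self.unit} correct in one minute "
                f"with greater than {self.accuracy:.0%} accuracy.")

goal = CbmGoal("one year", "Jose", "read aloud", "second grade",
               "reading passage",
               "oral passage reading CBM progress monitoring material",
               90, "words", 0.95)
print(goal.statement())
```

A reviewer could run each goal in an IEP against a checklist like this to confirm that none of the seven components is missing.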
Dr. Michelle H.: How do we set the goal? This is often a hot debate topic for people. When I started using CBM assessments over 20 years ago, we were norming our districts. I actually worked in Clark County, Las Vegas, and we went around and tested hundreds of kids in reading, math, spelling and writing, and spent many, many hours coming up with local norms. Nowadays, that’s not necessary, but we’ll talk about each of these approaches and when they might be applicable. So, end-of-year benchmarks: this is what we would most highly recommend when using CBM measures. The reason is that publishers set those benchmarks based on very specific analyses, called predictive validity analyses, where they are looking at the score on the CBM measure that the student has to obtain in order for us to predict that they will also hit a level of proficiency on some other, higher criterion.
Dr. Michelle H.: And it’s often a state level test like Smarter Balanced or whatever the state might be using. It just so happens that those benchmarks actually have a lot of meaning to them. It’s not just a number that they pulled out of the air; it is a score that is supposed to represent student skill. In order to achieve that score, students have to have certain behaviors that allow them to actually reach it. And that score becomes meaningful because it tells us how likely they are to hit a standard that we would have for all kids, based on some level of expectation and a statewide test, again, for all kids. So the benchmarks are typically what we encourage people to use, because they are really meaningful and they actually translate to higher standards and a leveling of the playing field: how do we close that gap? If we always have our eye on the prize and we know we have to have students performing at a certain level to close that gap, it makes it much easier to attain.
Dr. Michelle H.: The next way to set goals would be using norms. This would be recommended only if you don’t have benchmarks available, or if there are really unique populations or subgroups within a region, so unique that you really want your own norms in order to set thoughtful goals and monitor kids’ progress. So of course, thinking is always required. There’s never one way that’s right for everybody, but you have to have a really good reason for wanting to use norms over benchmarks if benchmarks are available. Then the other way to look at it is rate of growth. These are criteria that researchers have put out over the years, and they are typically framed as ambitious growth versus realistic growth.
Dr. Michelle H.: And basically what it is, is it’s telling you what is the increase per week you would expect this student to make on their CBM assessment. Then you just add that up over the number of weeks you’re going to be monitoring progress and set the goal. So it’s based on how they’re currently performing and then you add x number per week that you think they’re going to improve by. You add those numbers up and at the end of your IEP goal, the end of your timeframe, that becomes your standard that you’re shooting for. The problem with that is that, often kids are already behind. And when we just use realistic or ambitious goals, we’re never going to close the gap because the kid is already so far behind. And often people will say, well, we can’t use ambitious because of course the student has a disability.
Dr. Michelle H.: So we have to be realistic about it. But if that is more of a mindset of de minimis, of if we’re really just expecting the student to keep squeaking by, then we’re never going to get them to a level of competence that they can be proficient and on their own. And that should always be the goal for special education is, what is the criteria of when we know the student has achieved enough that they are on a path to greater success along that is much more similar to their peers who are not receiving special education services?
Dr. Michelle H.: The last way is the intra-individual framework. This is where you look at the rate of progress the student is currently making, and then you just use that rate, saying, well, this is what they’re currently doing, so we know it’s reasonable for them to continue to do it, and you use it to project out into the future for them. The problem, again, should be pretty obvious: students who are receiving special education services are already behind, and if we only allow them to grow at the rate they’re currently growing at, they’re going to stay behind. In fact, they’re going to get farther and farther behind. So the rate of growth and intra-individual frameworks are really only recommended for those kids who are high flyers, and those, honestly, are kids you’re not monitoring progress on anyway. So we really would encourage people to be thoughtful about how you set that goal, to look closely at the benchmarks that publishers recommend, and to consider whether there are really unique needs that call for local norms for certain subgroups within your district. Next slide.
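The rate-of-growth arithmetic described above, current score plus an expected weekly gain times the number of weeks, can be sketched in a few lines. The function name and all the numbers here are illustrative assumptions, not published growth rates.

```python
# Illustrative sketch of the rate-of-growth approach: end-of-period goal =
# current (baseline) score + expected gain per week * weeks monitored.
# The weekly growth rates and week count are example values only.
def rate_of_growth_goal(baseline: float, weekly_growth: float, weeks: int) -> float:
    return baseline + weekly_growth * weeks

# A student currently reading 40 words correct per minute, monitored 30 weeks:
realistic = rate_of_growth_goal(40, 1.0, 30)   # 70.0 wcpm
ambitious = rate_of_growth_goal(40, 2.0, 30)   # 100.0 wcpm
print(realistic, ambitious)
```

As the speaker cautions, if the grade-level benchmark sits well above even the ambitious figure, a goal built this way can leave the gap in place, which is one reason benchmarks are preferred when they are available.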
Dr. Michelle H.: So this is an example of what CBM data would look like graphed. What is really nice about this is that it has lots of information. We have our baseline, so we can see clearly where the student started. Oftentimes we collect three separate data points and take the median, to make sure we are confident that it’s truly representing the kid’s skill. CBM publishers are actually getting better at this, and some now recommend two instead of three. Then we have the number of weeks along the x-axis at the bottom, and on the y-axis, from zero to 130, our criterion of how many words are read correctly. This would be in relation to a CBM oral passage reading, or oral reading fluency, probe. And then we set that goal. Again, that goal is going to be based on either the benchmark or norms, or however the IEP team determines is most appropriate given that student’s specific circumstances. Then you’re going to be plotting that data.
Dr. Michelle H.: The nice thing is, if you're collecting data weekly, you quickly will see whether the student is on the trajectory to reach their goal. It's when they're not on the trajectory that we can put a cut in the graph; those are our intervention lines, and they mean we're going to do something different. Lots of times I encourage staff to write right on the graph what it is they're changing. Is it an instructional change? It could be an intensity change, where we actually think we've matched the student to the right intervention; they just need more of it, so we're going to double dose or give them more of that intervention. So there are lots of ways to make an intervention change. And then we can continue to monitor their progress and quickly see whether that change made a difference and whether the student is able to make additional progress. Next slide.
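To make the mechanics concrete, here is a minimal sketch, with hypothetical scores and an assumed end-of-year benchmark, of how the goal line and trend line on a CBM graph can be computed:

```python
from statistics import median

# Hypothetical weekly CBM oral reading fluency scores (words read correctly).
baseline_probes = [42, 38, 45]   # three baseline probes; take the median
weekly_scores = [40, 43, 41, 46, 48, 47, 51, 53]

baseline = median(baseline_probes)
weeks_to_goal = 30               # assumed weeks remaining in the school year
goal = 90                        # assumed end-of-year benchmark (words correct)

# Goal line: the words-correct gain needed per week to reach the goal.
needed_slope = (goal - baseline) / weeks_to_goal

# Trend line: ordinary least-squares slope through the observed scores.
n = len(weekly_scores)
xs = range(1, n + 1)
x_mean = sum(xs) / n
y_mean = sum(weekly_scores) / n
slope = sum((x - x_mean) * (y - y_mean)
            for x, y in zip(xs, weekly_scores)) / sum((x - x_mean) ** 2 for x in xs)

# If the observed trend falls short of the needed slope, the team would
# consider an intervention change (a "cut" in the graph).
on_track = slope >= needed_slope
```

Plotted weekly, the goal line's slope is the gain needed per week; when the observed trend drops below it, that is the cue to draw an intervention line and change something.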
Dr. Michelle H.: So, the important thing to consider when we're looking at the graphs is that we have to look not only at the rate of improvement, whether the data points are matching that goal line, but also at accuracy, because often one will dip while the other is growing. For example, we will often see kids slow down but become more accurate as they are attaining those really core reading skills. That's a good thing. That's not a kid who is faltering; that is a kid who is learning the skill and becoming more accurate at it, and what they need practice on then is becoming more fluent at it.
Dr. Michelle H.: So those two things always have to be looked at together. Yes, the student might be increasing the number of words they're reading correctly, but if their errors are also increasing, then that's really not a healthy trajectory for them, and it doesn't really demonstrate improvement. So we have to think about: is the student really improving, and how do we know? What are the things we can point to so that we can say, with confidence, that what we're giving the student is meeting their needs and they are actually improving? Next slide.
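The words-correct-versus-errors check described above can be sketched like this (hypothetical weekly numbers; `trend` is just a rough first-to-last indicator, not a formal trend line):

```python
# Hypothetical paired weekly scores from the same probes.
words_correct = [40, 44, 47, 51]
errors = [3, 5, 8, 11]

def trend(values):
    """Rough trend indicator: average change per week, first to last."""
    return (values[-1] - values[0]) / (len(values) - 1)

rate_improving = trend(words_correct) > 0
errors_growing = trend(errors) > 0

# Words correct is rising, but errors are rising too: not a healthy
# trajectory, even though the rate graph alone would look fine.
healthy = rate_improving and not errors_growing
```

The point is that neither number alone tells the story; both series have to be read together before the team concludes the student is improving.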
Dr. Michelle H.: One thing to think about is that it really matters how often you're going to collect data. So this is just a quick overview: in order to decide whether a student is benefiting from the instruction we're delivering, we need at least seven to 12 data points to make a really solid, statistically valid decision. So the data needs to be collected regularly and frequently in order to make those decisions. The difference is going to be behavior. It depends on the behavior, but behavior data is often something that needs a bit more attention, so we want to be collecting that data daily, if not every hour or so, in order to really understand whether the student is making improvements in their behavior.
Dr. Michelle H.: Academic data is usually collected around once or twice a week. The more data we collect and the more frequently we can collect it, the sooner we can make decisions about whether the student is actually benefiting and demonstrating growth given the intervention and instruction they're receiving. So if you look down the chart: if we collect daily, that means after just two weeks we'll be able to sit back as a team and ask, is this student really benefiting? Have we matched the intervention to the student's needs? If we collect twice a week, then we're projecting that trajectory out a month; after a month we should have at least eight data points, which will again allow us to ask whether the student is actually benefiting.
Dr. Michelle H.: If we collect once a week, now we're looking at about every quarter, where we would have about nine data points. And once a quarter, that's only four data points a year; we really can't say much with that. Collecting data only four times a year, the best we can offer is a general impression, and it doesn't allow you to react to students' needs on a much more frequent, ongoing basis. Next slide.
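The timeline arithmetic on this slide can be sketched as follows, assuming a target of eight data points (one choice within the seven-to-12 range) and a 36-week school year:

```python
# Hypothetical sketch: how soon a team can make a sound decision, given
# how frequently progress monitoring data is collected.
MIN_POINTS = 8  # assumed target within the 7-12 range discussed

points_per_week = {
    "daily": 5.0,           # one probe per school day
    "twice a week": 2.0,
    "weekly": 1.0,
    "quarterly": 4 / 36.0,  # ~4 probes spread across a 36-week year
}

weeks_until_decision = {
    schedule: MIN_POINTS / rate for schedule, rate in points_per_week.items()
}

for schedule, weeks in weeks_until_decision.items():
    print(f"{schedule}: about {weeks:.0f} week(s) to reach {MIN_POINTS} points")
```

Daily collection supports a decision in about two weeks, twice weekly in about a month, weekly in roughly a quarter; quarterly collection never accumulates enough points within a year to say much at all, which matches the chart's message.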
Dr. Michelle H.: So, what are some characteristics of effective progress monitoring? Again, think about this as if I am going to be reviewing IEPs and trying to determine whether they are thumbs up, thumbs sideways, or thumbs down. What are some qualifiers I should be looking at? First, we want to make sure they're measuring the behavior that is outlined in the goal. If it is a reading goal, is what they're measuring reading, or is it something on a sidebar related to it? Are they using equivalent measures each time? That's where I talked about CBM measures typically having around 20 equivalent forms. If the measure is stable and we are measuring the student against it every time, then we are capturing the student's skill instead of error within the assessment. So those equivalent measures are really important.
Dr. Michelle H.: We have to make sure that it's being done regularly, with frequent data collection. We also have to make sure that the data is easy to collect and interpret. If we're writing these goals and the data is really complex and hard to interpret, first off, the data is not going to get collected the way it needs to be, and it's not going to be used to better inform instruction for that student. We want to make sure that we're only taking a short amount of time away from instruction. Again, less time testing, more time teaching. And we also want to make sure that it allows for that analysis of performance over time. Next slide.
Dr. Michelle H.: So the other thing that's really nice about having that data graphed is that we can think about whether there are reasons and purposes to collect additional data. If the student isn't making progress and isn't closing that gap, we might have a mismatch between what we're providing them and what their current needs are. We actually might need to give them more assessment. It might be permanent products; it doesn't necessarily have to be a test. But we have to collect more data in order to better inform and meet the student's needs, and to identify the skill deficits they actually need help with.
Dr. Michelle H.: One thing that's also really nice is if we have progress monitoring graphs for all of our kids. Say we have an intervention group of about five kids receiving the same intervention. If we look at their progress monitoring graphs and four out of five of them are showing good growth, but one of them is not, the good news is it's a good intervention; four of our kids are benefiting. What we need to dig deeper into is what it is about this one student that's a mismatch. That's where we might have to collect more data. We might have to rethink whether we've done a good job matching the intervention. Or the intervention might actually be right for the student.
Dr. Michelle H.: They just need more of it. So again, that dosage, can we give them more intervention time to help increase their skills? It also could be maybe the student is lacking some foundational skill that they need instruction on, that’s not allowing them to benefit from that intervention. So these are all good things to be thinking about of why and when we might need to collect additional assessment data, but you wouldn’t know that unless you were monitoring their progress and really looking at their growth. Next slide.
Dr. Michelle H.: So one of the things that I want to share with you is that the St. Croix River Education District in Minnesota has a really, really nice rubric that they use to evaluate IEPs. What I want to do is highlight what it looks like for their goals and objectives, as well as progress monitoring, because I think this is a tool that could be of interest to you when you think about how you would actually go about reviewing and looking at your IEP goals and objectives. Next slide.
Dr. Michelle H.: This is just an example. Their rubric goes from four to one. You can see down the one side all the criteria they would be looking for in goals and objectives. Can you go to the next slide? So this is what it looks like under goals and objectives, in a format that's a little bit easier to read. They have six areas that they expect to see in those goals, and then they rate each from four to one on whether the goal actually contains them. It's really interesting, because some of these reflect a lot of the information we already went through today. For example, number four asks: does the goal include the timeframe, the conditions, the behavior, and the criteria of acceptable performance? So it's really nice to have these things spelled out, so that you have a tool to evaluate goals against. Next slide.
Dr. Michelle H.: So this is an example of their rubric sheet for progress monitoring. Can you go to the next slide, please? And this is an example of the criteria they would use for progress monitoring. Some of it asks: is the data graphed? Does the graph have a descriptive title? Are the axes labeled? Is it consistent? Is there a trend line? Is it parent friendly? All of these are really nice criteria we would want to use in thinking about how to determine whether we are doing our due diligence in writing really good IEP goals. Next.
Dr. Michelle H.: So having some type of system allows us to rate and rank how our goals are written. It also allows us to look at kids individually and see whether their goals are increasing from year to year. We don't want to get stuck in that loop of reconstituting the same goals for kids. That was the whole court case with Endrew. We have to be really mindful that we are breaking that cycle, and one way to do that is to have a system in place to actually monitor those goals. Next slide.
Dr. Michelle H.: We also have to consider that if the goals aren't written in a fashion we would consider acceptable, that's really just a symptom of a bigger problem. There is no educator out there who says, I'm going to set out to write bad IEP goals. It just doesn't work that way. So we have to think about some of the reasons why. These are just some ideas; there are lots of reasons. It could be assessment knowledge: they don't really know what data to collect or how to use the data. It could be instruction knowledge: they may not know how to identify what the student's needs are, how to identify good interventions, or how to implement them with good fidelity.
Dr. Michelle H.: It might actually be student behaviors. What if the student doesn't show up and is never in school? That's a whole different issue affecting whether they're achieving their goals. Or what if the student is receiving instruction, but every time they are in that classroom their behavior gets disruptive to the point where they are asked to leave? Then the student is never actually getting the benefit of the instruction. Sometimes it's teacher behaviors: they're just not sure, and they need some training. Next slide.
Dr. Michelle H.: So, one of the things I want to point out that CORE has, which is so helpful, is that, as we talked about, there are often times when you need to go collect additional assessment data. CORE has a fabulous book called Assessing Reading: Multiple Measures. I use it, and I recommend it to lots of people. It has lots of different assessments across lots of different skills that you should be thinking about to help monitor kids' progress and identify the specific skills kids are missing. Next slide.
Dr. Michelle H.: Well, we know just collecting the data is not enough. So the other resource that CORE has done a really nice job with is the Teaching Reading Sourcebook, which has very usable, helpful strategies and instruction for all of the different areas in reading that students would need assistance with. Next slide. So what does this all mean? Really, if we have better ways of writing IEP goals and objectives, and a very clear way to evaluate and monitor them, it allows us to think about what needs to happen at a state level. We used this at the state level in Iowa, actually, to help inform our guidance. We put policy in place about what schools had to do, and the state put together professional development and training for schools so that they could write better IEP goals.
Dr. Michelle H.: Districts can monitor that and identify the areas of concern, because that is what teachers are going to need support on, which really lends itself to professional development opportunities, as well as policies and procedures schools might implement. For teachers, having clear IEP goals to monitor students' progress goes fundamentally to the charge of the job: how do we better meet kids' needs, close that gap, and give them the best instruction possible? For parents and advocates, we know that having clear goals and objectives gives us a level playing field for collaboration, where we are collectively looking at data together, we understand it, it's very clear, everyone is informed, and it's easy to share. So it really helps at all of those different levels. Next slide.
Speaker 1: Well, thank you, Michelle and Arun, for sharing all this information and providing us with this super deep dive into IEPs and the strategies to help us improve outcomes for our students with disabilities. Before we get to the questions, and we do have some that have come in, I am going to quickly thank our sponsor, CORE. CORE works closely with Pivot Learning, as Arun mentioned at the beginning of the webinar, to help districts and schools evaluate existing general education and special education systems and develop action plans to implement processes and practices that result in sustainable academic excellence for all students, including English learners and students with disabilities. You can learn a lot more about CORE's services at corelearn.com, which also links to the Pivot Learning site.
Speaker 1: As a thank you for joining us today, CORE is offering a 20% discount on the purchase of the two books Michelle just mentioned, Assessing Reading: Multiple Measures and the Teaching Reading Sourcebook. This offer is available through December 15th. All you have to do is go to the CORE store at corelearn.com and purchase the CORE Sourcebook package; it's right there at the top of the store, and it includes both books. When you get to checkout, enter the code Webinar 1018 and your discount will be applied. If you prefer to pay by PO, that's just fine; make sure to put that same code, Webinar 1018, on your PO when you send it in. This discount applies to as many sets of the books as you'd like to purchase, so it's a great time to get these resources into the hands of all of your teachers. I encourage you to take a look at those materials and consider purchasing them before the discount ends on December 15th.
Speaker 1: We do have just a few minutes to take some questions. So, we’ll go ahead and start here at the top. So Michelle, have the ABCs of CBM been revised?
Dr. Michelle H.: Yes, it's actually in its second edition. Let me see what that publication date is... you'd think I would know that. It's 2016, so two years ago.
Speaker 1: Okay, great. Let’s see. Are there any recommendations about which CBM should be used or specific CBM programs that are research based?
Dr. Michelle H.: Actually, that is a great question. What I would encourage people to do is go to the National Center on Intensive Intervention and look at their tools charts. If you are looking for progress monitoring tools in math, or behavior, or reading, they have done a thorough review of each of those tools. In order for companies to submit a tool, they have to submit all of their research and evidence, and then we review it based on that. So that is a really good place to go to look at what's available out there and the research to support it.
Speaker 1: Great, that's a great recommendation. Speaking of recommendations, do you have any for training teachers on how to develop the interventions that are going to really drive toward achieving those IEP goals? Aligning instruction to the IEP?
Dr. Michelle H.: Yeah, that's the million dollar question. I think the CORE resource book is a great one. It's a really hard question to answer; it depends on what the student needs, and it depends on the skillset of the teacher and what the best match is. Some teachers need a lot more support in providing interventions, and other teachers can basically do it on the fly because they're just that good. So again, I would recommend that people use the great websites out there that review interventions: the What Works Clearinghouse and other places. And I actually think the National Center on Intensive Intervention is reviewing interventions as well. I would look at what has been vetted and has demonstrated that it actually improves kids' learning.
Speaker 1: Great. And yeah, that is the million dollar question, as you said. CORE does provide coaching and training around that, so that's an option as well. Again, feel free to go to the corelearn.com website, where you can explore the services CORE offers around just that issue. You can also feel free to reach out to CORE; the phone number is there on the screen, and we'll include that information in the followup email that goes out. Unfortunately, we are at the end of our time. Michelle, you packed such a great amount of information into just a little bit of time, so we really appreciate that. And Arun, we appreciate your joining us to help frame this important topic for everyone. As a reminder, you will get an email tomorrow with a link to the recording and to the slide deck.
Speaker 1: If for some reason you don’t get that email because it ends up in your junk box, feel free to visit the CORE website, and we’ll be posting those materials there as well. And also mark your calendar for December 5th, we’ll be hosting another webinar featuring Dr. David Kilpatrick and he’s going to share some of the latest research on effective word level reading interventions. Registration is open for that and is also, they are on the CORE website. So definitely, take a moment to visit corelearn.com because there’s a lot of great stuff out there for you. Thanks again everyone for joining us today and alas. Thank you to Arun and Michelle, we really appreciate your sharing all of your insights with us.
Speaker 1: Have a great afternoon or evening everyone depending on where you’re at.
Dr. Michelle H.: Thank you.
Dr. Arun R.: Thank you.