Literacy Talks

Episode 117: What Your Reading Data Is Really Telling You



What if the data you’re collecting could actually make your teaching easier—not more overwhelming? In this episode, we unpack what reading assessment is really telling you and how to use it with clarity and purpose.

In this episode, we discuss:

  • The true purpose of different types of reading assessments
  • How to move from data collection to meaningful instructional decisions

Assessment can feel like one more thing on a teacher’s already full plate—but when used well, it becomes one of the most powerful tools for improving student outcomes. In this conversation, we explore common misconceptions about assessment, including the belief that one test can tell the whole story, and why understanding the why behind assessment matters just as much as the data itself.

Our guests share practical insights on how assessment should function within a larger system—supporting both classroom instruction and schoolwide decision-making. From listening to students read aloud to interpreting screening and diagnostic data, this episode highlights what it looks like when assessment and instruction truly work together.

Guests:

  • Andrea Setmeyer, School Psychologist & Professional Learning Director (and now Vice President of Marketing), The Reading League
  • Elisabeth Lamoureaux, Professional Learning Director (Leadership & Implementation), The Reading League

Show Notes
 Learn more about The Reading League Summit and resources on assessment:

The Reading League Summit

💬 Want more insights like this?
Subscribe to the Literacy Talks Podcast Digest for episode recaps, resources, and teaching takeaways delivered straight to your inbox!

Do you teach Structured Literacy in a K–3 setting?
Sign up for a free license of Reading Horizons Discovery® LIVE and start teaching right away—no setup, no hassle. Sign-up Now.

Coming Soon: Reading Horizons Ascend™
From Pre-K readiness to advanced fluency, Ascend™ offers a consistent, needs-based reading experience across every grade, tier, and model—so every student can build mastery, one skill at a time. Learn More.

Announcer:

Welcome to Literacy Talks, the podcast for literacy leaders and champions everywhere, brought to you by Reading Horizons. Literacy Talks is the place to discover new ideas, trends, insights, and practical strategies for helping all learners reach reading proficiency. Our hosts are Stacy Hurst, a professor at Southern Utah University and Chief Academic Advisor for Reading Horizons; Donell Pons, a recognized expert and advocate in literacy, dyslexia, and special education; and Lindsay Kemeny, an elementary classroom teacher, author, and speaker. Now let's talk literacy.

Stacy Hurst:

Welcome to this episode of Literacy Talks. I'm Stacy Hurst, and I am joined today by Donell; Lindsay is at a professional development opportunity for her district. Hello, Donell. And we have two really fun and interesting guests with us today. We have Andrea Setmeyer, who is not a stranger to our podcast, so welcome back, Andrea. And we also have Elisabeth Lamoureaux, and they are representing The Reading League today. So I guess we'll start out, for our listeners, if each of you wouldn't mind just giving us a brief background: how did you get to where you are today? Should we go in alphabetical order?

Andrea Setmeyer:

Sure, thanks. It's so good to be back here with you all. I always enjoy these conversations. I come to The Reading League by way of school psychology. I was a school psychologist for many years, and that influences a lot of how I think about literacy and literacy instruction. But specifically, the connection to assessment, which we're going to spend a lot of time on today, is that I didn't realize that educators didn't have the same training on assessment that I did as a school psychologist. I mean, there were some tests that I knew were specific for school psychs, but just that general knowledge of the purposes of assessment, and why we do things, and why assessment is really the other side of instruction, how they both go together. There were a lot of differences in our training. One quick example that highlights this is my first job as a school psychologist. I was also the MTSS coordinator for the building, and I kept having teachers refer students to me for a special education evaluation, and I would say things like, where's their progress monitoring data? And they didn't have any. And I said, okay, well, I'm going to really put my foot down: you can't refer a student without progress monitoring data. And the next day, a teacher came to me with a graph that went from A to B to C to D to E. They had tried to graph leveled reading levels in a straight line and just showed where the student's level was. And the disconnect there, I thought, oh my goodness, we have so much work to do to really understand what type of progress monitoring data we're talking about. So those lessons stay with me, right? That teacher was working so hard to meet her students' needs, and the system was letting her down. And so really thinking carefully about how we help teachers understand assessment is important to me.

Stacy Hurst:

I love that. And you used the word "system," which I'm sure will come up multiple times in our conversation today. And thanks for that very real example; I was relating to both you and the teacher in that scenario. As we learn more, we do better. Okay, Elisabeth, what is your background?

Elisabeth Lamoureaux:

Thanks. I started as a paraprofessional, then became a classroom teacher and a reading interventionist, and then shifted into some leadership work as an intervention coordinator for the district, and then as a co-principal and an assistant principal. So I've worn a lot of hats in education, and thinking about assessment and literacy has always been a passion of mine. But one of the things that I've really noticed, that has brought me to The Reading League and also to the work that I do every day, is around professional learning. The work is beginning to shift, and the work that I find to be really exciting is kind of breaking down these silos that have happened, right, the one-and-done professional learning, and starting to build bridges where we're shifting into implementation practices, taking the knowledge that we have and supporting educators in what this looks like in their classroom. I think that stands true both for assessment, as Andrea was talking about, and progress monitoring and using that to really support student outcomes, but also for professional learning topics in general.

Stacy Hurst:

Great. And that is directly related to your current role at The Reading League, right? Do you have an official title? I had it written down.

Elisabeth Lamoureaux:

Yeah, I'm one of the professional learning directors. There are a couple of us, and within that, each one of us has a focus area. My focus area is leadership and implementation.

Stacy Hurst:

Great, and very important areas to focus on. I was thinking, as you were describing your background, that you could draw a graph similar to the one Andrea mentioned, of your roles, starting at paraprofessional, to principal, and now your work at The Reading League. I think that is fantastic, and how you said that literacy has always kind of been at the forefront. As you were saying that, I was thinking, how cool would it be to have a principal or an assistant principal whose passion and focus was on literacy? How did you come to have that focus?

Elisabeth Lamoureaux:

You know, I think that's a great question. One of my first teaching jobs, actually, I took as a paraprofessional, because back then teaching jobs were really hard to come by, right? It was my foot in the door. So I took it at a high-needs turnaround school, and within the same year I was given an opportunity as a long-term sub, and then as a teacher. But when I took the classroom teacher position in fifth grade, so many of the students were non-readers, and I was not prepared for how to teach them. I did not know; I did not have any strategies. And at that school, there was a wonderful reading coach who supported me in a lot of different ways, as my therapist, right, side by side, coaching me how to learn the curriculum and whatnot. But then she also introduced a lot of the teachers at the school to a master's program that we all enrolled in to help us learn about evidence-based practices, and that kind of opened the door for me.

Stacy Hurst:

Great. How fortuitous. And I know Donell has some things in common with that. Donell taught high school, and I don't want to put words in your mouth, Donell, but you also realized something about the reading level of your students.

Donell Pons:

Yeah, absolutely. Very similar feelings, and it made me think, Andrea, the next question would be around, what are some of the biggest challenges you see to schools being able to implement and we've heard firsthand, Elizabeth, you talking about walking into a situation which sounds so familiar to many of us. What are some of your thoughts over the years?

Andrea Setmeyer:

Yeah, I think we ask a lot of our teachers. I've been in systems where we really ask teachers to administer the assessments, gather the data, put it into a spreadsheet, interpret the data, and make instructional changes, without really having adequate time to do any of those pieces well. And so I think that's where I see the most struggle: when a system is expecting maybe too much of a given teacher, depending on how much time they have, etc. So we need to think about equipping teachers with the right support, with the right system in place. I also see schools really struggling to incorporate all of the required tests, and then teachers saying, well, those aren't actually helpful for my day-to-day planning, so I also need to do these other pieces. There's just a real lack of clarity around what the purpose is, and a lack of knowledge of all of the different types of assessments that we're asking students to take within a given week. I would say those are the top two things I see schools struggling with.

Stacy Hurst:

Which kind of helps. Oh, sorry, go ahead, Donell.

Donell Pons:

Well, no, I bet we were both thinking the same thing, Stacy. But it's really interesting just hearing Elisabeth, and then you kind of bringing this up again, saying, you know, we're asking a lot of our teachers, and just the time in it, but then it's also that understanding of what it is for, right? What we do with it, the type of training that it requires. And Stacy, we've had this conversation many times, the limited hours you have in a university setting to even provide that training.

Stacy Hurst:

Yeah. So I think the fact that The Reading League has such a great resource for implementation and training is a good thing for schools and districts to know. If you're listening, reach out, use them; they're highly knowledgeable. And as you were talking, Andrea, I remembered that we're also here to talk about the Reading League Summit. So maybe before I ask my next question, talk a little bit about the summit. Can you tell us some details, dates, location?

Andrea Setmeyer:

I'd love to. So the Reading League Summit really is a unique experience. We host our really successful national conference every fall, and that feels like a choose-your-own-adventure conference, right, where you come in and you have a keynote, but then you get to pick which sessions or strands or topics are really of interest to you, and you come away with learning that is tailored to what you're interested in. And it's really fabulous. I love our conference; it's two and a half days, it's big, and there are lots of really great communities and people to connect with. So that's awesome. But the summit was created for a specific purpose, which was to highlight a need in the field. We started by bringing together the communities that were advocating for the science of reading and the communities that were advocating for good instruction for English learners and emergent bilinguals. We had the same goal, but we were advocating in different ways, and sometimes it sounded like for different things. And so we brought everyone together and said, the students are more important than these disagreements. How can we talk to each other? Where can we find common ground? I bet we agree on more things than we disagree on. So let's focus on that and start to move forward together for the sake of the students. It was highly successful, a really kind of groundbreaking moment within the science of reading field. And so we focused on that topic with those communities for two years, and then opened it up: what other needs does the field have in terms of conversation? And we picked assessment this year. Maybe I'm getting ahead of myself, but to the questions you asked: we're going to have it in Syracuse, New York, at the beginning of May, May 5th and 6th. It's a two-day experience, but they're slightly shorter days, so it starts about midway through day one, and we wrap up pretty early on day two, so folks can get home. But it's unique in that we're all in the same room together, learning together from these assessment experts. You'll hear a lot of conversation and dialogue, and in addition to some presentations, we'll have data workshops for you to really apply what you're learning to some real literacy data. It just is a unique, fun, collaborative learning experience.

Stacy Hurst:

Yeah, and Donell and I have both attended summits, and I love that I'm going this year; I'm so excited for that. And for those of you who are interested, the agenda for the summit has already been released, and we're going to talk about some of those sessions today. As Andrea mentioned, assessment is the focus, which is so important for many reasons. And it sounds like you chose that based on all of your interactions with educators and just understanding systems and what was maybe lacking in a system. Anything to add to that?

Andrea Setmeyer:

I just think that we've come a long way in terms of knowledge about reading development and knowledge about reading instruction. I'm not saying we're done; we have work to do, but we've come a long way as a field. I now get teachers on the street, right, stopping and talking to me and saying, oh, do you know Linnea Ehri? And tell me about orthographic mapping. There's a sophistication in a lot of the language we can use to talk about reading that has evolved in the last 10 years. I think we still have some work to do in assessment. I don't find that same sophistication yet in thinking about assessment and how it matches what we know and what we've learned about the science of reading, but there's a whole science of assessment as well that teachers need to be familiar with. And so that's why we chose to lift it up, because assessment will be the shining light that guides the way for our instruction, if we can fine-tune instruction based on real-time data of where our students are. So we have to pay attention to that too.

Stacy Hurst:

That's awesome. And Elisabeth, what kinds of assessment interactions have you noticed in your work with professional learning?

Elisabeth Lamoureaux:

I think there really is a lack of clarity around the purpose of assessment, right? And a lack of systems for assessment, too. I'm finding that leaders can almost get hoodwinked by the developers, right? There's a belief that one assessment can do it all, and that more assessments are better than fewer. So it's about helping them navigate the Goldilocks principle, the just-right amount, and being really strategic with the time we're spending assessing, because that's time away from learning. Navigating all of that takes some strategy, some time, and a thought partner; we can't do it alone. But we also have to be really clear with our teachers about what the plan is for assessment, to support them in understanding what we have, why we have it, and how we can use it to inform our decision making.

Stacy Hurst:

That is so important. And I noticed that the keynote for this summit is by Christopher Schatschneider, and it's titled "Reading Assessment: Why We Do the Things We Do." So I imagine that will provide some level setting for us all. I think we're all relating to what you just said too, Elisabeth; at some time or another, we've felt confusion around those assessments: are we taking too much time to assess? Where does it fit into teaching? So to that point, assessment can sometimes seem really overwhelming for teachers. What are some of the biggest misconceptions, would both of you say, that educators have about reading assessment? You mentioned not knowing the purpose of it. Is there anything to add to that?

Unknown:

I would just add that a lot of times people think that one assessment can tell the whole story, right? And there's really no single measure that gives you everything you need to know about a student's reading development. So that is a little bit of a shift in thinking and can sometimes be hard to digest.

Stacy Hurst:

Yeah, that's important.

Andrea Setmeyer:

And I would add that some of what Dr. Schatschneider is going to talk about in his keynote is why certain tools have been developed in the way that they are. I think teachers sometimes struggle to understand the importance of validity and reliability and some of these psychometrics that we use, which then constrain how we can give a test: if it's standardized, I have to give it in a certain way, or I have to time it and stop the student reading at a certain point. It's unintuitive for a teacher who really wants to spend more time giving corrections to the errors they just heard, or asking comprehension questions afterwards. And so that's really hard, and it doesn't feel like what I'm trained to do as a teacher. But I don't think we give teachers the gift of why those things are important. Why does it matter? Well, it matters so that I can compare this student's progress, or the student's performance, to other similar students. That answers a different question. It might not be what to do next, which is what the teacher often wants to know, but it answers a different, really important question that I might need: how intense is the need for this intervention, or are we closing the gap, and how quickly? So I think giving the gift of that why behind it, which is part of what Dr. Schatschneider will do, shows why we have so much research and so many constraints on some types of reading assessment: they actually serve a different, really important purpose.

Stacy Hurst:

Yeah, I was really relating to what you were saying. In my role as a professor of pre-service teachers, when we give assessments, I learned very quickly that I have to clarify: this is not a teaching opportunity, we're not using scaffolds during this particular assessment, and this is why. I think that's really important information, so I'm really glad you brought it up and that he will be addressing it. I look forward to that. So to make it maybe a little less overwhelming, and I know we cannot solve all the problems today or even in a short answer, but when assessment is working well, and let's just keep it at the classroom level for right now, what does that actually look like? Can you give some examples, or how would you address that?

Unknown:

I love that you're asking this question, because I do think we have a tendency to talk about assessment and data in super vague language, like "data-based decision making." But what does that really mean? What does that look like? One thing I want to highlight is that it looks different at different grade levels. Because we're developing reading and writing skills when students enter school, typically in preschool or kindergarten, and those skills evolve and change and grow over time, assessment also changes and grows over time. It's going to match the skills that you're working on. So in a kindergarten classroom, you're looking at really granular, often very fine-point skills like letter naming and letter-sound recognition. You're looking at vocabulary and oral language separately. You have to kind of get into the weeds on all of those skills. So one answer to that question would be that teachers are using what we know about students' individual skills, especially in those early grades, K, 1, and 2, to form small groups: my small groups are based on students' need for specific phonics instruction or language and vocabulary development, and we're thinking critically about what we know about those individual skills to plan intervention.

Donell Pons:

This is really interesting, because I think it's something that needs clarification at this particular point. We've been talking about different ages of students, so we've got to be thinking about time and place and the grade level of the student, and what the information is that we're gathering and what we'll be doing with that information. Because I would often run into a situation, say, in middle and high school, with a student who had been receiving services, scaffolds, and supports in order to meet grade-level curriculum in reading and writing, and they would go into a test, and it wasn't an assessment of their ability to do that thing; it was their ability to perform at grade level. And then they would say, well, they're not allowed their supports. But if you read the fine print on the front of the test, depending on what kind of test it was, they were allowed, and some teachers were shocked, because they had been denying those to their students, saying, oh no, we need to see what you can really do on this test. Well, that's something different. You'd have a conversation with a teacher to say, what is it we're trying to measure? And suddenly the realization would come, because the teacher had never had a conversation like this before. The eyes would get wide, and some of the teachers were horrified: I did not know; for years, nobody's had this conversation with me. You hate to see that happen, right, for a teacher.

Unknown:

Donell, I love that you're lifting that up. You're exactly right. What's the purpose of this test? Do you want to know how well the student can decode, or understand the language, or demonstrate their understanding of a topic in social studies or science, right? Those are three different purposes, but you might be measuring all of it if you just plop that paper down in front of a student who's still striving to read. Yeah.

Stacy Hurst:

Are we comparing how they're doing to their peers? Which is another kind of assessment, right? That's right. And it's worth mentioning The Reading League Journal here: their most recent edition, the January/February edition, is all about assessment. As we were talking, you brought to mind Dr. Patrick Kennedy's article. All of them are good, by the way, but he has a very good way of making terms like validity and reliability very palatable. So if you are questioning what importance those two terms play in assessment, or even what they are, if we need a little retrieval practice, that's a good place to go to refresh our memory.

Donell Pons:

Stacy, he was really good about telling us all that goes into the back end. Obviously he did not bog us down with a lot of the details, but he very kindly gave us a peek into how much goes into it, because you're measuring a vast number of students with very different backgrounds, and it's trying to do that correctly, right, and honoring the fact that those students have different backgrounds. And I thought of you, Elisabeth, as you talked about assessment too, that sometimes when you're in a classroom as an educator, you see it firsthand. I remember taking a whole group of my middle school students down to take an assessment; they're seventh graders. It was a writing prompt, and I read the writing prompt, and it had nothing to do with any of their lives. It was so foreign. And I remember standing there going, what are we going to do with this prompt? And of course, you know, the students at the top of the class, there were quite a few of them, dutifully sat with it for a minute and thought, okay, here's a way I can approach it. But most of my students sat there and struggled. So it was a quick conversation about, sometimes a prompt speaks to you, sometimes a prompt does not speak to you, and how do you approach that? But that, for me as a teacher, was a real moment to say, wow, the assessment really matters, and how they put it together for the group of students. That's important.

Stacy Hurst:

And then taking that even further to say, how do you act on that data? How are you going to use it? One thing that stands out in the agenda is the emphasis on taking action on multiple levels. And so, Elisabeth, this summit includes a panel, and this is the title, "Taking Action on Classroom Data," which is very clear, with Matt Burns, Tiffany Hogan, and Sarah Siegel. What kind of conversations are you hoping that educators will hear in that session?

Donell Pons:

It's a great lineup.

Elisabeth Lamoureaux:

That's great. Yeah, I'm hoping there's a better understanding around how to interpret our screening data, right, and what decisions we can make from that. Because, again, this comes down to, I feel like we're a broken record a little bit, but purpose: thinking about how we can use our screening data to make decisions, versus how we can use some of that diagnostic data to make decisions. When are we planning small group instruction, versus when we might need a class-wide intervention? And how can we be more strategic with our time and our energy to increase student learning?

Stacy Hurst:

Well, definitely great outcomes. So for those of us who will be there, I'm looking forward to that session. And Donell mentioned a great lineup; I'm not as familiar with Sarah Siegel, and I'm looking forward to hearing from her, but I know Matt Burns and Tiffany Hogan can translate all of this into actionable takeaways for teachers.

Unknown:

Stacy, I'm curious if you're familiar with Dr. Carol Connor's work?

Stacy Hurst:

Oh yes, of course.

Unknown:

She's been really bringing that research to life. So thinking about how we're meeting students' language comprehension needs in addition to their word recognition needs, and providing adequate time to develop both of those types of skills, is what she'll bring to the conversation. And then I also just want to acknowledge that Margaret Goldberg will be the moderator for this conversation, so the panel is moderated by somebody who brings an authentic voice to the role. Margaret Goldberg, for those of you who don't know, is a literacy coach in a school district in California, so she's having these conversations with teachers about how to use their classroom data every day. There will be an opportunity for her to interact with these panelists, so that we don't get caught up in just their research, but always ground it by bringing it back to practice: what does that decision look like? What do I need to know to move forward with this student?

Stacy Hurst:

I love that, and we have had Margaret as a guest before on our podcast, so we recommend revisiting that episode if you haven't. We also had another episode with FOMO in the title, because Donell went to the summit and I didn't and wanted to be there, so you could revisit that one for previous summit information. But I love the takeaways that you're talking about from this particular session. I think that will be so fantastic.

Announcer:

For over 40 years, Reading Horizons has helped educators build strong literacy foundations for students. Now, with Ascend, they're supporting every learner in every tier through one unified solution: Ascend Mastery, a comprehensive pre-K through 5 core literacy program, and Ascend Focus, an adaptive K through 12 intervention. Learn more and explore how you can bring Ascend to your schools at readinghorizons.com/ascend. Implementations begin in the 2026 school year.

Stacy Hurst:

Andrea, what does taking action on classroom data actually look like for a teacher in real life?

Andrea Setmeyer:

Yeah, I think it's having some confidence in interpreting your beginning-of-year screening data; we're seeing that as a requirement in more and more schools. So thinking about what that means in terms of my students' reading instruction and language needs, and being able to put that together in context. It also means making some whole-group intervention decisions. Matt Burns is going to talk to us about class-wide intervention as a concept: you may have students coming into your classroom where the majority of them have skill gaps, and the most efficient way to meet them might be a class-wide intervention. So we're going to talk about looking at that first before diving into small group needs. But then we are going to spend some time talking about small group needs, and Dr. Hogan is going to talk a little bit about language needs and language comprehension needs, even in those early grades, in addition to foundational literacy skills, or those word recognition skills. And then Sarah Siegel is going to talk to us about planning for both independent work and teacher-managed work that really meets your students' needs most effectively.

Stacy Hurst:

And it sounds like that will really help simplify some of the confusion for teachers and administrators. A lot of teachers feel like they're collecting tons of data. Some of it is, I will use air quotes, "imposed upon" them, or they see it that way, and some of it is data that they're gathering themselves. But how can we help teachers move from collecting all that data, or even noticing all the data they are collecting, to actually using it to guide their instruction?

Unknown:

I would say, better understanding the why, right, the purpose behind them, and then also having some time and support and structures in place to really support the data. Data protocols are really helpful; taking that data and trying to make it actionable can be really overwhelming, so having a protocol in place for different data types is one great step.

Andrea Setmeyer:

I agree. And I think having a coach, or somebody who can sit next to you and look at the data, really builds confidence, right? So you just asked me what taking action on classroom data looks like, and I started talking about screening, and then I think I got off track, because what I really want to come back to is those instructional decisions. If there's a coach sitting with teachers looking at the data together, they can look at the spelling patterns in a writing sample, for example: are you noticing this? I'm making a connection to the phonics skill you just taught, and it looks like these students got it and these students didn't when I'm looking at these writing samples. Or think about looking at progress monitoring data for an intervention: okay, you've been working with a student for several weeks on this skill; let's look at their progress monitoring data together. I want teachers to be really confident in answering those types of questions and making those decisions. And so helping teachers move from collecting data to using it, I think, really relies on coaching, on somebody in the building. It could be a school psychologist or a principal or a speech-language pathologist, but somebody who can come alongside you and really look at your student data in real time and help you analyze it and make instructional moves.

Stacy Hurst:

Yeah, that's really good.

Donell Pons:

What I'm hearing is the environment, right? We're talking about a whole ecosystem that would be helpful and supportive in how we look at and use our assessment data. And maybe, perhaps, what we're actually working toward is having a full, well-rounded ecosystem. Maybe we're really good already at the implementation, or we think we are, we've worked really hard at that, we've done a lot of training, we feel really good about the program we have in place, but the assessment piece is lacking. Now is our opportunity to really have the full complement of what it would take. It's interesting, yeah.

Stacy Hurst:

And as you both were talking, I was thinking about the student-level focus that you both had in answering those questions, which is important, because we need to meet the needs of individual students. And I did notice on the summit agenda that there is a panel focused on student-level data with experts like Jessica Toste and Mark Shinn as well. So Andrea, when teachers look at that individual student data, and this is you having an opportunity to coach anybody who's listening, as a school psychologist, what are some of the most important questions they should be asking when they're looking at that data?

Andrea Setmeyer:

So I think first, how is this student similar to or different from other students in my classroom, but also other students that this assessment might have been normed on or created with? Similarities and differences in terms of language of instruction, languages that they might speak, and their instructional history all play a role in how they're going to perform on a specific assessment, even on a particular day. So I think it's important to keep in mind when you're interpreting data that this number is not set in stone; this number represents an entire background, an entire person, and this was their skill or performance on a specific day. I also think that when we look at individual student data, sometimes we want to talk a lot about cognitive skills in general, like what's their working memory and what's their capacity, but really the most essential questions to narrow in on are: what are the specific literacy skill gaps? What are they showing mastery on? And how far back do I need to go to ensure that they have that strong foundation, and then build your instruction up from there? Those are actually much more predictive of reading success than knowing an IQ score, for example. We're not good at using that to match instruction, but knowing those really specific, granular skills at the individual student level, and that's usually what we call diagnostic assessment, that's really, really powerful for helping a student move forward.

Stacy Hurst:

Yeah, and help a teacher know what to instruct for that student. So maybe this is a similar question, but Donell, you already mentioned systems, and Elisabeth, how do you think strong assessment systems help teachers make better decisions regarding instruction and intervention?

Elisabeth Lamoureaux:

Yeah, I think having a strong assessment system is like the foundation, right, for our schools, the foundation for MTSS. They provide clarity for teachers with meaningful information. When each assessment has a clear role, whether it's screening, progress monitoring, or a curriculum assessment, teachers can use that information to adjust their teaching, to inform their small groups, to identify when a student may need more targeted intervention. And when those systems are all working really well, assessment doesn't feel separate from instruction, and I think that's an important piece of all of this. I also think about how that supports teachers by helping them know what to teach and helping them use their time more wisely, right? There's never enough time in the day, so by targeting who needs additional support, and most importantly, whether what they're doing is actually working, I think those assessment systems are absolutely critical.

Stacy Hurst:

Yeah, I love that. Especially piggybacking on what Andrea said, your comments, Elisabeth, reminded me that when you're comparing one student to others in the class, going back to Donell's question about environment, as a teacher you can not only make instructional decisions for that individual student, but also about your teaching in general, maybe for the whole class. So I think that's really a powerful tool. And as I'm thinking about that, especially intervention, sometimes other people are involved in providing aspects of that intervention for your students. I know that day two is focused on assessment at the level of schools and districts, so why do you think it was important to include conversations about district-level and policy-level data at the summit, especially when we're talking about things like intervention, and maybe accountability, assessment used as accountability? I guess maybe I'm answering my own question here.

Unknown:

So we know that there will be a lot of teachers in the audience, and that they absolutely need to know what to teach next and how to interpret those classroom data pieces, think about small group planning and those really practical pieces. And I really want to honor that; that's so important for teachers. But school and district leaders have different data needs, right? They don't need to know which students in your classroom are still working on segmenting and blending three to four sounds. They don't need to know that level, but they do need to have that big picture of the data needs in a school or district so that they can allocate resources, so that they can look and compare, right, from one system to another, one school to another. They have accountability purposes. They need to think about: are we providing education equitably across different groups of students? That's an important question for them to have easy access to. They can also use that big-picture data to compare progress over time. We implemented a new curriculum, and that was rough while we were implementing it, but now, are we getting better in our use systematically across the system? And I want teachers to know that that data is important. Just because it doesn't answer my question about what to do tomorrow doesn't mean that it doesn't have a purpose and that it's not important to the system. And so I want teachers in the room while we're talking to district-level leaders about how they're thinking about and using data. There are also some components of selecting assessments where I don't think every teacher needs to be well versed in the technical manual of a test that a district is purchasing, but somebody at the district level needs to be, and needs to understand all of those technical adequacies and components, and so I want them to hear that that's a different type of expertise.
And then the policy piece we included because we're seeing more and more policy requiring assessment, and there's some real opportunity there to screen students earlier, to be using the word dyslexia more often. But when it's not aligned with the research, or when it's not well aligned with the other requirements in a certain system, we're seeing that backfire, and really it's causing teachers to lose trust in these assessments when we're not being thoughtful about that. So we wanted to lift up some folks, state-level leaders and researchers, who are thinking carefully about policy. Dr. Kymyona Burk is going to moderate that panel and really talk about where we are doing this well and where we need to make some changes to be more effective.

Stacy Hurst:

Which is increasing knowledge at every level. And we see that in the state Donell and I live in; there's been revision to recent legislation that doesn't always reflect that alignment with research, but if teachers and other experts know, they can lift it up and point it out to those policy makers, which I think is really important. And Elisabeth, because you've had such a variety of roles in education, if it's okay, may I ask you to give your perspective: what are the differences in the way that you looked at assessment as a classroom teacher and then as a principal?

Elisabeth Lamoureaux:

Oh, I like that question. You know, as a classroom teacher, I definitely was trying to do the best I could with my students, right? I was focused on taking that data and taking action. But I agree with what Andrea said: when you're in the classroom, you don't always know how leaders are looking at data differently. I think the resource allocation is a shift in thinking, looking at the needs of your students, and thinking about how, as a leader, you may put more supports in place in a certain grade level or in a certain classroom based on what the data is showing. And I don't know that everybody takes that into consideration, but what a great tool to have at your fingertips and to be able to justify that, even if it's just for certain areas of the day.

Stacy Hurst:

Which makes sense. So maybe an instructional coach or literacy coach could spend more time with specific teachers who feel like they need more support and whose data is showing it.

Unknown:

Yes, or perhaps there are other staff who are underutilized at a certain point of the day, and they could push into that classroom to provide some targeted support during WIN time or the literacy block, or just little pieces. But I think that can always guide your decision making.

Stacy Hurst:

Yeah, that's great; thanks for answering that question. What are some mistakes that districts make when they're implementing assessment systems?

Unknown:

You know, I think one of the big mistakes that I see most often is around taking the screening data, or even state assessment data, and then automatically enrolling students in an intervention without having the information you need about which intervention would be most appropriate for them, right? So sometimes it could just be, oop, all of these kids were high risk, and so they're going into this intervention, and this intervention could be a generic thing, right? It could be a study hall. It could be who knows what. But it would be hard to correctly target that intervention without having all of that accurate diagnostic information to guide you.

Stacy Hurst:

Yeah, I've actually seen that in action, and it isn't one size fits all. It's like they skip the diagnostic part, or they don't know how to interpret that diagnostic information.

Unknown:

And the other thing I would add that I see as a mistake is not revisiting or updating our assessment plans regularly, right? New things come in, new curricula come in, we are always learning more, and so we should constantly be thinking about: do we have too many of certain types of assessments? Is everybody clear on how we're using the data? Do we have something missing? We just had this conversation with a local district about how they're not assessing language comprehension at all. What are we doing here, right? So I think those are really important pieces to consider.

Stacy Hurst:

Yeah, I appreciate that also, because the knowledge of the people who are helping to inform those decisions hopefully improves. I'm just having flashbacks to when I was a literacy coach, and anybody who's listening who worked with me as a literacy coach knows this: you do make changes based on new learning, and I would be very frustrated with the system if we just stuck with what we thought, you know, just because we thought that at one particular time. So I love that the summit not only includes panels, but also those hands-on workshops, which I really appreciated in the design. And in particular, some are going to be led by Adria Truckenmiller and Anita Archer. So what will educators experience in these workshops that might be different from a traditional conference session?

Unknown:

Yeah, so this was planned specifically based on feedback from past summits and conferences. We always want to be sure that we're drawing that line from research to practice, and this is one great way to do it. So after a couple of panels, where you really listen to some experts share some thinking about research with you, we want you to have a chance to apply it at the summit. Adria Truckenmiller will do two workshops that share a real school's language and literacy data, where participants will be able to look at that data, like beginning-of-year screening data, in both language and foundational reading skills in an elementary school, and then have some prompts to think about, right: how many of your students are meeting benchmark? What percentage are predicted to have success with the core curriculum? Which number on this chart are you looking at to answer those questions? Which students do you have further questions about, and what additional information would you need to make small group decisions? And she'll bring some additional data and talk you through that. So she's going to do that twice, thinking about both the classroom data lens and that district data lens, the accountability and equity and big decision making that happens at the district level. And then most of the summit is really focused on literacy data, so monitoring the development of literacy and language skills, but Dr. Archer is going to give us just a really gentle reminder about the importance of monitoring our instruction along with students' reading skill development. So if you are intensifying your instruction or intervention to meet student needs, and you're not seeing their progress improve like you're hoping, you might need to take some data on your instruction. You might need to think about how many opportunities to respond the student has in this particular group.
Would it be better for them to be seen in a smaller group or a different group so that they could have more opportunities to respond? And would that then increase their skill acquisition? So she's going to be thinking a little bit about what to look for in terms of effective instruction and intervention so that we can further tailor our intervention.

Stacy Hurst:

Great. I'm smiling extra big on that one because I recently completed the Anita Archer Academy, and I had a colleague of mine, another professor, come in and take tick marks of how many opportunities my students had to respond, and you will appreciate this, during a review of the difference between phonological awareness, phonemic awareness, and phonics. That was really helpful. I love that you planned for those actionable kinds of things, because I was mindful of that, but until I had the quantifiable data from the observation, which is data, right, I didn't realize, oh, wow, that's how many response opportunities my students are having. Could I improve on that? Or is it that I'm not doing enough, or something like that? So I think educators will have a lot to take away. Specifically, what do you want them to take away from this opportunity? Because I know you have been thinking a lot about assessment and its implications.

Andrea Setmeyer:

One of the threads I hope teachers take away from this is that reading and writing are expressions of language, and so we're going to talk really explicitly about language throughout, meaning oral language in addition to written language. And so on every panel you're going to hear somebody that can really highlight that getting data and thinking about language and language comprehension is going to assist you in analyzing and looking at your literacy data.

Stacy Hurst:

Great. Elisabeth, what are you hoping they'll take away?

Elisabeth Lamoureaux:

I just hope that they, one, come with open ears and open eyes and leave with additional clarity, and think about how this can improve their classrooms. And I know we're going to have some leaders there too. One of our big goals was to have teams come to work together, right, and then reflect on how this can improve their overall systems. So I'm excited to see the impact of that work.

Stacy Hurst:

Great. One thing I know, because you have been focusing on assessment at The Reading League, is that it's been the topic of a lot of conversations over the past maybe six months or a year, and there will be, unfortunately, many educators and administrators who won't be able to attend. So in thinking about assessment broadly, if a teacher who's listening to this podcast today wants to improve how they use assessment tomorrow, what is one thing that they should start doing?

Andrea Setmeyer:

From my perspective, I think one thing you could start doing, especially in those elementary grades, like kindergarten through fifth grade, is listen to a student read aloud for one minute using those grade-level curriculum-based measurement passages, what we often call oral reading fluency passages. There is such a richness in the research behind that measure, and a lot of research on the application of that measure. One study found that teachers who used that oral reading fluency measure, that one minute of listening to their student read aloud, consistently had better student reading outcomes compared to teachers who did not use that measure. In particular, it's not just about having a screener or a progress monitoring tool. There really is something special about listening to your student read aloud on that grade-level passage for one minute, and we're going to talk a little bit about that at the summit. But that's one thing that I hope teachers really think about, especially as we have more and more computer-based tests that students are given, that we don't lose sight of the value of listening to your student read aloud for one quick minute. It almost feels too good to be true that you could get so much information out of that one-minute read, but it's such a value.

Stacy Hurst:

Yeah, that's great. And coding it, I think, adds, I mean, that's part of the assessment, not simply listening to your student read, but coding it, as we do in CBMs. I think that's important. Elisabeth, how would you answer that question?

Elisabeth Lamoureaux:

Oh, I think the same. I really think there's a lot of power in one-on-one assessments, right, in early childhood in particular. A lot of schools and districts have shifted to some of the computer-based testing, and I understand the efficiency that can lend itself to, but I think the value in listening to students is top notch, and that's really what's going to inform your decisions. Being able to take note of some of the little nuances that happen when you're sitting one on one with students can help make decisions so much clearer for you in terms of next steps, versus solely relying on computers to give you all of your information.

Stacy Hurst:

That's great advice. I know Donell and I have talked about this very thing before as well. You can't tell when a student takes their eyes off the page if they're just on a computer, and there are limitations as far as oral language goes, in terms of what computers currently can and cannot ask of a student and what kind of data they can gather from that. And the other thing I would add to all of that for our listeners: again, as your knowledge improves, those decisions are going to be easier to make. When you look at that data, you're going to know how to apply it, so we want to continue to increase that knowledge, which I think the summit is a perfect opportunity for. So I'm so excited for that. Donell, I hope you don't have FOMO. We've talked about this before, too; Donell is so much better at this. When she was at the summit that I could not attend, she was giving me practically hour-by-hour updates, which was very distracting, in a good way. But then I fail Lindsay and Donell when the expectation is that I will do that. Donell was a reporter in her former life, so she has honed that skill much better than I have. So I'm looking forward to being there, and Donell, we'll just talk about it when I'm back. I just want to set expectations accurately. Andrea, Elisabeth, thank you so much for joining us today. This has been a great conversation, and I hope that those of you who feel like you want to attend the summit and maybe haven't registered yet can join us, hopefully it's not full, in Syracuse, New York. Any other parting advice or anything to add as we conclude?

Unknown:

Just that we're really excited, Stacy. I'm glad you'll be there in person. It really is going to be incredible to have 19 assessment experts bringing a real depth of knowledge about assessment that I haven't seen at a lot of conferences. And so I hope that people really walk away with some practical tools, increased knowledge, and a commitment to keep applying what we know from the research on assessment hand in hand with what we know about reading. Yeah, great, yeah. I agree: knowing better, doing better, and continuing to develop our systems and practices.

Stacy Hurst:

Well, a great, concise way to wrap the conversation. Thank you, Elisabeth, and thanks to all of our listeners for joining us for this episode of Literacy Talks. We hope you'll join us for the next episode.

Unknown:

Thanks for joining us today. Literacy Talks comes to you from Reading Horizons, where literacy momentum begins. Visit readinghorizons.com/literacytalks to access episodes and resources to support your journey in the science of reading.