Recently, I have seen administrators across Michigan flooding Twitter with celebratory tweets about the statewide switch to the PSAT 8 as the high-stakes accountability test for 8th grade students and teachers. Most of these tweets sound very similar: The PSAT 8 is the test we need. It provides schools, parents, and teachers with the data they need to improve. Putting aside the eerie similarity of the wording, let’s be clear about something: they are all very wrong. We need to have an honest conversation as a state about how these ideas are misguided at best and dangerous at worst.
To begin that honest conversation, I’ll offer my contention that the PSAT 8, in fact, does none of the things these administrators and organizations claim it does. I believe this is true for at least four reasons:
1. The PSAT 8 has a fundamental purpose. It is meant to separate students.
The SAT Suite of Assessments, created by the College Board, is designed to track one thing, and one thing only: college readiness. These tests are meant to separate students into two groups: college ready and . . . not. Here is where we hit our first snag. The policy of using the PSAT 8 for accountability purposes is in direct contention with the purpose of the assessment. The PSAT 8 is designed to be predictive of student performance on the SAT, which is in turn designed to indicate a given student’s likelihood of succeeding in entry-level college coursework. Success is, in this case, defined as a probability of receiving a B in a credit-bearing course during freshman year.
There are a number of issues at play here. First, grade 8 educators have a responsibility to prepare students well in mathematics as defined by the 8th grade mathematics content standards in Michigan. That, in no uncertain terms, is their mission. Implementing the PSAT 8 will drive schools to prepare students to perform on the PSAT 8, not on the end-of-year learning targets laid out in the standards adopted by our state Board of Education. Second, it is always dangerous to use an assessment for a purpose other than its designed and intended one. This assessment is meant to measure college readiness as defined by the College Board, not students’ thinking about and understanding of given mathematical content. Third, this separation and labeling of students does nothing to help those who are deemed “not college ready” by this assessment. Indeed, it likely dooms them to remedial tracks throughout high school, resulting in lowered expectations for them across the board. And this is the real danger. The assessment contributes to the gaps in student performance and learning that we currently see.
2. The PSAT 8 is a norm-referenced assessment.
This concern is a fundamental one in many ways. Our education system is currently based on standards, or criteria, to be met by the end of each successive school year. In such a system, assessment is best designed to determine student understanding in those areas and students’ abilities to meet the criteria laid out for them. The PSAT 8 is not designed in this way. It provides an overall score in mathematics as well as several sub-section scores, and it often tells students the percentiles in which they reside. This means that the data are compared across the national cohort of students taking the test at that time, and scores are assigned relevance based on the performance of other students, not on any fixed criteria.
Using normative data for high-stakes decisions within a criterion-referenced system is nonsensical. Educators’ time is better spent considering how students’ understandings match the criteria we wish them to meet than considering how students compare to the average performance or to other students in their peer group. Certainly, there are “alignment documents” that give indications of which standards fit into the buckets of content on the assessment—but that is not the same as using an assessment designed to assess the criteria themselves.
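To make the distinction concrete, here is a minimal sketch in Python, using invented scores and an invented cut score (neither reflects the College Board’s actual scaling or any real standard-setting process):

```python
# Illustrative sketch: the same raw scores, read two different ways.
# All numbers here are invented for illustration.

scores = [12, 15, 18, 18, 21, 24, 27, 30, 33, 36]

# Criterion-referenced reading: compare each student to a fixed standard.
# This cut score is hypothetical; real proficiency cuts come from standard setting.
CUT_SCORE = 24
criterion_results = [s >= CUT_SCORE for s in scores]

# Norm-referenced reading: compare each student to the rest of the cohort.
def percentile_rank(score, cohort):
    """Percent of the cohort scoring strictly below this score."""
    return 100 * sum(c < score for c in cohort) / len(cohort)

norm_results = [round(percentile_rank(s, scores)) for s in scores]

print(criterion_results)  # unchanged no matter who else happens to test
print(norm_results)       # shifts whenever the cohort changes
```

The criterion-referenced reading of a score never changes; the norm-referenced reading of the very same score shifts with every new cohort, which is exactly why it is a poor fit for judging whether fixed, end-of-year criteria were met.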
3. The PSAT 8 provides data that is highly uninformative about classroom instruction.
One of the most detailed reports the PSAT 8 portal will provide is the Question Analysis Report. Educators love this report. They. Love. It. And for all the wrong reasons. I get it. The report gives you the performance of your students, the state’s students, and the nation’s students on a given assessment item. It shows you the percentages of students who chose each distractor. It even gives you the item to look at! The report feels like a gold mine. But ultimately, it’s fool’s gold. Here’s why.
As educators pore over this report, they feel as though they are getting very accurate data about student performance. But these item-based analyses are dangerous because they easily lead to solutions designed to fix problems that may not exist. Let me illustrate. I sat with two district administrators and looked over a set of PSAT 8 data. We spent almost two hours looking at students’ performance on questions limited to the Heart of Algebra strand on the PSAT. After all of this work, and despite my attempts to get them to see connections among the items and to come up with alternative explanations for students’ choices, their big takeaway was that they needed to work on systems of linear equations more.
There are two concerns that surface in the example above. First is the lack of information about students’ thinking. We certainly know which questions many students got wrong, and we even know which distractors students chose most often. But any attempt to figure out why students chose those distractors is stymied by our lack of information. While it is tempting to say that the distractors were designed to take advantage of common misconceptions, that explanation is ultimately self-defeating. There are a number of potential reasons for a student to choose a given distractor, only one of which is that their misconception matches the one intended by the item writers. Without looking at student work in detail or talking to the students themselves, educators cannot be certain they have even a vague idea of students’ issues. Second, when summarizing their efforts at the end of a meeting, educators are (understandably) drawn to particular examples of problems that were a struggle for students (like the systems problems in our example). But these particulars are, in all likelihood, a small percentage of the kinds of problems students will see on a given form of the PSAT. To be clear, in the example, there were at most four items related to students’ understandings of systems of linear equations, a small portion of the overall assessment. This kind of item-based decision-making is dangerous and will often lead to solutions that miss the mark.
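To see why aggregate distractor data underdetermines student thinking, consider a small hypothetical sketch (the choice labels, reasons, and counts are all invented for illustration):

```python
# Hypothetical sketch: three different reasons for choosing the same
# distractor collapse into a single percentage in the report.
# All choice labels, reasons, and counts are invented.

from collections import Counter

responses = (
    [("B", "sign error while isolating the variable")] * 5
    + [("B", "misread the question stem")] * 4
    + [("B", "ran out of time and guessed")] * 3
    + [("A", "correct reasoning")] * 18
)

# What a Question Analysis Report can show: choice percentages only.
choices = Counter(choice for choice, _ in responses)
total = sum(choices.values())
for choice, n in sorted(choices.items()):
    print(f"{choice}: {100 * n / total:.0f}% of students")

# What it cannot show: the reasons in the second column, which only
# student work or conversation can surface.
```

A team looking only at the 40% who chose “B” might plan a reteach aimed at the intended misconception and miss two of the three actual causes entirely.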
4. The PSAT 8 assesses content that grade 8 teachers and students are not responsible for.
There are two concerns associated with this issue: fairness and accountability. I would argue that it is inherently unfair to give students an assessment containing content they have not yet been taught, even if success on that content would only push their scores further above the proficient mark. The PSAT 8—indeed, any high-stakes assessment of this nature—contains items on which students are expected to be unable to perform. After all, the test has to separate students somehow, right? Here is the heart of the unfairness: the PSAT 8 contains content that is not present in the grade 8 standards because it is meant to predict performance on the SAT, not to assess 8th grade content.
As for accountability, if I were an 8th grade teacher, I would be absolutely irate that I was being measured for accountability and evaluation by a tool containing content for which I was not responsible. And because educators take their evaluations seriously, many will feel as though they have no choice but to teach the more advanced content to everyone in the hopes of improving their evaluation results. Systemically, this might lead to a practice of requiring all 8th grade students to take Algebra 1, a practice that we learned from California is detrimental to students. And maybe that is the most insidious problem of all: with the best of intentions, these decisions are made based on the needs of adults and not on the needs of students. Accountability. School data. Allow schools to improve. Very few of these sentiments mention students specifically. And that’s because the assessment is not designed to be friendly to students—it is designed to separate and label them.
To close out this discussion and make it, perhaps, more productive, I ask the following questions:
- What kinds of data do teachers actually need to improve their classroom practice on a daily basis?
- What kinds of data can schools collect easily that give insight into the effectiveness of their curriculum?
- What alternative types of assessments might allow us to get at what students know and can do?
- How can we have an honest conversation about how we might effectively use PSAT 8 data to help schools and teachers?
- What can the PSAT do for us, based on its intended purpose?
- What can’t it do for us, based on its intended purpose?