Tag: data manipulation

TEA Announces Higher Passing Standards; TPERN Smells a Rat

TEA Commissioner Michael Williams announced that this year the STAAR “cut scores” will increase as a way of demonstrating enhanced “rigor” in the demands placed on Texas students. Planned phased-in increases in STAAR scoring had been delayed since the system’s implementation because scores stayed flat. This past spring we got more of the same: flat scores. In other words, now five years into STAAR, students do no better than they did when the assessment was rolled out as a new and unknown entity. In five years, we have become no better at preparing students for this assessment. So the answer is obvious: raise the passing standards. (/end sarcasm).

Raising standards against a backdrop of flat results leads to an inevitable conclusion: passing rates will be the worst ever this year. We should see every single grade level and subject drop significantly in its passing rate, right? Well, here is my prediction: passing results will remain flat. Here is my second prediction: flat results against raised standards will be hailed as a success and as evidence that the assessment system is working.

Why do I see this happening? Simple. Because the TEA will manipulate data to achieve the outcome it seeks. I want to cite two specific examples.

December 2014 EOC retests and Spring 2015 Math STAAR.

In December 2014, students who had failed STAAR EOCs in high school retested in the final administration before the legislature met. Over 75,000 seniors were threatened with non-graduation at that time. In response, bills had already been filed at the legislature to permit those seniors to meet alternative graduation standards. STAAR supporters were irate that the Texas pass-or-go-home standard was up for re-evaluation. Suddenly, with no explanation, almost half of those seniors passed. Pass rates for re-testers on the EOCs reached all-time highs. The number of kids affected went way down. However, by the time spring EOCs rolled around, we returned right back to the traditional passage rates we had always seen. But during the legislative session, the TEA got to beat its chest about these great results the students had achieved.

In the 2014-2015 school year, Texas introduced new math TEKS. Students struggled, parents screamed, teachers sweated. Students were asked to do work that previously had been taught a year or two later in their academic careers. Fourth graders predictably struggled with sixth-grade math. The TEA took two steps. First, it pushed the assessment later into the spring to give students more time to learn. Second, it decided not to use the STAAR Math for retention purposes in 5th or 8th grade. When the assessment came around, many parents reported that their students said it was easy, even though they had struggled in their math class all year. When the results came out, amazingly, they mirrored the passage rates from previous years.

How can this be? In both instances where STAAR was under a huge spotlight, the results turned out far better than anyone predicted. Well, it isn’t rocket science. If you select easier questions for the assessment, then you can control the results. Having looked at EOC assessments from different years, I truly believe that is exactly what happened with the December 2014 English 2 EOC. Reports from parents would indicate a similar outcome with math. Based on this history, I have no doubt that the TEA will make sure that its contractors craft an assessment that will meet previous passing rates, even with higher cut scores. This will then be used to show that STAAR is working and the system is valid.

Let’s see what happens, but remember this prediction.

How the TEA Forces Schools to Manipulate STAAR Data

In the El Paso Times recently, the paper reported on a petition by the Texas Education Agency to revoke the certifications of 11 educators accused of being part of a cheating scandal.  According to the paper, “[t]he petition accuses most of the respondents of participating in a scheme to falsify federal accountability reports, or knowing of the scheme but doing nothing to stop it.” (Full report here).

Federal accountability reports can cover many things, but we know one thing it covers is the progress of schools, districts and the state in meeting the No Child Left Behind Act’s requirements that every school make Adequate Yearly Progress (AYP).  As we explained in our article on Data Manipulation, absences on test days hurt districts more than failed assessments because of the formula for calculating AYP.  The TEA is engaged in a scheme to distort the number of students actually assessed by the STAAR exam.

People may disagree about the STAAR testing system, but one thing we can all agree on is that a student who does not take the test has not been tested.  They have not been assessed.  No data has been captured with which any assessment of academic readiness could possibly be made.  This is true whether the student is absent sick or present but refusing to be tested.  Any action resulting in a report that claims a student who refuses the test has actually been tested is misleading, if not overtly false.  Yet that is precisely the system that the TEA not only tolerates, but insists that districts implement.  According to Canyon ISD, this directive comes directly from the TEA’s Director of Test Administration.  Yet nobody from the TEA is being investigated or threatened with having their education certificates revoked.

In Amarillo, a parent sent Canyon ISD a letter pointing out that there are two other codes available to accurately report that a student has not been tested, and asking that her daughter, who had refused the assessment, not be reported as having taken the assessment.  In response, the District sent this letter:

[Image: Canyon ISD letter — “we score refused tests”]

Now this may appear innocuous on its face.  Assigning a zero for not taking a test would be a common tactic in the classroom.  However, with an assessment designed to meet federal accountability standards, the effect goes beyond the score report placed in the student’s file.  It turns into a representation to the federal government and the taxpayers of Texas that the student has actually been assessed.  The student, by Canyon ISD’s own admission, refused to be assessed.  She was not tested in any way.  But consider this data box from the statewide summary report generated by the TEA.

[Image: TEA statewide summary report data box — how “S” becomes a lie]

As you can see, the report clearly contemplates that some students will not be tested for reasons other than absence.  Yet not a single student who refuses the test is accurately reported in that category.  Instead, their data is lumped into the number of students actually tested.  Their zero simply becomes another student who does not meet minimum standards.  Because there is no score averaging in accountability assessment, a zero means the same thing to a school as a student who fails by one question.  However, for the district and TEA, the zero becomes evidence that the 95% test participation requirement has been met.  These numbers then get placed on federal reports and are used to justify continued receipt of federal funds.  Perhaps it is time for someone to investigate whether the people who came up with this data manipulation tactic are “participating in a scheme to falsify federal accountability reports, or knowing of the scheme but doing nothing to stop it.”

The TEA, STAAR and Data Manipulation

Consistent reports from the April administration of the STAAR exam show a disturbing trend for parents who send their children to school and refuse the test.  Whether the child ever opens the test booklet or not, the TEA is instructing all districts to mark the exam S for “Score.”  This code ensures that the student is counted as participating in the STAAR examination and places a score of zero into the record of the child.  The TEA’s rationale for this is contradictory, particularly given the existence of other more appropriate codes for a refusal.  Both codes * and O more accurately represent the circumstances that exist when a child refuses to take the STAAR exam.  So why does the TEA mark the exam “Score” and record a zero for the child?

Lisa Cottle, with the TEA, states that TEA is required by statute to administer the exam to all students.  This is true, but that is a separate question from whether the exam is, in fact, taken by the student.  The TEA also contends that the education code requires a demonstration of proficiency for grade promotion, and the STAAR test is one measure of proficiency.  However, this rationale completely lays bare the lunacy of scoring a refused exam.  What could a zero on an exam that was not taken possibly tell a grade placement committee about the student’s academic readiness?  If the TEA cared about accurately evaluating academic readiness, it would ensure that no misleading scores were contained in a student’s records.  Yet coding an untaken exam as S for score has precisely the opposite effect.

So why, then, are school districts adamant about scoring refused STAAR exams and recording results?  The answer is two simple words: data manipulation.  Under the federal No Child Left Behind Act, schools are required to make Adequate Yearly Progress (AYP) toward total proficiency.  The STAAR test is Texas’s measuring stick.  However, NCLB makes sure that schools can’t cherry-pick the test takers by requiring that 95% of all students in all subgroups take the annual tests in order to meet AYP.  In a small school or demographically small subgroup, even one child missing the test can significantly impact that participation total.  In fact, a missed test may damage AYP attainment more drastically than failing the assessment with a zero.

To understand why, you must understand how AYP works.  AYP is an improvement-based index.  Thus, if a school has 30% proficiency in a subgroup one year, but 37% the next, it could meet AYP even with 63% of students in that subgroup failing to demonstrate proficiency.  However, if the participation rate drops below 95%, the rest of the results don’t matter — AYP cannot be met.  Thus, for a school, it is better to fail a child but report that he participated than to tell the truth that he was not tested.  A failing score hurts the school less than non-participation.  The impact on the child is unimportant to the data gatherers.  It’s all about making the numbers.
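The incentive described above can be sketched in a few lines of arithmetic. This is a deliberately simplified model — the subgroup size of 19 students, the 37% proficiency target, and the function itself are illustrative assumptions, not the actual federal formula — but it shows why one refusal honestly coded “not tested” can sink AYP, while the same refusal scored as a zero does not:

```python
# Simplified sketch of the AYP incentive. The numbers and the
# two-condition check are illustrative assumptions, not the real
# federal calculation, which involves many subgroups and targets.

def meets_ayp(enrolled, tested, proficient, target=0.37):
    """AYP here requires >= 95% participation AND hitting the proficiency target."""
    if tested / enrolled < 0.95:
        return False  # a participation shortfall alone sinks AYP
    return proficient / tested >= target

# Hypothetical subgroup: 19 students enrolled, 8 score proficient.

# Scenario A: one refusal honestly coded "not tested" -> 18/19 is
# about 94.7% participation, below 95%, so AYP fails outright.
print(meets_ayp(enrolled=19, tested=18, proficient=8))

# Scenario B: the same refusal coded "S" and scored zero -> 100%
# participation, and 8/19 (about 42%) clears the 37% target.
print(meets_ayp(enrolled=19, tested=19, proficient=8))
```

Note that in Scenario B the refusing child’s zero barely dents the proficiency rate, while in Scenario A the honest code is fatal — which is exactly the pressure on districts the paragraph above describes.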

This is data manipulation at its basest level.  The school is lying to the state and federal government, and to all parents on its annual report card, when it represents that a child was tested when he was not.  But that is what the system has come to.  It is more important to claim people were tested when they weren’t than to accurately report that a child was not assessed.  The TEA supports this subterfuge, and districts happily participate — all in the name of AYP attainment.

– Scott Placek