WPCNR GUEST EDITORIAL. By Jonathan S. Rodney. April 5, 2004: On October 15, 2003, Jonathan Rodney was one of the few persons to testify to the Assembly Committee on Education on high stakes testing in New York City. Mr. Rodney is a 1992 Graduate of White Plains High School, possesses a Bachelor of Science in Physics from SUNY, Binghamton, and a Master of Science in Electrical Engineering from Boston University. He is employed as an Optical Engineer in the Semiconductor Industry.
Here he makes his passionate case for the continuation and improvement of Regents tests. His remarks center on a Regents panel report on last June's Math A test, which was criticized for being too difficult, off the curriculum, and confusing. He exposes those fallacies in his written testimony filed with the committee. With the Math A Regents coming up again in June, the question now is whether Math A will be too simple a test:
My comments will focus on the June Math A Regents, with general implications.
Mr. Chairman, prior to enrolling at SUNY, I attended another college where I once took a math course that was incompetently taught. As it turned out, the professor was as bad at testing as he was at teaching, and I received a grade of A-, even though I had no idea what was going on. The passing grades allowed me to deceive myself for a time, and not face the fact that I was lost. Eventually I realized that, my grade notwithstanding, I did not know the material, and would have to retake the course with another professor if I were to have any chance of continuing in math or science. Many students, of course, did not want to retake the class. They were simply glad to survive a nightmare, and since they had good grades to show for it, why rock the boat? Of course, they had to find other majors.
Mr. Chairman, I am absolutely convinced that the greatest danger to kids’ progress is not unfairly difficult tests, but absurdly easy ones that lull us into a false sense of security. I am here to defend the June (2003) exam. It is a beautiful and valuable measure of our kids’ ability, and by all appearances I may be the only person in the state willing to say so.
When the Math A panel issued its August (2003) interim report, it claimed to identify several defective items on the test, as well as test construction defects. These defects were offered as proof that the tests were flawed. They were certainly taken as proof by the public, by teachers, by this committee. We should look carefully at those claims.
Too Wordy? Inability to Read at H.S. Level Is the Problem.
Allegedly, when students reached the midpoint of the exam, they encountered several unusually difficult problems in a row. These problems were too wordy, and induced excessive frustration, causing kids to “give up.”
Consider, for instance, question 27:
Tina’s preschool has a set of cardboard building blocks, each of which measures 9 inches by 9 inches by 4 inches.
Question 26: Seth has one less than twice the number of compact discs Jason has.
Mr. Chairman, whatever hand-waving arguments the panel may have used about wordiness, the sad truth is that these problems are at a fifth-grade reading level.
Tina’s preschool has blocks. Our kids can’t handle that.
Regents Makes Poor Excuses
Some of the criticisms are comical. Dr. Brosnan claims it is unfair to ask students how many tickets to a school dance must be sold before it breaks even, because “break even” is an “economics term.” The drawing of a straw in a box is deemed defective because it fails to say, “not drawn to scale,” even though the labeling is crystal-clear, the box is three-dimensional (so students could not have used a ruler anyway), and the front face of the box is drawn to scale.
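The “break even” objection is especially weak, because the question asks for nothing more than when ticket revenue covers cost. With hypothetical figures (these numbers are illustrative, not from the actual exam), the whole problem reduces to one line of algebra:

```latex
% Illustrative figures only: suppose the dance costs \$120 to put on
% and tickets sell for \$4 each. The dance breaks even when revenue
% equals cost, i.e. when the number of tickets n satisfies
4n = 120 \quad\Longrightarrow\quad n = 30 \text{ tickets.}
```

No economics is required; a student only needs to translate the phrase into an equation and solve it.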
To give some idea of how ridiculously easy this (Math A) exam was, question 21 involved a stem and leaf plot. Now, I had never seen a stem and leaf plot. They were not in the curriculum when I was in school. But I could still solve the problem easily. Why? Because the test provided the answer key in the question.
It is possible for someone who had never seen a stem and leaf plot to get that question right. That is the kind of low level of difficulty we are talking about: someone who had never taken the course could get that question right. This is not a hard exam.
Array of Questions Criticism Dismissed.
In finding number 1 of the panel’s final report, the panel expresses bewilderment at the different array of questions used over the years to test Key Idea 5, which calls on students to use measurement to “provide a major link between the abstractions of mathematics and the real world.” What the panel does not seem to grasp is that the cases cited in the table they provide are all the same problem.
The goal is not to test a specific computational skill. It’s to test whether, given a real-world problem, students can figure out what the appropriate computational procedure is. Key Idea 5, and Performance Standard 5A, say so.
There is a great and bitter irony here. For years opponents of testing have claimed that state tests emphasize rote problem solving. Too much drill; too little real life application. But by insisting that all of the test questions emulate performance assessment examples in teacher guides, it is the critics of the Math A exam who will ensure that the test becomes a test of rote computation. Of course the questions differ wildly. That is the point: to find out whether students can take sundry life experiences and recognize them as manifestations of formulas they learned in the classroom.
Brosnan Backs off Claims of Defects in Final Report. (Unreported)
But there is something else very peculiar about the final report, something I haven’t heard anyone in the media point out. The final report backs away from the claims in the interim report. This is remarkable. Except for the content standard imbalance (in the Math A test), the final report does not make any specific allegations of defects in specific problems on the test. None. The interim report contains a laundry list of bugs. Except for the content standard imbalance, they have vanished in the final report. The final report is concerned about long-term anchoring problems, vagueness in the curriculum, staffing issues at the State Education Department, etc. But even the findings of problems in test development – and there are some good ones – have nothing to do with the June 2003 test being defective, compared to, say, the January test. Nothing. Dr. Brosnan backed off his claims of defects.
Now, normally one should consider only the final version of the report. People put all sorts of things into drafts; they should be held accountable only for the final conclusions. The problem here is that it was the claims of defective questions in the interim report that provided the justification for raising test scores in the first place.
Bait-and-Switch
The Brosnan panel pulled an elaborate bait-and-switch. They made a provisional claim that they found defects in the June 2003 Math A test questions. Therefore, the scores had to be raised. The school year having begun, they came out with another report that identifies no defects compared to previous exams. Yet no one seems to be suggesting restoring the original scores.
Pythagorean Theorem Basic; Should Not Be Reason for a “Throw-Out.”
Now, it’s true that this test (June 2003 Math A) had three questions on the Pythagorean Theorem, and none on trigonometry. The problem is that the Pythagorean Theorem is much more basic than trigonometry. It’s sixth-grade math. People who claim that the test was unfair because it focused on the Pythagorean Theorem rather than trigonometry make complete fools of themselves. It is not possible for a student to understand trigonometry better than the Pythagorean Theorem, because the Pythagorean Theorem is the foundation of trigonometry.
Indeed, if a student performs better at trigonometry problems than problems involving just the Pythagorean Theorem, that is evidence that the student has mastered nothing more than rote test-taking techniques. In other words, the Regents Exam has done exactly what it was supposed to do: Expose kids who don’t know what they’re doing.
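The foundational relationship is not a rhetorical flourish; it can be shown in three lines. The most basic trigonometric identity is just the Pythagorean Theorem divided through by the square of the hypotenuse:

```latex
% For a right triangle with legs a, b and hypotenuse c:
a^2 + b^2 = c^2
% Divide both sides by c^2, and write \sin\theta = a/c, \cos\theta = b/c:
\left(\frac{a}{c}\right)^2 + \left(\frac{b}{c}\right)^2 = 1
\quad\Longrightarrow\quad \sin^2\theta + \cos^2\theta = 1
```

A student who cannot apply the first equation has no basis for claiming mastery of the identities built on it.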
Outrageous Quackery
The panel’s impressive-sounding credentials notwithstanding, their complaint smacks of quackery. Make no mistake: the omission of trigonometry questions was a defect in the test. But it does not change the fact that students who fail this test as written have not learned enough math to merit high school diplomas.
The fact that our students don’t know the Pythagorean Theorem should terrify everyone in this room. But instead of investigating this lack of knowledge, you’re investigating why the kids were asked about it too many times. It is outrageous.
Tidying Up
The final report did note that the June exam was more linguistically complex than previous tests. But unlike the interim report, the final report did not show that this was due to anything wrong in the June test, as opposed to something being wrong in the previous tests. Tina’s preschool has a set of cardboard building blocks. If this is really harder than previous exams, then it tells us that the previous exams were too easy, and too many kids were getting high school diplomas who did not earn them.
There were ambiguous problems on the test. Problem 14, for example, is inexcusable. But the passing threshold was so lenient that it was not possible for students with a grasp of the key ideas, armed with calculators, to fail this test. It’s not possible.
Don’t Be Cowed by High Stakes Opponents.
I urge the committee not to be cowed by the perverse and corrupt phrase, “high stakes test.” In any system, even one with portfolios, the smallest thing can make the difference between passing and not passing. Students who fail can retake the test. Driver’s license tests are high stakes, too. Does anyone propose abolishing them?
Achievement Tests Tell Us Important Things.
This test is telling us something important. It is telling us that we are producing students who cannot count, who do not understand probability, who cannot interpret or construct graphs, who cannot find the volume of a box, or the length of a straw, who cannot, when faced with a practical problem, figure out which algebraic approach to use.
During those few days in June when people were feeling really scared, the test was doing exactly what it was supposed to do. The test was a sign that we are graduating uneducated kids. We should feel scared.
The Test Told the Truth
And, how is it that, prior to the test, teachers were giving passing grades to students who couldn’t find the volume of a box? For many kids, the June (2003) Regents was the first time anyone had told them they didn’t know what they were doing. Their teachers had deceived them as to their ability. The test told them the truth.
That makes the test the best friend those kids have ever had in school. The raising of test scores can’t be undone. But other challenges will arise. I urge the committee to stand firm on the Regents exams. Protect them from their critics. These exams are the best friends some kids have ever had.
Don’t take them away from them.
Testimony of Jonathan Rodney, October 2003.