When I first heard they were shortening the exam by 30%, I thought that was unfortunate, since it would either make the test less reliable or impose a lower ceiling on the range it can assess.
However, I later learned that it will be dynamic — depending on how a student does in the first section on a given topic, he will be given either easier or harder versions of a subsequent section. That means that they'll be able to better assess students of all abilities with fewer total questions. Sounds like a win to me.
Of course, dynamic testing like this is not possible with hard copies. Seems like a decent rationale for moving to computerized testing, though I would still be fine if they kept it hard copy at 3 hours.
Shorter is good, but dynamic tests (called "computer adaptive tests" or CAT in the industry) make it difficult to follow longstanding best practices for test taking. When I took the SAT, I could quickly scan through the questions in the current section, mark those that looked difficult, and save them for the end. With a CAT, it's typically one question at a time, and if you're a strong test taker, the questions get successively more difficult. They are more efficient tests, but (IMHO) less pleasant to take.
That is true for adaptive tests that operate on a per-question basis. What I have read of the new SAT is that it operates at a session level. So if you do well on the entire first session, you get a harder version of the next session on that topic (reading or math).
Students should still be able to use the technique you mention to identify hard questions and save them for last.
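To make the distinction concrete, here's a rough sketch of session-level routing as I understand it (the module size, cutoff, and simulated student below are made-up numbers, not the College Board's actual algorithm):

    import random

    def run_module(num_questions, p_correct):
        # Simulate one fixed block of questions; return the fraction answered correctly.
        return sum(random.random() < p_correct for _ in range(num_questions)) / num_questions

    def adaptive_subject(p_correct, module_size=25, cutoff=0.6):
        first = run_module(module_size, p_correct)  # everyone gets the same first module
        # Routing happens once, after the whole module, so skipping around
        # *within* a module still works as a test-taking strategy.
        if first >= cutoff:
            second = run_module(module_size, p_correct - 0.15)  # harder second module
        else:
            second = run_module(module_size, p_correct + 0.15)  # easier second module
        return first, second

    print(adaptive_subject(0.8))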
> What I have read of the new SAT is that it operates at a session level. So if you do well on the entire first session, you get a harder version of the next session on that topic (reading or math).
That is positive. I've seen issues with adaptive tests not being resilient to accidents and brain farts: if a kid fat-fingers the response to a question early in the test, they may never get the computer to give them questions of a suitable difficulty later in the test. Thus they never even have a chance to get a score that accurately reflects their ability.
If the SAT is session-based, each session needs to be large enough that a single question can't tank the whole thing, but at the same time there also need to be enough sessions to allow for properly dialing in the knowledge level. Ideally, the test would allow an unlimited number of sessions, with each session time-limited: something along the lines of stopping once the test-taker scores below a certain threshold on 2 consecutive sessions.
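Roughly what I have in mind, as a sketch (the threshold, the difficulty model, and the demo numbers are all arbitrary; only the stopping rule matters):

    import random

    def run_test(next_session, threshold=0.5, consecutive_low=2):
        # Keep serving sessions (each would be time-limited in practice) until
        # the score dips below the threshold on two sessions in a row.
        scores, low_streak, level = [], 0, 0
        while low_streak < consecutive_low:
            score = next_session(level)   # fraction correct for one session
            scores.append(score)
            low_streak = low_streak + 1 if score < threshold else 0
            level += 1                    # each session probes a harder level
        return scores

    # Demo "student": performance drops off as the sessions get harder, plus some noise.
    random.seed(0)
    print(run_test(lambda level: 0.95 - 0.1 * level + random.uniform(-0.05, 0.05)))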
> If the SAT is session-based, each session needs to be large enough that a single question can't tank the whole thing
I think each session is either half or a third of the test, for that subject. There could actually be more than two versions of the later session, to ensure that a student who makes a mistake or two isn't prevented from getting a relatively high score.
> make it difficult to follow longstanding best practices for test taking.
Isn't that good? The effectiveness of test-taking strategies is a (minor) flaw in the SAT. People taking an ideal test would not benefit at all from learning strategies.
(The flaw is minor because all our methods for assessing people are gameable to some extent, and standardized general tests like the SAT are among the least gameable.)
Some of the math competitions I participated in would penalize for skipping around. You were awarded +5 for correct answers and -4 for incorrect or skipped questions. Negative scores were not uncommon. In one test, any stray marks were also marked as incorrect. No erasing, no changing a 7 into a 9, nothing was allowed. The questions leaned toward increasing difficulty, but there could be something very difficult followed by a string of much easier questions. So additional math had to be done to see whether it would be better to stop or skip.
It's still equivalent to "normal" tests, because the cost of skipping and the cost of answering incorrectly are the same. A test in that format with N questions and a test in the normal "1 point for a correct answer, 0 for anything else" with N questions are related as follows, assuming C correct answers:
Score in the +5/-4 system: 5 C - 4 (N-C) = 9 C - 4 N
Score in the +1/0 system: 1 C + 0 (N-C) = C
It's basically a "9 points for a correct answer, 0 for anything else" test that simply starts with a score of -4 N.
It's very similar to the difference between Celsius and Fahrenheit.
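To make it concrete (using, say, an 80-question test and made-up counts of correct answers), the two schemes differ only by a scale factor and an offset, so they always produce the same ranking:

    N = 80                                         # number of questions

    def score_plus5_minus4(correct):
        return 5 * correct - 4 * (N - correct)     # = 9*correct - 4*N

    def score_plus1_zero(correct):
        return correct

    students = {"A": 70, "B": 33, "C": 12}         # made-up counts of correct answers
    for name, c in students.items():
        print(name, score_plus5_minus4(c), score_plus1_zero(c))
    # A 310 70 / B -23 33 / C -212 12: same ordering either way, and the
    # first score is always 9 * (the second score) - 4 * N.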
How many normal tests would ever result in a negative score? It's quite devastating to any shred of morale one might have. The test had 80 questions, so a max score of 400. I personally never witnessed anyone receiving over 300.
You're really trying to make something into something it is not.
Don't know how widespread they are in other states, but it was part of UIL Academics, a statewide competition between school districts in Texas.
It's cute but doesn't make any difference. Add 4 to the score for every question and you get 9 or 0. The factor of 9 makes no difference so call it 1 or 0. Now your score is just the number of questions you got right, but the ranking is exactly the same as the original scoring system.
Again, another person responding with the same trope and totally missing the fact that assigning 1 or 0 means the score can never be negative. You will never know the shame of receiving a negative score. It's part of the competition; whether it was designed that way or not, that is exactly what teenage boys have turned it into. The concept of a negative score is a pretty good motivator.
It depends on whether skipping means questions left unanswered or questions passed over to answer later ones. But either way it's more like playing gameshow host than making a meaningful exam.
Howdy, fellow Texan! I still use some of the "tricks" I learned for that test. However, I struggled for a long time in my higher math classes because I had no work to show, since I was doing it all in my head. That was finally solved when I was told that the AP exams gave partial credit based on the work shown.
I also did the Calculator tests. It's why I learned to 10-key.
You're right - I still use number-sense tricks for fast multiplication :)
For Calculator, I wrote https://git.io/ti84rpn so that I could use the fast parenthesis-less Reverse Polish Notation without having to adapt to a whole new calculator keyboard.
They get an extra time allocation to take tests - 50% more in most cases. The numbers I hear tossed around suggest that a substantial fraction (20% to 40%) of school kids in some of the elite public and private schools have IEPs. Clearly there are kids who need the extra time. But that many?
ADHD is the reason that I hear the most. The extra test taking time is not reported to colleges.
Combine 50% extra time with Adderall. 1500+ SAT scores are not uncommon.
> Combine 50% extra time with Adderall. 1500+ SAT scores are not uncommon.
That's not a great take. If you want to complain about overprescribing, go ahead. But Adderall is meant for people who need it to get closer to baseline. People abusing the prescription is not a good reason to criticise IEP adjustments.
Are stimulant prescriptions for ADHD carefully calibrated in dosage and time to get patients to a baseline? In my experience, doctors ask you questions to determine if you have attention deficits, then give you a prescription, which will be changed in various directions (different medication, increased dosage, etc) if you report problems; they very rarely in practice have you take an in-depth "attention measuring" test and then prescribe on that basis.
What is the baseline? Would anyone consider it cheating if they took extra or changed their dosing schedule to maximize their test score? I think most people would consider that rational behavior, not abuse. But it does call into question whether that's "fair" to people without the drug.
Fair comment, and a fair distinction between IEPs, medical support, and overprescribing. I'm all for helping people get to a healthy baseline in their lives with medical support and medication as necessary. IEPs are necessary and beneficial to many, many kids.
> They get an extra time allocation to take tests - 50% more in most cases. The numbers I hear tossed around suggest that a substantial fraction (20% to 40%) of school kids in some of the elite public and private schools have IEPs. Clearly there are kids who need the extra time. But that many?
An IEP is a catch-all term for anything that requires special adjustment for a student. What the adjustment is varies depending on the situation--there's a reason it's called individualized.
Having gone to one of those "elite public schools" myself, I wouldn't be surprised if that large a fraction of kids had an IEP (hell, I did). However, in my experience, not one kid had an IEP that allotted them extra time to take tests. The only specific test adjustment I'm aware of was one kid who was so blind he needed the exams in large print to have a hope of reading them.
My daughter has an IEP due to her dyslexia. She gets extra time on in-class writing assignments and on tests that have a writing component, so no extra time in general for her math or science tests (although science sometimes has writing components for which she gets additional time). For English, history, government, etc., which are writing-heavy, she gets extra time. For several of her classes she is dual enrolled at a local college, and we had to apply to the college to get accommodations for those classes. They were actually much more generous with the accommodations granted than her high school was. The only one she takes advantage of is the increased time.
The College Board also granted her accommodations so she gets 50% longer on the SAT and AP tests. April will be her 1st attempt at the SAT. She has taken a few AP tests.
ADHD is not considered a specific learning disability, so it should not qualify someone for an IEP or for accommodations from the College Board.
> However, in my experience, not one kid had an IEP that allotted them extra time to take tests.
There's a difference between the IEP that the school uses and the accommodation request for standardized testing. It would be pretty obvious to you if a student in your class were given extra time for in-class tests, but much harder for you to know if that student qualified for extra time on the SAT. If you knew when they were taking it then perhaps you could figure it out, but people who bend the rules to get extra time tend to not go around advertising when they're going to take the SAT, or at which test center, since that might out them as receiving extra time.
All this to say: I'm not so sure a classmate would know if a fellow student received extra time on the SAT.
You're aware that we generally took the SAT in the same kind of classrooms that we took our class tests, or the state standardized tests, or the APs? (Okay, the APs were largely in the gym instead of classrooms because there are several hundred people taking the AP exam at the same time, but same thing really.)
If kids were getting extra time on standardized tests, we'd know.
> You're aware that we generally took the SAT in the same kind of classrooms that we took our class tests, or the state standardized tests, or the APs?
Sounds like a very different experience than what I had. SATs were on Saturday, at a different HS. There were various HS's to choose from, depending on where you lived. I went to a magnet HS (sounds like you did too), so kids were from all over, and they took tests at whatever school was near their house. Within each test center, we were then broken into rooms based on last name, IIRC. I typically knew only 1 or 2 kids in the room I was in (the rest were from other HS's), and would have had no idea whether any of my classmates had extra time.
Even in your situation, you wouldn't have known that a kid who took the SAT with you didn't also take it again, at a different test center, with extra time. Lots of kids take it more than once, and you can choose where to take it. Kids tried to keep this stuff on the down-low, either due to shame (because they actually had a disability) or to hide their undeserved accommodation (because they didn't have a disability).
If you can't opt out of the dumb-dumb version regardless of your answers to previous questions, then the test is rigged. Not only would it not be directly comparable to previous test years, it would not be comparable to other versions of the test, defeating the entire point (ranking students).
Comparability across test years is taken seriously by the folks that make these tests (see e.g. [1,2]). There are valid criticisms to be made, but flippantly calling the test 'rigged' is not constructive.