I think this is profoundly wrong for one simple reason: assessment is a modeling problem.
Think of your knowledge of some topic -- evolution or cell metabolism or ordinary differential equations -- as a network of related concepts, facts, and techniques in your mind. The beginner's network might be missing some important connections and contain extraneous, misleading ones. The expert's network is rich but well-organized.
In assessment that goes beyond simple factual questions like "what are mitochondria?", we are implicitly trying to find out whether a student's network is more like the beginner's or the expert's. The more expert-like the network, the better the student understands our subject.
Of course, we can't observe this network directly. To a teacher, a student's mind is a black box. Therefore, we poke and prod the black box by asking the student questions and use the answers to build our own models of the student's knowledge network. Particularly valuable are those questions whose answers are easy to figure out if subject matter knowledge is complete and well-organized but difficult or impossible otherwise. If a student answers these types of questions correctly, they probably understand the subject well.
Unless, of course, the student has explicitly learned how to answer the question we asked without figuring it out. Then the process is short-circuited and we are left without a way of assessing what a student actually understands. Ben Orlin writes about this in Math with Bad Drawings:
Need to prove these triangles are congruent? Do this. Need to prove that they’re similar? Do that. Need to prove X? Do Y and Z. I laid it all out for them, as clean and foolproof as a recipe book. With practice, they slowly learned to answer every sort of standard question that the textbook had to offer.
Months passed this way. But something wasn’t clicking. I kept seeing flashes and glimpses of severe misunderstandings—in their nonsensical phrasings, in their explanations (or lack thereof), in their bizarre one-time mistakes. Despite my best intentions, something was definitely wrong. But I didn’t know what.
And, more worryingly, I didn’t know how to find out.
I’d already coached them how to answer every question in the book. How, then, could I diagnose what was missing? How could I check for understanding? For every challenge I might give them, every task that might demand actual thinking, I’d equipped them with a shortcut, a mnemonic, a workaround. The questions were like bombs defused: instead of blasting my students’ thoughts open, they now fizzled harmlessly.
Orlin is describing his mistakes as a novice teacher. But this is the inevitable consequence of the "alignment" being pushed by proponents of scientific teaching. They would probably say that the student should figure out the procedure for themselves instead of being taught it, and this might indeed be better (or not). But it remains true that when the exam rolls around, all we will see is how well the students remember what they were taught. We will have lost our tools for modeling their minds and assessing their understanding.