Today we continue our review of how to meet the requirements of Appendix 14F of the ARC-PA 5th Edition Standards: presenting your PA program's PANCE outcomes, examining how admissions data, course grades, test grades, and other data points correlate with those outcomes, and determining which of these are predictors of success or failure on the exam. In this edition of our blog, we will discuss the purpose of correlating PANCE scores with 1) the number of C grades; 2) the number of student remediations; and 3) preceptor ratings.
When a student receives a near-failing or failing grade in a class, remediation comes swiftly; something is clearly wrong. But C grades, which technically imply "average" classroom performance, tell their own story. Correlating C grades with PANCE scores should reveal whether the number of C grades a student earns is significantly associated with their eventual PANCE score. We have already seen how outstanding performance in certain classes can be a strong indicator of PANCE success. If the correlation is statistically significant, a student who receives a C grade might warrant entry into an academic improvement plan.
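The correlation described above is straightforward to run in practice. The sketch below uses hypothetical, illustrative numbers (not real program data) and SciPy's Pearson correlation to test whether C-grade counts are significantly associated with PANCE scores:

```python
# Sketch: is the number of C grades correlated with PANCE scores?
# The data below are hypothetical, for illustration only.
from scipy.stats import pearsonr

c_grade_counts = [0, 0, 1, 1, 2, 2, 3, 3, 4, 5]
pance_scores = [520, 498, 475, 460, 440, 455, 420, 410, 390, 365]

r, p = pearsonr(c_grade_counts, pance_scores)
print(f"r = {r:.2f}, p = {p:.4f}")

# A significant negative correlation would support using C grades
# as a trigger for an academic improvement plan.
if p < 0.05 and r < 0:
    print("Significant negative correlation: consider a C-grade trigger.")
```

With real cohorts you would pull these two columns from your student records; the same two-line test applies regardless of cohort size, though small programs should interpret p-values cautiously.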
You can also examine the relationship between the number of remediations and student performance, to determine whether the remediation count is statistically significantly correlated with PANCE scores.
Remediation is meant to improve a student's performance and return them to a more level playing field. A struggling student may require more than one remediation, and you might find yourself with students who need intervention repeatedly. When the data show that a higher number of remediations is associated with lower PANCE scores, it should give you pause and prompt the question, "Is the remediation process working?" If these often-remediated students earn lower PANCE scores and their performance lags behind that of their peers, there may be a need to examine the process itself.
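Because remediation counts are small discrete numbers rather than a continuous measure, a rank-based correlation such as Spearman's is a reasonable choice here. The example below is a sketch with hypothetical data:

```python
# Sketch: do repeated remediations track with lower PANCE scores?
# Hypothetical data; Spearman's rank correlation suits count data like this.
from scipy.stats import spearmanr

remediation_counts = [0, 0, 0, 1, 1, 2, 2, 3, 4, 4]
pance_scores = [510, 495, 480, 470, 450, 445, 430, 415, 400, 380]

rho, p = spearmanr(remediation_counts, pance_scores)
print(f"rho = {rho:.2f}, p = {p:.4f}")

# A significant negative rho suggests the remediation process itself
# deserves review, per the discussion above.
if rho < 0 and p < 0.05:
    print("More remediations are associated with lower PANCE scores.")
```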
Finally, you can look at preceptor evaluations. A regression analysis may show that preceptor evaluations, which are often inflated, are not predictors of success. I have seen many cases where a student's overall preceptor ratings in the clinical year were impressively high, yet their cognitive performance was not strong enough for them to pass.
In the case below, I aggregated all preceptor evaluation scores for students who failed versus students who passed, and found only a very small difference between the two groups. Breaking the scores down by rotation may surface student or preceptor comments that shed some light, but in general this measure is not particularly helpful.
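A quick way to check this in your own data is a two-group comparison of mean preceptor ratings for passers versus failers. The sketch below uses hypothetical, deliberately inflated ratings on a 1-to-5 scale to mirror the pattern described above: both groups cluster near the top, and the difference is not statistically significant.

```python
# Sketch: comparing mean preceptor ratings for PANCE passers vs. failers.
# Hypothetical ratings (1-5 scale); note how compressed and inflated they are.
from scipy.stats import ttest_ind

passed = [4.6, 4.7, 4.8, 4.5, 4.9, 4.7, 4.6, 4.8]
failed = [4.5, 4.6, 4.7, 4.6]

# Welch's t-test (equal_var=False) avoids assuming equal group variances.
t, p = ttest_ind(passed, failed, equal_var=False)
diff = sum(passed) / len(passed) - sum(failed) / len(failed)
print(f"mean difference = {diff:.2f}, p = {p:.3f}")

if p >= 0.05:
    print("No significant difference: preceptor ratings do not discriminate here.")
```

In this illustrative case the difference between group means is only about a tenth of a point, which is consistent with the observation that inflated preceptor evaluations carry little predictive signal.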
We collected the following data from one program's students who failed the PANCE, to shed light on their own perceptions of why they failed and how they might have prepared better. Conducting such a survey or interview can show where your program might provide better preparation and coaching.
Here are some actionable items we discovered, based on the qualitative feedback from each of the students:
The program made the following modifications, related to student feedback:
Appendix 14's Sections E and F relate mostly to PANCE scores. Enhancing your curriculum in response to PANCE-related data involves multifaceted elements, as displayed in the following spider diagram. Each data-driven modification can be tied to a specific data point, which in turn leads to an action plan for an area needing improvement. You can then follow up through your committee structure.
In our next blog we will begin examining Appendix 14G, in which the commission requires us to examine the sufficiency and effectiveness of principal and instructional faculty and staff.