The Script Concordance test (SCT) assesses clinical judgment. The purpose of this study was to determine whether a specialty-specific scoring key improves the validity of the SCT.
Thirty experts from 6 general surgery disciplines answered questions pertaining to their areas of expertise. We created a scoring key from the amalgamated answers of 5 expert panel members. The answers of 227 general surgery residents were analyzed.
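As background, SCT keys built from an expert panel are conventionally applied with aggregate scoring: a response earns credit proportional to the number of panelists who chose it, normalized to the modal panel response. The sketch below illustrates this standard scoring rule on a hypothetical 5-member panel; the panel answers and function names are illustrative, not taken from this study.

```python
from collections import Counter

def sct_item_key(panel_answers):
    """Build an aggregate-scoring key for one SCT item.

    Each possible response is mapped to a credit equal to the number
    of panelists who chose it, divided by the count of the modal
    (most frequent) panel response.
    """
    counts = Counter(panel_answers)
    modal = max(counts.values())
    return {resp: n / modal for resp, n in counts.items()}

def score_response(key, response):
    """Credit for a resident's response; unselected answers earn 0."""
    return key.get(response, 0.0)

# Hypothetical panel of 5 experts answering one Likert item (-2..+2)
panel = [1, 1, 1, 0, 2]
key = sct_item_key(panel)
# "1" (modal answer) earns full credit; "0" and "2" earn partial credit.
```

A resident's test score is then the sum of these per-item credits, typically rescaled to a percentage.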
The optimized test had a reliability level (Cronbach α) of .81. Scores increased progressively throughout all levels of training, with R5s scoring higher than R4s (R1, 42.7 ± 7.1; R2, 47.6 ± 7.5; R3, 48.7 ± 6.7; R4, 49.8 ± 7.7; R5, 52.9 ± 9.3). The average score of juniors (R1s + R2s, 45.1 ± 7.6) was significantly lower (P