15 records – page 1 of 2.

Association between a medical school admission process using the multiple mini-interview and national licensing examination scores.

https://arctichealth.org/en/permalink/ahliterature118406
Author
Kevin W Eva
Harold I Reiter
Jack Rosenfeld
Kien Trinh
Timothy J Wood
Geoffrey R Norman
Author Affiliation
Department of Medicine, University of British Columbia, Vancouver, Canada. kevin.eva@ubc.ca
Source
JAMA. 2012 Dec 5;308(21):2233-40
Date
Dec-5-2012
Language
English
Publication Type
Article
Keywords
Cohort Studies
Education, Medical, Undergraduate - standards
Educational Measurement
Humans
Interviews as Topic
Licensure
Ontario
School Admission Criteria
Schools, Medical
Abstract
There has been difficulty designing medical school admissions processes that provide valid measurement of candidates' nonacademic qualities.
To determine whether students deemed acceptable through a revised admissions protocol using a 12-station multiple mini-interview (MMI) outperform others on the 2 parts of the Canadian national licensing examinations (Medical Council of Canada Qualifying Examination [MCCQE]). The MMI process requires candidates to rotate through brief sequential interviews with structured tasks and independent assessment within each interview.
Cohort study comparing potential medical students who were interviewed at McMaster University using an MMI in 2004 or 2005 and accepted (whether or not they matriculated at McMaster) with those who were interviewed and rejected but gained entry elsewhere. The computer-based MCCQE part I (aimed at assessing medical knowledge and clinical decision making) can be taken on graduation from medical school; MCCQE part II (involving simulated patient interactions testing various aspects of practice) is based on the objective structured clinical examination and typically completed 16 months into postgraduate training. Interviews were granted to 1071 candidates, and those who gained entry could feasibly complete both parts of their licensure examination between May 2007 and March 2011. Scores could be matched on the examinations for 751 (part I) and 623 (part II) interviewees.
Admissions decisions were made by combining z score transformations of scores assigned to autobiographical essays, grade point average, and MMI performance. Academic and nonacademic measures contributed equally to the final ranking.
The main outcome measures were scores on MCCQE part I (standardized cut score, 390 [SD, 100]) and part II (standardized mean, 500 [SD, 100]).
Candidates accepted by the admissions process had higher scores than those who were rejected for part I (mean total score, 531 [95% CI, 524-537] vs 515 [95% CI, 507-522]; P = .003) and for part II (mean total score, 563 [95% CI, 556-570] vs 544 [95% CI, 534-554]; P = .007). Among the accepted group, those who matriculated at McMaster did not outperform those who matriculated elsewhere on part I (mean total score, 524 [95% CI, 515-533] vs 546 [95% CI, 535-557]; P = .004) or part II (mean total score, 557 [95% CI, 548-566] vs 582 [95% CI, 569-594]; P = .003).
Compared with students who were rejected by an admission process that used MMI assessment, students who were accepted scored higher on Canadian national licensing examinations.
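
The ranking rule described in this abstract (academic and nonacademic measures contributing equally, combined via z-score transformation) can be illustrated with a short sketch. This is a hypothetical reconstruction with made-up numbers, not the study's actual scoring code; the 0.5/0.25/0.25 weights are an assumption consistent with GPA carrying the academic half while essay and MMI split the nonacademic half.

    # Equally weighted z-score composite for admissions ranking (hypothetical data).
    def z_scores(values):
        n = len(values)
        mean = sum(values) / n
        sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
        return [(v - mean) / sd for v in values]

    gpa   = [3.9, 3.6, 3.8, 3.5]   # academic measure (hypothetical applicants)
    essay = [78, 85, 70, 90]       # autobiographical essay ratings
    mmi   = [5.2, 6.1, 4.8, 6.5]   # MMI performance

    z_gpa, z_essay, z_mmi = z_scores(gpa), z_scores(essay), z_scores(mmi)

    # GPA carries the academic half; essay and MMI split the nonacademic half.
    composite = [0.5 * g + 0.25 * e + 0.25 * m
                 for g, e, m in zip(z_gpa, z_essay, z_mmi)]
    ranking = sorted(range(len(composite)), key=lambda i: -composite[i])
    print(ranking)  # applicant indices, strongest composite first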
Notes
Comment In: JAMA. 2013 Mar 20;309(11):1108-9. PMID: 23512047
Comment In: JAMA. 2012 Dec 5;308(21):2250-1. PMID: 23212504
Comment In: JAMA. 2013 Mar 20;309(11):1109. PMID: 23512048
PubMed ID
23212501

Can the strength of candidates be discriminated based on ability to circumvent the biasing effect of prose? Implications for evaluation and education.

https://arctichealth.org/en/permalink/ahliterature183320
Author
Kevin W Eva
Timothy J Wood
Author Affiliation
McMaster University, Hamilton, ON, Canada. evakw@mcmaster.ca
Source
Acad Med. 2003 Oct;78(10 Suppl):S78-81
Date
Oct-2003
Language
English
Publication Type
Article
Keywords
Age Factors
Aptitude
Canada
Clinical Competence
Educational Measurement
Humans
Licensure, Medical
Terminology as Topic
Abstract
Residents have greater confidence in diagnoses when indicative features are presented in medical terminology. The current study examines the implications of this result by assessing its relationship to clinical ability.
Candidates writing the Medical Council of Canada's Qualifying Examination completed six questions in which the terminology used was manipulated. The influence of aptitude was examined by contrasting groups based on performance on the medicine section of Part I.
The difference between the candidates was greatest in the mixed conditions in which the features consistent with one diagnosis were presented in medicalese and those consistent with a second diagnosis were presented using lay terminology; weaker candidates were more biased by language than stronger candidates.
The results suggest that the language used in presenting case histories will influence the reliability of medical examinations. Furthermore, they suggest that weaker candidates might benefit from practice in making the translation between lay terminology and medicalese.
PubMed ID
14557103

Clinical practice guidelines in the intensive care unit: a survey of Canadian clinicians' attitudes.

https://arctichealth.org/en/permalink/ahliterature161579
Author
Tasnim Sinuff
Kevin W Eva
Maureen Meade
Peter Dodek
Daren Heyland
Deborah Cook
Author Affiliation
Department of Critical Care Medicine, Sunnybrook Research Institute, Sunnybrook Health Sciences Centre, and University of Toronto, Ontario M4N 3M5, Canada. taz.sinuff@sunnybrook.ca
Source
Can J Anaesth. 2007 Sep;54(9):728-36
Date
Sep-2007
Language
English
Publication Type
Article
Keywords
Attitude of Health Personnel
Canada
Guideline Adherence - standards - statistics & numerical data
Health Care Surveys
Humans
Intensive Care - standards
Practice Guidelines as Topic
Abstract
To understand clinicians' perceptions regarding practice guidelines in Canadian intensive care units (ICUs) to inform guideline development and implementation strategies.
We developed a self-administered survey instrument and assessed its clinical sensibility and reliability. The survey was mailed to ICU physicians and nurses in Canada to determine local ICU guideline development and use, and to compare physicians' and nurses' attitudes and preferences towards guidelines.
The survey was completed by 51.6% (565/1095) of potential respondents. Although fewer than half reported a formal guideline development committee in their ICU, 81.0% reported that guidelines were developed at their institutions. Of clinicians who used guidelines in the ICU, 70.2% of nurses and 42.6% of physicians reported using them frequently or always. Professional society guidelines (with or without local modification) were reportedly used in most ICUs, but physicians were more confident than nurses of their validity (P …).
PubMed ID
17766740

Comparison of aboriginal and nonaboriginal applicants for admissions on the Multiple Mini-Interview using aboriginal and nonaboriginal interviewers.

https://arctichealth.org/en/permalink/ahliterature171515
Author
Kristina Moreau
Harold Reiter
Kevin W Eva
Author Affiliation
Michael G. DeGroote School of Medicine, McMaster University, Hamilton, Ontario, Canada.
Source
Teach Learn Med. 2006;18(1):58-61
Date
2006
Language
English
Publication Type
Article
Keywords
Canada
College Admission Test
Educational Measurement
Humans
Indians, North American
Interviews as Topic - methods
Minority Groups
Prejudice
School Admission Criteria
Schools, Medical
Students, Medical
Abstract
Achievement on grade point average and the Medical College Admission Test can act as unintentional barriers to the advancement of underrepresented minorities. So long as noncognitive measures mimic random number generators, they merely perpetuate such discrepancies. As reliable noncognitive measures are developed, it is crucial to ensure that they are immune to bias, enabling them to better dilute the unintended discrimination of cognitive measures.
The Multiple Mini-Interview (MMI) is a recently developed, reliable (overall reliability = .70), noncognitive measure used for assessment of medical school applicants. Our purpose in this study was to evaluate whether any suggestion of bias existed in application of the MMI in its assessment of aboriginal medical school applicants.
In this study, each of 5 self-declared aboriginal applicants and 7 general-pool applicants experienced the same 11 vetted interview stations with the same 6 aboriginal raters and 5 nonaboriginal raters.
The Interviewer Type × Interviewee Type interaction was nonsignificant (p > .7).
Based on the results of this study, it is recommended that MMI stations be vetted by aboriginally sensitive personnel, but neither aboriginal-specific rater training nor aboriginal rater assignment is required to ensure a level playing field for the assessment of applicants' personal qualities.
PubMed ID
16354142

Do clinical clerks provide candidates with adequate formative assessment during Objective Structured Clinical Examinations?

https://arctichealth.org/en/permalink/ahliterature178793
Author
Harold I Reiter
Jack Rosenfeld
Kiruthiga Nandagopal
Kevin W Eva
Author Affiliation
McMaster University, 1280 Main Street West, Hamilton, Ontario, Canada L8S 4K1.
Source
Adv Health Sci Educ Theory Pract. 2004;9(3):189-99
Date
2004
Language
English
Publication Type
Article
Keywords
Adult
Clinical Clerkship - standards
Education, Medical, Undergraduate - methods
Educational Measurement - methods
Humans
Knowledge of Results (Psychology)
Medical History Taking
Ontario
Patient Simulation
Physician-Patient Relations
Pilot Projects
Program Development
Quality Control
Questionnaires
Reproducibility of Results
Teaching - methods
Abstract
Various research studies have examined the question of whether expert or non-expert raters, faculty or students, evaluators or standardized patients, give more reliable and valid summative assessments of performance on Objective Structured Clinical Examinations (OSCEs). Less studied has been the question of whether or not non-faculty raters can provide formative feedback that allows students to take advantage of the educational opportunity that OSCEs provide. This question is becoming increasingly important, however, as the strain on faculty resources increases.
A questionnaire was developed to assess the quality of feedback that medical examiners provide during OSCEs. It was pilot tested for reliability using video recordings of OSCE performances. The questionnaires were then used to evaluate the feedback given during an actual OSCE in which clinical clerks, residents, and faculty were used as examiners on two randomly selected test stations.
The inter-rater reliability of the 19-item feedback questionnaire was 0.69 during the pilot test, and its internal consistency was 0.90 during pilot testing and 0.95 in the real OSCE. With this form, the feedback ratings assigned to clinical clerk examiners were significantly greater than those assigned to faculty evaluators. Furthermore, performance on the same OSCE stations eight months later was not impaired by having been evaluated by student examiners.
While the evidence of mark inflation among clinical clerk examiners should be addressed through examiner training, the current results suggest that clerks are capable of giving adequate formative feedback to more junior colleagues.
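
The internal-consistency figures reported above (0.90 in the pilot, 0.95 in the real OSCE) are the kind of statistic conventionally computed as Cronbach's alpha, although the abstract does not name the formula used. A minimal sketch under that assumption, with made-up ratings:

    # Cronbach's alpha for a k-item questionnaire (hypothetical ratings).
    def cronbach_alpha(items):
        # items: one inner list of scores per questionnaire item
        k = len(items)
        n = len(items[0])
        def var(xs):
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
        sum_item_vars = sum(var(item) for item in items)
        totals = [sum(item[i] for item in items) for i in range(n)]
        return k / (k - 1) * (1 - sum_item_vars / var(totals))

    ratings = [[3, 4, 2, 5], [4, 4, 3, 5], [3, 5, 2, 4]]  # 3 items x 4 raters
    print(round(cronbach_alpha(ratings), 2))  # -> 0.9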
PubMed ID
15316270

Factors predicting competence as assessed with the written component of the Canadian Physiotherapy Competency Examination.

https://arctichealth.org/en/permalink/ahliterature146125
Author
Patricia A Miller
M Alison Cooper
Kevin W Eva
Author Affiliation
School of Rehabilitation Science, McMaster University, Hamilton, Ontario, Canada. pmiller@mcmaster.ca
Source
Physiother Theory Pract. 2010 Jan;26(1):12-21
Date
Jan-2010
Language
English
Publication Type
Article
Keywords
Canada
Clinical Competence
Databases as Topic
Educational Measurement
Humans
Licensure
Physical Therapy Specialty - education
Regression Analysis
Task Performance and Analysis
Time Factors
Writing
Abstract
Little is known about the predictors of success on the written component of the Physiotherapy Competency Examination (PCE), the requirement for licensure in most Canadian jurisdictions. The purpose of this study was to examine the relationship between educational factors and the performance of Canadian-educated physical therapists (CEPTs) and internationally educated physical therapists (IEPTs). An anonymized database comprising 24 sittings of the examination from 2001 to 2004 was used. Pearson correlation and regression analyses were conducted to examine the relationships between educational factors and scores, and ANOVA was used to compare differences in scores between candidate groups. CEPTs, first-time writers, and candidates writing in their year of graduation had the highest pass rates. The performance of both CEPTs and IEPTs did not appear to decline for candidates writing beyond the first year post-graduation; this novel finding warrants further study. Future research should include additional demographic and educational factors and address the relationship between performance on both components of the PCE and actual clinical practice.
PubMed ID
20067350

Global rating scale for the assessment of paramedic clinical competence.

https://arctichealth.org/en/permalink/ahliterature122232
Author
Walter Tavares
Sylvain Boet
Rob Theriault
Tony Mallette
Kevin W Eva
Author Affiliation
Centennial College Paramedic Program, Toronto, Ontario, Canada. wtavares@centennialcollege.ca
Source
Prehosp Emerg Care. 2013 Jan-Mar;17(1):57-67
Language
English
Publication Type
Article
Keywords
Analysis of Variance
Clinical Competence - standards
Delphi Technique
Educational Measurement - methods - standards
Emergency Medical Technicians - education - standards
Female
Focus Groups
Humans
Male
Observer Variation
Ontario
Reproducibility of Results
Task Performance and Analysis
Video Recording
Abstract
The aim of this study was to develop and critically appraise a global rating scale (GRS) for the assessment of individual paramedic clinical competence at the entry-to-practice level.
The development phase of this study involved task analysis by experts, contributions from a focus group, and a modified Delphi process using a national expert panel to establish evidence of content validity. The critical appraisal phase had two raters apply the GRS, developed in the first phase, to a series of sample performances from three groups: novice paramedic students (group 1), paramedic students at the entry-to-practice level (group 2), and experienced paramedics (group 3). Using data from this process, we examined the tool's reliability within each group and tested the discriminative validity hypothesis that higher scores would be associated with higher levels of training and experience.
The development phase resulted in a seven-dimension, seven-point adjectival GRS. The two independent blinded raters scored 81 recorded sample performances (n = 25 in group 1, n = 33 in group 2, n = 23 in group 3) using the GRS. For groups 1, 2, and 3, respectively, interrater reliability reached 0.75, 0.88, and 0.94. Intrarater reliability reached 0.94, and the internal consistency ranged from 0.53 to 0.89. Rater differences contributed 0-5.7% of the total variance. The GRS scores assigned to each group increased with level of experience, both using the overall rating (means = 2.3, 4.1, 5.0; p …).
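
The interrater reliability values above come from two raters scoring the same recorded performances. The abstract does not state which reliability coefficient was used, so the sketch below uses a plain Pearson correlation between the two raters' scores purely as an illustrative stand-in, with invented scores:

    # Two-rater agreement as a Pearson correlation (hypothetical GRS scores).
    def pearson_r(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    rater1 = [4, 5, 3, 6, 5, 2, 6]  # scores on the 7-point adjectival GRS
    rater2 = [4, 6, 3, 5, 5, 3, 6]
    print(round(pearson_r(rater1, rater2), 2))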
PubMed ID
22834959

Medical school admissions: revisiting the veracity and independence of completion of an autobiographical screening tool.

https://arctichealth.org/en/permalink/ahliterature160803
Author
Mark D Hanson
Kelly L Dore
Harold I Reiter
Kevin W Eva
Author Affiliation
HSC 3G41, McMaster University, Hamilton, ON L8N 3Z5, Canada. hansonm@mcmaster.ca
Source
Acad Med. 2007 Oct;82(10 Suppl):S8-S11
Date
Oct-2007
Language
English
Publication Type
Article
Keywords
Autobiography as Topic
Confidentiality
Humans
Interviews as Topic - standards
Ontario
Retrospective Studies
School Admission Criteria - trends
Schools, Medical - organization & administration
Task Performance and Analysis
Abstract
Some form of candidate-written autobiographical submission (ABS) is commonly used before interviews to screen candidates to medical school on the basis of their noncognitive characteristics. However, confidence in the validity of these measures has been questioned.
In 2005, applicants to McMaster University completed an off-site ABS before being interviewed and an on-site ABS at interview. Applicants answered five off-site ABS questions and eight on-site questions; the on-site questions were completed under varying time limits. ABS ratings were compared across sites and across the time allowed for completion.
Off-site ABS ratings were higher than on-site ratings, and the two sets of ratings were uncorrelated with one another. On-site ABS ratings increased with increased time allowed for completion, but the reliability of the measure was unaffected by this variable.
Confidence that candidates independently answer preinterview ABS questions is weak. To improve ABS validity, modification of the current Web-based submission format warrants consideration.
PubMed ID
17895698

Multiple mini-interviews predict clerkship and licensing examination performance.

https://arctichealth.org/en/permalink/ahliterature164192
Author
Harold I Reiter
Kevin W Eva
Jack Rosenfeld
Geoffrey R Norman
Author Affiliation
Department of Oncology, McMaster University, Hamilton, Ontario, Canada.
Source
Med Educ. 2007 Apr;41(4):378-84
Date
Apr-2007
Language
English
Publication Type
Article
Keywords
Clinical Clerkship - standards
Clinical Competence - standards
Humans
Interviews as Topic
Licensure, Medical
Ontario
School Admission Criteria
Abstract
The Multiple Mini-Interview (MMI) has previously been shown to have a positive correlation with early medical school performance. Data have matured to allow comparison with clerkship evaluations and national licensing examinations.
Of 117 applicants to the Michael G DeGroote School of Medicine at McMaster University who had scores on the MMI, traditional non-cognitive measures, and undergraduate grade point average (uGPA), 45 were admitted and followed through clerkship evaluations and Part I of the Medical Council of Canada Qualifying Examination (MCCQE). Clerkship evaluations consisted of clerkship summary ratings, a clerkship objective structured clinical examination (OSCE), and a progress test score (a 180-item multiple-choice test). The MCCQE includes subsections relevant to medical specialties and to broader legal and ethical issues (Population Health and the Considerations of the Legal, Ethical and Organisational Aspects of Medicine [CLEO/PHELO]).
In-programme, MMI was the best predictor of OSCE performance, clerkship encounter cards, and clerkship performance ratings. On the MCCQE Part I, MMI significantly predicted CLEO/PHELO scores and clinical decision-making (CDM) scores. None of these assessments were predicted by other non-cognitive admissions measures or uGPA. Only uGPA predicted progress test scores and the MCQ-based specialty-specific subsections of the MCCQE Part I.
The MMI complements pre-admission cognitive measures to predict performance outcomes during clerkship and on the Canadian national licensing examination.
PubMed ID
17430283

Predictive validity of the multiple mini-interview for selecting medical trainees.

https://arctichealth.org/en/permalink/ahliterature149279
Author
Kevin W Eva
Harold I Reiter
Kien Trinh
Parveen Wasi
Jack Rosenfeld
Geoffrey R Norman
Author Affiliation
Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada. evakw@mcmaster.ca
Source
Med Educ. 2009 Aug;43(8):767-75
Date
Aug-2009
Language
English
Publication Type
Article
Keywords
Adult
Clinical Competence - standards
Cognition
Education, Medical, Undergraduate
Educational Measurement - methods
Female
Humans
Male
Ontario
Reproducibility of Results
School Admission Criteria
Statistics as Topic
Students, Medical - psychology
Young Adult
Abstract
In this paper we report on further tests of the validity of the multiple mini-interview (MMI) selection process, comparing MMI scores with those achieved on a national high-stakes clinical skills examination. We also continue to explore the stability of candidate performance and the extent to which so-called 'cognitive' and 'non-cognitive' qualities should be deemed independent of one another.
To examine predictive validity, MMI data were matched with licensing examination data for both undergraduate (n = 34) and postgraduate (n = 22) samples of participants. To assess the stability of candidate performance, reliability coefficients were generated for eight distinct samples. Finally, correlations were calculated between 'cognitive' and 'non-cognitive' measures of ability collected in the admissions procedure, on graduation from medical school and 18 months into postgraduate training.
The median reliability of eight administrations of the MMI in various cohorts was 0.73 when 12 10-minute stations were used with one examiner per station. The correlation between performance on the MMI and the number of stations passed on an objective structured clinical examination-based licensing examination was r = 0.43 (P …).
PubMed ID
19659490
