
20 records – page 1 of 2.

Involvement in teaching improves learning in medical students: a randomized cross-over study.

https://arctichealth.org/en/permalink/ahliterature148931
Source
BMC Med Educ. 2009;9:55
Publication Type
Article
Date
2009
Author
Adam D Peets
Sylvain Coderre
Bruce Wright
Deirdre Jenkins
Kelly Burak
Shannon Leskosky
Kevin McLaughlin
Author Affiliation
Division of Critical Care Medicine, University of British Columbia, Vancouver, Canada. apeets@providencehealth.bc.ca
Language
English
Keywords
Adult
Alberta
Cross-Over Studies
Curriculum
Education, Medical, Undergraduate
Educational Measurement
Educational Status
Faculty, Medical
Female
Humans
Learning
Male
Models, Educational
Peer Group
Schools, Medical
Students, Medical
Teaching
Abstract
Peer-assisted learning has many purported benefits, including preparing students as educators, improving communication skills, and reducing faculty teaching burden. Comparatively little is known, however, about the effect of teaching on the learning outcomes of peer educators in medical education.
One hundred and thirty-five first-year medical students were randomly allocated to 11 small groups for the Gastroenterology/Hematology Course at the University of Calgary. For each of 22 sessions, two students were randomly selected from each group to act as peer educators. Students were surveyed to estimate time spent preparing as peer educator versus group member. Students then completed a 94-question end-of-course multiple-choice exam. A paired t-test was used to compare performance on clinical presentations for which students were peer educators with performance on those for which they were not.
Preparation time increased from a mean (SD) of 36 (33) minutes at baseline to 99 (60) minutes when students served as peer educators (Cohen's d = 1.3; p
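
The comparison described above pairs each student's exam performance on presentations they taught with their performance on presentations they did not teach. As an illustration only (the data and variable names below are hypothetical, not taken from the study), such a paired analysis could be run as:

    import numpy as np
    from scipy import stats

    # Hypothetical per-student mean exam scores (%): presentations the
    # student taught as peer educator vs. those taken as a group member.
    taught = np.array([82.0, 75.5, 90.0, 68.5, 77.0, 85.5, 71.0, 88.0])
    untaught = np.array([78.0, 74.0, 85.5, 66.0, 75.5, 80.0, 70.5, 84.0])

    t_stat, p_value = stats.ttest_rel(taught, untaught)

    # Cohen's d for paired data: mean difference / SD of the differences.
    diff = taught - untaught
    cohens_d = diff.mean() / diff.std(ddof=1)
    print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")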
Notes
Cites: Med Educ. 2000 Jan;34(1):23-9. PMID 10607275
Cites: Teach Learn Med. 2004 Winter;16(1):60-3. PMID 14987176
Cites: Med Educ. 1994 Jul;28(4):284-9. PMID 7861998
Cites: Acad Med. 1995 Mar;70(3):186-93. PMID 7873005
Cites: Med Teach. 2005 Sep;27(6):521-6. PMID 16199359
Cites: Med Teach. 2007 Sep;29(6):558-65. PMID 17922358
Cites: Med Teach. 2007 Sep;29(6):527-45. PMID 17978966
Cites: Med Teach. 2007 Sep;29(6):553-7. PMID 17978968
Cites: Med Teach. 2009 Apr;31(4):322-4. PMID 18937095
Cites: Med Teach. 2007 Sep;29(6):572-6. PMID 17917985
Cites: Med Teach. 2007 Sep;29(6):591-9. PMID 17922354
Cites: Teach Learn Med. 2007 Summer;19(3):216-20. PMID 17594215
PubMed ID
19706190

The effect of differential rater function over time (DRIFT) on objective structured clinical examination ratings.

https://arctichealth.org/en/permalink/ahliterature148443
Source
Med Educ. 2009 Oct;43(10):989-92
Publication Type
Article
Date
Oct-2009
Author
Kevin McLaughlin
Martha Ainslie
Sylvain Coderre
Bruce Wright
Claudio Violato
Author Affiliation
Office of Undergraduate Medical Education, University of Calgary, Calgary, Alberta, Canada. kmclaugh@ucalgary.ca
Language
English
Keywords
Alberta
Clinical Competence - standards
Clinical Medicine - education
Education, Medical, Undergraduate - standards
Educational Measurement - methods - standards
Humans
Observer Variation
Time Factors
Abstract
Despite the impartiality implied in its title, the objective structured clinical examination (OSCE) is vulnerable to systematic biases, particularly those affecting raters' performance. In this study our aim was to examine OSCE ratings for evidence of differential rater function over time (DRIFT), and to explore potential causes of DRIFT.
We studied ratings for 14 internal medicine resident doctors over the course of a single formative OSCE comprising ten 12-minute stations, each with a single rater. We evaluated the association between time-slot and rating for a station. We also explored a possible interaction between time-slot and station difficulty, which would support the hypothesis that rater fatigue causes DRIFT, and considered 'warm-up' as an alternative explanation for DRIFT by repeating our analysis after excluding the first two OSCE stations.
Time-slot was positively associated with rating on a station (regression coefficient 0.88, 95% confidence interval [CI] 0.38-1.38; P = 0.001). There was an interaction between time-slot and station difficulty: for the more difficult stations the regression coefficient for time-slot was 1.24 (95% CI 0.55-1.93; P = 0.001) compared with 0.52 (95% CI -0.08 to 1.13; P = 0.09) for the less difficult stations. Removing the first two stations from our analyses did not correct DRIFT.
Systematic biases, such as DRIFT, may compromise internal validity in an OSCE. Further work is needed to confirm this finding and to explore whether DRIFT also affects ratings on summative OSCEs. If confirmed, the factors contributing to DRIFT, and ways to reduce these, should then be explored.
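
As a sketch of how a time-slot effect and its interaction with station difficulty can be modeled (the column names and values below are hypothetical; the paper does not publish code):

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format OSCE data: one row per candidate-station rating.
    df = pd.DataFrame({
        "rating":    [62, 65, 70, 58, 61, 66, 72, 75, 68, 64, 71, 77],
        "time_slot": [1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3],  # order seen by rater
        "difficult": [1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0],  # 1 = harder station
    })

    # A positive time_slot coefficient is consistent with DRIFT; a significant
    # time_slot:difficult term suggests larger drift on the harder stations.
    model = smf.ols("rating ~ time_slot * difficult", data=df).fit()
    print(model.summary())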
PubMed ID
19769648

Predicting performance on the Medical Council of Canada Qualifying Exam Part II.

https://arctichealth.org/en/permalink/ahliterature108748
Source
Teach Learn Med. 2013;25(3):237-41
Publication Type
Article
Date
2013
Author
Wayne Woloschuk
Kevin McLaughlin
Bruce Wright
Author Affiliation
Undergraduate Medical Education, University of Calgary, Calgary, Alberta, Canada. woloschu@ucalgary.ca
Language
English
Keywords
Canada
Clinical Competence - standards
Education, Medical, Undergraduate - standards
Educational Measurement - methods
Educational Status
Female
Humans
Internship and Residency - standards
Male
Predictive value of tests
Abstract
Being able to predict which residents will likely be unsuccessful on high-stakes exams would allow residency programs to provide early intervention.
To determine whether measures of clinical performance in clerkship (in-training evaluation reports) and first year of residency (program director ratings) predict pass-fail performance on the Medical Council of Canada Qualifying Exam Part II (MCCQE Part II).
Residency program directors assessed the performance of our medical school graduates (Classes of 2004-2007) at the end of the first postgraduate year. We subsequently collected clerkship in-training evaluation reports for these graduates. Using a neutral third party and unique codes, an anonymous dataset containing clerkship, residency, and MCCQE Part II performance scores was created for our use. Data were analyzed using descriptive statistics, correlations, receiver operating characteristic analysis, and the Youden index. Regression was also performed to further study the relationships among the variables.
Complete data were available for 78.6% of the graduates. Of these participants, 94% passed the licensing exam on their first attempt. Receiver operating characteristic analysis revealed that the area under the curve for clerkship in-training evaluation reports was 0.67 (p
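
The receiver operating characteristic (ROC) analysis and Youden index used above can be sketched as follows; the scores and pass/fail labels are invented for illustration:

    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    # Hypothetical data: clerkship ITER score per graduate and first-attempt
    # MCCQE Part II result (1 = pass, 0 = fail).
    iter_score = np.array([3.1, 3.8, 2.9, 4.2, 3.5, 2.5, 4.0, 3.3, 2.7, 3.9])
    passed = np.array([1, 1, 0, 1, 1, 0, 1, 1, 0, 1])

    auc = roc_auc_score(passed, iter_score)
    fpr, tpr, thresholds = roc_curve(passed, iter_score)

    # Youden index J = sensitivity + specificity - 1 = TPR - FPR; the
    # threshold maximizing J is the optimal pass-fail cut-point.
    j = tpr - fpr
    best_cut = thresholds[np.argmax(j)]
    print(f"AUC = {auc:.2f}, optimal cut-point = {best_cut:.2f}")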
PubMed ID
23848331

Is undergraduate performance predictive of postgraduate performance?

https://arctichealth.org/en/permalink/ahliterature142726
Source
Teach Learn Med. 2010 Jul;22(3):202-4
Publication Type
Article
Date
Jul-2010
Author
Wayne Woloschuk
Kevin McLaughlin
Bruce Wright
Author Affiliation
University of Calgary, Calgary, Alberta, Canada. woloschu@ucalgary.ca
Language
English
Keywords
Canada
Clinical Clerkship - statistics & numerical data
Clinical Competence
Education, Medical, Graduate - statistics & numerical data
Education, Medical, Undergraduate - statistics & numerical data
Educational Measurement
Educational Status
Humans
Predictive value of tests
Prospective Studies
Schools, Medical
Statistics as Topic
Students, Medical
Abstract
The continuity of undergraduate to postgraduate training suggests that performance in medical school should predict performance later in residency.
The goal is to determine whether undergraduate performance is predictive of postgraduate performance.
Residency program directors assessed the performance of medical school graduates (Classes of 2004-2006) at the end of the first postgraduate year. Measures of undergraduate performance were retrieved, including grade point averages, clerkship in-training evaluation reports, and the total score on the Medical Council of Canada Part 1 exam.
Complete data were available for 242 (81.5%) graduates. Postgraduate performance comprised two reliable factors (clinical acumen and human sensitivity) that together explained 78% of the variance. Correlations between the undergraduate measures and the two postgraduate factors were low (.03-.31).
Measures of undergraduate performance appear to be poor predictors of performance in residency, which itself comprised two primary dimensions (clinical acumen and human sensitivity).
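
A minimal sketch of this two-step analysis (extract latent performance factors from residency rating items, then correlate an undergraduate measure with the factor scores) on simulated data; all names and values are hypothetical:

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    n = 100

    # Hypothetical residency rating items (e.g., knowledge, judgement,
    # communication, professionalism) and one undergraduate measure (GPA).
    items = rng.normal(size=(n, 4))
    gpa = rng.normal(size=n)

    # Extract two latent factors (cf. clinical acumen, human sensitivity).
    fa = FactorAnalysis(n_components=2, random_state=0)
    factor_scores = fa.fit_transform(items)

    # Correlate the undergraduate measure with each postgraduate factor.
    for k in range(2):
        r = np.corrcoef(gpa, factor_scores[:, k])[0, 1]
        print(f"GPA vs factor {k + 1}: r = {r:.2f}")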
PubMed ID
20563941

Rater variables associated with ITER ratings.

https://arctichealth.org/en/permalink/ahliterature122759
Source
Adv Health Sci Educ Theory Pract. 2013 Oct;18(4):551-7
Publication Type
Article
Date
Oct-2013
Author
Michael Paget
Caren Wu
Joann McIlwrick
Wayne Woloschuk
Bruce Wright
Kevin McLaughlin
Author Affiliation
Office of Undergraduate Medical Education, Health Sciences Centre, University of Calgary, 3330 Hospital Drive NW, Calgary, AB, T2N 4N1, Canada.
Language
English
Keywords
Alberta
Clinical Clerkship - organization & administration
Clinical Competence - standards
Competency-Based Education
Cross-Sectional Studies
Education, Medical, Undergraduate
Educational Measurement - methods - standards
Humans
Reproducibility of Results
Students, Medical
Abstract
Advocates of holistic assessment consider the in-training evaluation report (ITER) a more authentic way to assess performance. But this assessment format is subjective and, therefore, susceptible to rater bias. Here our objective was to study the association between rater variables and ITER ratings. In this observational study our participants were clerks at the University of Calgary and preceptors who completed online ITERs between February 2008 and July 2009. Our outcome variable was the global rating on the ITER (rated 1-5), and we used a generalized estimating equation model to identify variables associated with this rating. Students were rated "above expected level" or "outstanding" on 66.4% of 1050 online ITERs completed during the study period. Two rater variables attenuated ITER ratings: the log-transformed time taken to complete the ITER [β = -0.06, 95% confidence interval (-0.10, -0.02), p = 0.002], and the number of ITERs that a preceptor completed over the study period [β = -0.008 (-0.02, -0.001), p = 0.02]. In this study we found evidence of a leniency bias that resulted in two thirds of students being rated above the expected level of performance. This leniency bias appeared to be attenuated by delay in ITER completion, and was also blunted in preceptors who rated more students. As all biases threaten the internal validity of the assessment process, further research is needed to confirm these and other sources of rater bias in ITER ratings, and to explore ways of limiting their impact.
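
Because each preceptor completes many ITERs, ratings are clustered within rater, which is why the authors use a generalized estimating equation (GEE). A minimal sketch on simulated data (variable names are hypothetical):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 200

    # Hypothetical ITERs: global rating (1-5), minutes taken to complete the
    # form, and forms completed by that preceptor, clustered by preceptor.
    df = pd.DataFrame({
        "rating": rng.integers(2, 6, size=n).astype(float),
        "completion_min": rng.uniform(1.0, 60.0, size=n),
        "n_completed": rng.integers(1, 30, size=n),
        "preceptor": rng.integers(0, 40, size=n),
    })

    # GEE with an exchangeable working correlation within preceptor; the
    # log-transformed completion time mirrors the predictor in the abstract.
    model = smf.gee(
        "rating ~ np.log(completion_min) + n_completed",
        groups="preceptor",
        data=df,
        family=sm.families.Gaussian(),
        cov_struct=sm.cov_struct.Exchangeable(),
    ).fit()
    print(model.summary())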
PubMed ID
22777161

A prospective randomized trial of content expertise versus process expertise in small group teaching.

https://arctichealth.org/en/permalink/ahliterature140069
Source
BMC Med Educ. 2010;10:70
Publication Type
Article
Date
2010
Author
Adam D Peets
Lara Cooke
Bruce Wright
Sylvain Coderre
Kevin McLaughlin
Author Affiliation
Division of Critical Care Medicine, Centre for Health Education Scholarship and Centre for Health Evaluation and Outcome Sciences, University of British Columbia, Vancouver, Canada. apeets@providencehealth.bc.ca
Language
English
Keywords
Alberta
Analysis of Variance
Clinical Competence
Curriculum
Education, Medical - methods
Educational Measurement
Educational Status
Faculty, Medical
Health Knowledge, Attitudes, Practice
Humans
Learning
Linear Models
Problem-Based Learning
Professional Competence
Prospective Studies
Students, Medical
Teaching
Abstract
Effective teaching requires an understanding of both what (content knowledge) and how (process knowledge) to teach. While previous studies involving medical students have compared preceptors with greater or lesser content knowledge, it is unclear whether process expertise can compensate for deficient content expertise. Therefore, the objective of our study was to compare the effect of preceptors with process expertise to those with content expertise on medical students' learning outcomes in a structured small group environment.
One hundred and fifty-one first year medical students were randomized to 11 groups for the small group component of the Cardiovascular-Respiratory course at the University of Calgary. Each group was then block randomized to one of three streams for the entire course: tutoring exclusively by physicians with content expertise (n = 5), tutoring exclusively by physicians with process expertise (n = 3), and tutoring by content experts for 11 sessions and process experts for 10 sessions (n = 3). After each of the 21 small group sessions, students evaluated their preceptors' teaching with a standardized instrument. Students' knowledge acquisition was assessed by an end-of-course multiple choice (EOC-MCQ) examination.
Students rated the process experts significantly higher on each of the instrument's 15 items, including the overall rating. Students' mean score (±SD) on the EOC-MCQ exam was 76.1% (8.1) for groups taught by content experts, 78.2% (7.8) for the combination groups, and 79.5% (9.2) for the process expert groups (p = 0.11). By linear regression, student performance was higher among those taught by process experts (regression coefficient 2.7 [0.1, 5.4], p
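
The final comparison regresses exam score on tutoring stream. A sketch with invented scores (the coding and data below are hypothetical):

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical per-student EOC-MCQ scores (%) by tutoring stream.
    df = pd.DataFrame({
        "score":  [74, 77, 79, 75, 78, 80, 76, 81, 83, 72, 79, 84],
        "stream": ["content", "content", "content", "mixed", "mixed", "mixed",
                   "process", "process", "process", "content", "mixed", "process"],
    })

    # Treatment coding with content experts as the reference level; the
    # "process" coefficient estimates the score gain from process expertise.
    model = smf.ols("score ~ C(stream, Treatment(reference='content'))",
                    data=df).fit()
    print(model.params)
    print(model.pvalues)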
Notes
Cites: Acad Med. 1995 Aug;70(8):708-14. PMID 7646747
Cites: Acad Med. 1995 Mar;70(3):186-93. PMID 7873005
Cites: Med Educ. 1998 May;32(3):255-61. PMID 9743778
Cites: Proc R Soc Med. 1965 May;58:295-300. PMID 14283879
Cites: Med Teach. 2009 Apr;31(4):322-4. PMID 18937095
Cites: Med Educ. 2001 Jan;35(1):22-6. PMID 11123591
Cites: Med Educ. 2003 Jan;37(1):6-14. PMID 12535110
Cites: Acad Med. 2004 Oct;79(10 Suppl):S70-81. PMID 15383395
Cites: Acad Med. 1991 May;66(5):298-300. PMID 2025366
Cites: Acad Med. 1992 Jul;67(7):465-9. PMID 1616563
Cites: Acad Med. 1992 Jul;67(7):470-4. PMID 1616564
Cites: Acad Med. 1993 Oct;68(10):784-91. PMID 8397613
Cites: Acad Med. 1994 Aug;69(8):656-62. PMID 8054115
Cites: Acad Med. 1994 Aug;69(8):663-9. PMID 8054116
Cites: Acad Med. 1998 Jun;73(6):688-95. PMID 9653408
PubMed ID
20946674

Does blueprint publication affect students' perception of validity of the evaluation process?

https://arctichealth.org/en/permalink/ahliterature174653
Source
Adv Health Sci Educ Theory Pract. 2005;10(1):15-22
Publication Type
Article
Date
2005
Author
Kevin McLaughlin
Sylvain Coderre
Wayne Woloschuk
Henry Mandin
Author Affiliation
University of Calgary, Alberta, Canada. kevin.mclaughlin@calgaryhealthregion.ca
Language
English
Keywords
Alberta
Curriculum
Education, Medical, Undergraduate
Educational Measurement
Evaluation Studies as Topic
Humans
Personal Satisfaction
Reproducibility of Results
Students, Medical - psychology
Abstract
A major goal of any evaluation is to demonstrate content validity, which considers both curricular content and the ability expected of learners. Whether evaluation blueprints should be published, and how transparent they should be, remains controversial.
To examine the effect of blueprint publication on students' perceptions of the validity of the evaluation process.
This study examined students' attitudes towards the Renal Course evaluation before and after blueprint publication. There was no significant change in the course objectives, blueprint or evaluation between the two time periods. Students' attitudes were evaluated using a questionnaire containing four items related to evaluation. Also collected were the overall course ratings, minimum performance level (MPL) for evaluations and students' performance on each exam.
There were no significant differences in the MPL or evaluation scores between the two time periods. After blueprint publication, a significantly greater proportion of students perceived that the Renal Course evaluation was a fair test and that it reflected both important subject matter and the delivered curriculum. This increase in satisfaction with the evaluation did not appear to reflect overall satisfaction with the course, as there was a trend towards reduced overall course satisfaction.
Publication of the evaluation blueprint appears to improve students' perceptions of the validity of the evaluation process. Further studies are required to identify the reasons for this attitude change. We propose that blueprint transparency drives both instructors' teaching and students' learning towards key educational elements.
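
The before/after comparison of the proportion of students rating the evaluation as fair can be tested with a two-proportion z-test (the abstract does not name the exact test; the counts below are invented):

    import numpy as np
    from statsmodels.stats.proportion import proportions_ztest

    # Hypothetical counts: students rating the evaluation "fair",
    # before vs. after publication of the blueprint.
    agreed = np.array([70, 92])    # agreed before, after
    asked = np.array([100, 105])   # respondents before, after

    z, p = proportions_ztest(agreed, asked)
    print(f"z = {z:.2f}, p = {p:.4f}")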
PubMed ID
15912281

Using a conceptual framework during learning attenuates the loss of expert-type knowledge structure.

https://arctichealth.org/en/permalink/ahliterature168280
Source
BMC Med Educ. 2006;6:37
Publication Type
Article
Date
2006
Author
Kerry Novak
Henry Mandin
Elizabeth Wilcox
Kevin McLaughlin
Author Affiliation
University of Calgary, Calgary, Alberta, Canada. knovok@ucalgary.ca
Language
English
Keywords
Adult
Alberta
Alkalosis - diagnosis - physiopathology
Clinical Competence
Concept Formation
Education, Medical, Undergraduate - methods
Educational Measurement
Humans
Knowledge
Logistic Models
Memory
Nephrology
Problem-Based Learning
Prospective Studies
Psychology, Educational
Schools, Medical
Students, Medical - psychology
Time Factors
Abstract
During evolution from novice to expert, knowledge structure develops into an abridged network organized around pathophysiological concepts. The objectives of this study were to examine the change in knowledge structure in medical students in one year and to investigate the association between the use of a conceptual framework (diagnostic scheme) and long-term knowledge structure.
Medical students' knowledge structure of metabolic alkalosis was studied after instruction and again one year later using concept sorting. Knowledge structure was labeled 'expert-type' if students shared ≥2 concepts with experts and 'novice-type' if they shared
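
The labeling rule quoted above reduces to counting the overlap between a student's sorted concepts and the experts' concepts. A sketch with made-up concept labels (the actual concepts used in the study are not listed here):

    # Hypothetical expert concepts for sorting metabolic alkalosis findings.
    expert_concepts = {"volume contraction", "mineralocorticoid excess",
                       "chloride depletion", "alkali load"}

    def knowledge_structure(student_concepts: set) -> str:
        """Label a sort 'expert-type' if it shares >= 2 concepts with experts."""
        shared = len(student_concepts & expert_concepts)
        return "expert-type" if shared >= 2 else "novice-type"

    print(knowledge_structure({"volume contraction", "chloride depletion"}))
    print(knowledge_structure({"vomiting", "diuretic use"}))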
Notes
Cites: JAMA. 2000 Sep 6;284(9):1105-10. PMID 10974689
Cites: Med Teach. 2002 Jan;24(1):90-9. PMID 12098465
Cites: Acad Med. 2002 Aug;77(8):831-6. PMID 12176700
Cites: Adv Health Sci Educ Theory Pract. 2004;9(3):225-40. PMID 15316273
Cites: Acad Med. 1990 Oct;65(10):611-21. PMID 2261032
Cites: Adv Health Sci Educ Theory Pract. 2007 Aug;12(3):265-78. PMID 17072769
Cites: Mem Cognit. 1991 Nov;19(6):543-57. PMID 1758301
Cites: Acad Med. 1995 Mar;70(3):186-93. PMID 7873005
Cites: Acad Med. 1997 Mar;72(3):173-9. PMID 9075420
Cites: Acad Med. 1996 Sep;71(9):988-1001. PMID 9125988
Cites: Med Educ. 2005 Jan;39(1):107-12. PMID 15612907
Cites: Acad Med. 1991 Sep;66(9 Suppl):S70-2. PMID 1930535
PubMed ID
16848903

The effect of gender interactions on students' physical examination ratings in objective structured clinical examination stations.

https://arctichealth.org/en/permalink/ahliterature140380
Source
Acad Med. 2010 Nov;85(11):1772-6
Publication Type
Article
Date
Nov-2010
Author
Julie A Carson
Adam Peets
Vincent Grant
Kevin McLaughlin
Author Affiliation
Department of Pathology and Laboratory Medicine, Faculty of Medicine, University of Calgary, Calgary, Alberta, Canada. julie.carson@cls.ab.ca
Language
English
Keywords
Adult
British Columbia
Clinical Competence
Education, Medical, Undergraduate - standards
Educational Measurement - methods
Female
Humans
Linear Models
Male
Patient Simulation
Physical Examination - standards
Sex Factors
Students, Medical
Abstract
Previous studies have reached a variety of conclusions regarding the effect of gender on performance in objective structured clinical examinations (OSCEs). Most measured the effect on students' overall OSCE score. The authors of this study evaluated the effect of gender on the scores of specific physical examination OSCE stations, both "gender-sensitive" and "gender-neutral."
In 2008, the authors collected scores for 138 second-year medical students at the University of Calgary who underwent a seven-station OSCE. Two stations, the precordial and respiratory exams, were considered gender-sensitive. Multiple linear regression was used to explore the effect of students', standardized patients' (SPs'), and raters' genders on the students' scores.
All 138 students (69 female) completed the OSCE and were included in the analyses. The mean scores (SD) for the two stations involving examination of the chest were higher for female than for male students (83.2% [15.5] versus 78.3% [15.8], respectively, d = 0.3, P = .009). There was a significant interaction between student and SP gender (P = .02). In the stratified analysis, female students were rated significantly higher than male students at stations with female SPs (85.4% [15.5] versus 76.6% [16.5], d = 0.6, P = .004) but not at stations with male SPs (80.2% [15.0] versus 80.0% [15.0], P = 1.0).
These results suggest student and SP genders interact to affect OSCE scores at stations that require examination of the chest. Further investigations are warranted to ensure that the OSCE is an equal experience for all students.
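
A sketch of the interaction analysis: station score regressed on student, SP, and rater gender with a student-by-SP interaction term. The data are simulated and the variable names hypothetical:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 276  # e.g., 138 students x 2 chest-exam stations

    # Hypothetical station-level data; 1 = female, 0 = male.
    df = pd.DataFrame({
        "student_f": rng.integers(0, 2, size=n),
        "sp_f": rng.integers(0, 2, size=n),
        "rater_f": rng.integers(0, 2, size=n),
    })
    df["score"] = 78 + 7 * df["student_f"] * df["sp_f"] + rng.normal(0, 15, size=n)

    # The student_f:sp_f term tests whether the effect of student gender
    # depends on the SP's gender, as reported in the abstract.
    model = smf.ols("score ~ student_f * sp_f + rater_f", data=df).fit()
    print(model.summary().tables[1])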
PubMed ID
20881825

Teaching in small portions dispersed over time enhances long-term knowledge retention.

https://arctichealth.org/en/permalink/ahliterature144959
Source
Med Teach. 2010;32(3):250-5
Publication Type
Article
Date
2010
Author
Maitreyi Raman
Kevin McLaughlin
Claudio Violato
Alaa Rostom
J P Allard
Sylvain Coderre
Author Affiliation
University of Calgary, Calgary, Alberta T2N4N1, Canada. mkothand@ucalgary.ca
Language
English
Keywords
Alberta
Analysis of Variance
Cognition
Curriculum
Educational Measurement
Faculty, Medical
Gastroenterology - education - statistics & numerical data
Humans
Knowledge
Learning
Nutrition Therapy
Ontario
Prospective Studies
Psychometrics
Schools, Medical
Teaching
Time Factors
Abstract
A primary goal of education is to promote long-term knowledge storage and retrieval.
A prospective interventional study design was used to investigate our research question: does a dispersed curriculum promote better short- and long-term retention than a massed course?
Participants included 20 gastroenterology residents from the University of Calgary (N = 10) and the University of Toronto (N = 10). Participants completed a baseline test of nutrition knowledge. The nutrition course was delivered to University of Calgary residents as four hours of teaching, one hour weekly over 4 consecutive weeks (dispersed delivery, DD). At the University of Toronto the course was taught in a single 4-hour academic half-day (massed delivery, MD). Post-curriculum tests were administered at 1 week and at 3 months to assess knowledge retention.
The baseline scores were 46.39 ± 6.14% and 53.75 ± 10.69% in the DD and MD groups, respectively. The 1-week post-test scores for the DD and MD groups were 81.67 ± 8.57%, p
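
The group-by-time comparison can be sketched with per-time-point Welch t-tests on simulated scores (the record's keyword list also mentions analysis of variance; the data below are invented):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Hypothetical 3-month retention scores (%) for 10 residents per group.
    dd_3mo = rng.normal(loc=74, scale=8, size=10)  # dispersed delivery
    md_3mo = rng.normal(loc=64, scale=9, size=10)  # massed delivery

    # Welch's t-test (no equal-variance assumption) comparing retention.
    t, p = stats.ttest_ind(dd_3mo, md_3mo, equal_var=False)
    print(f"3-month retention: t = {t:.2f}, p = {p:.3f}")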
PubMed ID
20218841
