Shared learning activities aim to enhance the collaborative skills of health students and professionals in relation to both colleagues and patients. The Readiness for Interprofessional Learning Scale (RIPLS) is used to assess such skills. The aim of this study was to validate a Danish four-subscale version of the RIPLS in a sample of 370 health-care students and 200 health professionals.
The questionnaire was translated following a two-step process, including forward and backward translations, and a pilot test. Internal consistency and test-retest reliability were assessed using a web-based questionnaire.
The questionnaire was completed by 370 health-care students and 200 health professionals (test), and the retest was completed by 203 health professionals. A full data set of first-time responses was generated from the 570 students and professionals at baseline (test). Good internal consistency was found between the items of the Positive Professional Identity subscale (Q13-Q16), with factor loadings between 0.61 and 0.72. The confirmatory factor analyses revealed 11 items with factor loadings above 0.50, 18 below 0.50, and no items below 0.20. Weighted kappa values were between 0.20 and 0.40 for seven items, between 0.40 and 0.60 for 16 items, and between 0.60 and 0.80 for six items; all showed p-values below 0.001.
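The test-retest agreement reported above is weighted Cohen's kappa, which discounts disagreements by their distance on the ordinal response scale. As a rough illustration only (not the authors' code; the category codes below are hypothetical), a linear-weighted kappa can be computed like this:

```python
import numpy as np

def weighted_kappa(rater1, rater2, n_categories, weights="linear"):
    """Weighted Cohen's kappa for test-retest agreement on an ordinal scale.

    rater1/rater2: integer category codes (0 .. n_categories-1) given by the
    same respondents at test and at retest.
    """
    obs = np.zeros((n_categories, n_categories))
    for a, b in zip(rater1, rater2):
        obs[a, b] += 1          # observed cross-tabulation of the two ratings
    obs /= obs.sum()
    # expected proportions if test and retest responses were independent
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    # disagreement weights grow with the distance between categories
    i, j = np.indices((n_categories, n_categories))
    dist = np.abs(i - j) if weights == "linear" else (i - j) ** 2
    w = dist / dist.max()
    return 1.0 - (w * obs).sum() / (w * exp).sum()

# perfect test-retest agreement gives kappa = 1.0
print(weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 4], 5))  # → 1.0
```

Values between 0.40 and 0.60, as found for most items here, are conventionally read as moderate agreement.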
Strong internal consistency was found for both populations. The Danish RIPLS proved a stable and reliable instrument for the Teamwork and Collaboration, Negative Professional Identity, and Positive Professional Identity subscales, while the Roles and Responsibility subscale showed some limitations. The reason behind these limitations is unclear.
A written test of clinical decision-making, the Key Features Examination, was developed for use in clinical clerkships.
Following the guidelines provided by the Medical Council of Canada, a Key Features Examination was developed and implemented in an internal medicine clinical clerkship, during the 1998/99 clerkship year. The reliability and concurrent validity of the exam were assessed.
A 2-hour examination containing 15 key-feature problems was administered to 101 students during six consecutive internal medicine clerkship rotations. The reliability of the exam, calculated as Cronbach's alpha, was 0.49. The exam showed modest correlation with other measures of knowledge and clinical performance.
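The reliability figure of 0.49 is Cronbach's alpha across the 15 problems. As a minimal sketch of how that statistic is obtained from an examinees-by-items score matrix (the data below are hypothetical, not the study's):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (examinees x items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    # sum of the individual item variances
    item_variances = scores.var(axis=0, ddof=1).sum()
    # variance of each examinee's total score
    total_variance = scores.sum(axis=1).var(ddof=1)
    return n_items / (n_items - 1) * (1 - item_variances / total_variance)

# three examinees scored on two perfectly consistent items -> alpha = 1.0
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```

An alpha of 0.49 for a 15-problem exam indicates that a longer exam (more key-feature problems) would likely be needed for high-stakes score interpretation.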
The Key Features Examination is a feasible evaluation tool that may be implemented as a component of student assessment during a clinical clerkship.
There is a growing need for patient education and an evaluation of its outcomes.
The aim of this study was to compare the knowledge of ambulatory orthopaedic surgery patients receiving Internet-based education with that of patients receiving face-to-face education with a nurse. The following hypothesis was proposed: Internet-based patient education (experiment) is as effective as face-to-face education with a nurse (control) in increasing patients' level of knowledge and sufficiency of knowledge. In addition, correlations between demographic variables and knowledge were tested.
The patients were randomized to either an experiment group (n = 72) or a control group (n = 75). Empirical data were collected with two instruments.
Patients in both groups improved their knowledge during their care. Patients in the experiment group improved their knowledge level significantly more overall than patients in the control group. There were no differences between the groups in patients' sufficiency of knowledge. Knowledge correlated in particular with patients' age, gender, and earlier ambulatory surgeries.
In conclusion, positive results concerning patients' knowledge could be achieved with Internet-based education. The Internet is a viable method of patient education in ambulatory care.
The aim of this survey-based study was to obtain detailed information on the faculty currently responsible for teaching radiation biology courses to radiation oncology residents in the United States and Canada.
Between March and December 2007, a survey questionnaire was sent to the faculty members having primary responsibility for teaching radiation biology to residents in 93 radiation oncology residency programs in the United States and Canada.
The responses to this survey document the aging of the faculty who have primary responsibility for teaching radiation biology to radiation oncology residents. The survey found a dramatic decline with time in the percentage of educators whose graduate training was in radiation biology. A significant number of the educators responsible for teaching radiation biology were not fully acquainted with the radiation sciences, either through training or practical application. In addition, many were unfamiliar with some of the organizations setting policies and requirements for resident education. Freely available tools, such as the American Society for Radiation Oncology (ASTRO) Radiation and Cancer Biology Practice Examination and Study Guides, were widely used by residents and educators. Consolidation of resident courses or use of a national radiation biology review course was viewed as unlikely by most programs.
A high priority should be given to the development of comprehensive teaching tools to assist those individuals who have responsibility for teaching radiation biology courses but who do not have an extensive background in critical areas of radiobiology related to radiation oncology. These findings also suggest a need for new graduate programs in radiobiology.
Written daily reflections during clinical practice on birthing units have been used for several years in midwifery education at Lund University, Sweden. However, the usefulness of these reflections for evaluating progression in students' learning and professional development has to date not been examined. In order to analyse the written reflections, two taxonomies, developed by Bloom and by Pettersen, were applied to the texts. Progression in the professional development of midwifery students can be seen in the levels of complexity in the cognitive and psychomotor learning areas, and also in the descriptions of learning situations. Progression can be seen from a basic description of facts in simple situations at the beginning of the students' practice to a complex description of complicated situations towards the end of the practice. Written daily reflections appear to be a suitable method for helping students to reflect in a structured way, thereby supporting their professional development. Reflections can also help clinical supervisors to understand the needs of individual students and to support their knowledge acquisition. Daily written reflections on clinical practice could be of use in other health education programs.
There are several reasons for using global ratings in addition to checklists for scoring objective structured clinical examination (OSCE) stations. However, there has been little evidence collected regarding the validity of these scales. This study assessed the construct validity of an analytic global rating with 4 component subscales: empathy, coherence, verbal and non-verbal expression.
A total of 19 Year 3 and 38 Year 4 clinical clerks were scored on content checklists and these global ratings during a 10-station OSCE. T-tests were used to assess differences between groups for overall checklist and global scores, and for each of the 4 subscales.
The mean global rating was significantly higher for senior clerks (75.5% versus 71.3%, t(55) = 2.12, P
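The reported t(55) is consistent with a pooled-variance independent-samples t-test on the two cohorts (19 + 38 - 2 = 55 degrees of freedom). A minimal sketch of that test follows; the scores are hypothetical, not the study data:

```python
import math

def pooled_t(x, y):
    """Pooled-variance independent-samples t statistic and degrees of freedom."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    # sums of squared deviations within each group
    ssx = sum((v - mx) ** 2 for v in x)
    ssy = sum((v - my) ** 2 for v in y)
    pooled_var = (ssx + ssy) / (nx + ny - 2)
    se = math.sqrt(pooled_var * (1 / nx + 1 / ny))
    return (mx - my) / se, nx + ny - 2

# hypothetical global-rating scores for senior vs. junior clerks
t, df = pooled_t([76, 74, 77, 75], [71, 72, 70, 73])
print(round(t, 2), df)
```

In practice `scipy.stats.ttest_ind` performs the same pooled test (its default `equal_var=True`) and also returns the p-value.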
The validity and reliability of high-stakes examinations such as those used by national, regional and state or provincial dental boards are under intense scrutiny by candidates, dental schools, dental educators, dental associations, and state or provincial dental boards.
The authors followed the progress of 1,063 candidates from nonaccredited dental programs who began the National Dental Examining Board of Canada's (NDEB) clinical examinations between January 1996 and November 1999, through the administration of the examination's final component in December 2003, to examine the utility and validity of the patient-based component of the examination process.
The results showed that the first three components of the examination were effective in screening out candidates who were not adequately prepared to take the patient-based component. Only 12 (1.1 percent) of the candidates failed the maximum allowed three attempts to pass the patient-based component.
The results demonstrated that the patient-based component did not contribute to the overall examination validity or decision making and did not prevent candidates from obtaining certification.
Owing to this lack of utility, the associated costs, and ethical concerns, the NDEB eliminated the patient-based component of the examination and replaced it with the requirement to complete an accredited qualifying/degree-completion dental program, followed by completion of the NDEB's written and objective structured clinical examination components.
This paper reports a study to develop further an existing clinical assessment form and to identify new aspects of assessment, relevant to the nursing profession of the future, for inclusion in the form.
Since nursing education became part of the higher education system, the assessment of the clinical periods of the programme has become more complicated and the requirements more demanding. Changes in the health care sector, such as demographic changes and shorter hospitalization, create demands on the independent nursing role of the future. Educational documents, such as an assessment form, must continuously be updated and adapted to changes in society.
A Delphi study concerning the content of this assessment form was carried out using two rounds. Through this process, an expert panel gave their opinions about the form and possible changes to it.
There was general acceptance of the content in the current assessment form. Suggested changes were the addition of two factors concerning collaboration with the family and society, and development of the student's independence. Two new area headings were suggested: one about ability to use the nursing process, and the other about development of a professional stance.
The suggested changes in the assessment form match expected changes in the health care sector and the demands of an academic nursing education.
Minimal pain content has been documented in pre-licensure curricula and students lack important pain knowledge at graduation. To address this problem, we have implemented and evaluated a mandatory Interfaculty Pain Curriculum (IPC) yearly since 2002 for students (N=817 in 2007) from six Health Science Faculties/Departments. The 20-h pain curriculum continues to involve students from Dentistry, Medicine, Nursing, Pharmacy, Physical Therapy, and Occupational Therapy as part of their 2nd or 3rd year program. Evaluation methods based on Kirkpatrick's model now include evaluation of a Comprehensive Pain Management Plan along with the previously used Pain Knowledge and Beliefs Questionnaire (PKBQ) and Daily Content and Process Questionnaires (DCPQ). Important lessons have been learned and subsequent changes made in this iterative curriculum design based on extensive evaluation over the 6-year period. Modifications have included case development more relevant to the diverse student groups, learning contexts that are uni-, inter-, and multi-professional, and facilitator development in working with interprofessional student groups. PKBQ scores have improved in all years, with a statistically significant average change in correct responses ranging from 14% to 17%. The DCPQ responses have also indicated consistently that most students (85-95%) rated highly the patient panel, the expert-led clinically focused sessions, and the small interprofessional groups. Relevance and organization of the information presented were generally rated highly, from 80.3% to 91.2%. This curriculum continues to be a unique and valuable learning opportunity as we utilize lessons learned from extensive evaluation to move the pain agenda forward with pre-licensure health science students.