It is a well-known fact that experience is important for safe driving. Previously, this presented a problem, since experience was mostly gained during the most dangerous period of driving: the first years with a licence. In many countries, this "experience paradox" has been addressed by providing increased opportunities to gain experience through supervised practice. One question that still needs to be answered, however, is what has been lost and what has been gained through supervised practice. Does this method lead to fewer accidents after licensing, and/or has the number of accidents during driving practice increased? The study had three aims. The first was to quantify the accident problem in terms of the number of accidents, health risk, and accident risk during practice. The second was to evaluate the solution to the "experience paradox" that supervised practice offers by weighing the costs, in terms of accidents during driving practice, against the benefits, in terms of reduced accident involvement after obtaining a licence. The third was to analyse the conflict types that occur during driving practice. National register data on licence holders, police-reported injury accidents, and self-reported exposure were used. The results show that during the period 1994-2000, 444 injury accidents during driving practice were registered, compared with 13,657 accidents during the first 2 years with a licence. The health risk during the period after licensing was 33 times higher, and the accident risk 10 times higher, than the corresponding risk during practice. The cost-benefit analysis showed that the benefits, in terms of accident reduction after licensing, were 30 times greater than the costs, in terms of driving practice accidents. It is recommended that measures to reduce such accidents focus on better education of the lay instructor rather than on reducing the amount of lay-instructed practice.
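As an illustration of the two risk measures compared above, the minimal Python sketch below computes a health-risk ratio (accidents per person-year) and an accident-risk ratio (accidents per distance driven). The accident counts come from the abstract; every denominator is an invented placeholder, since the study's exposure figures are not reproduced here.

```python
# Illustration of the two risk measures compared in the abstract above.
# Accident counts come from the abstract; all denominators are invented
# placeholders (the study's exposure data are not reproduced here).

practice_accidents = 444       # injury accidents during supervised practice, 1994-2000
licensed_accidents = 13_657    # injury accidents during first 2 years with a licence

# Health risk: accidents per person-year in each stage (assumed denominators)
practice_person_years = 2_000_000   # hypothetical
licensed_person_years = 1_800_000   # hypothetical

health_practice = practice_accidents / practice_person_years
health_licensed = licensed_accidents / licensed_person_years
print(f"health-risk ratio (licensed / practice): {health_licensed / health_practice:.1f}")

# Accident risk: accidents per million km driven (assumed exposure)
practice_million_km = 600.0     # hypothetical
licensed_million_km = 1_900.0   # hypothetical

risk_practice = practice_accidents / practice_million_km
risk_licensed = licensed_accidents / licensed_million_km
print(f"accident-risk ratio (licensed / practice): {risk_licensed / risk_practice:.1f}")
```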
AIM: To test the feasibility of the Amsterdam memory and attention training for children (Amat-c) in Swedish children with acquired brain damage. METHODS: Amat-c consists of structured exercises in specific attention and memory techniques. Three Swedish children aged 9-16 y with acquired brain injuries and related memory and attention deficits trained with the Amat-c method for half an hour a day, at school or at home, interactively with a teacher or parent, for a period of 20 wk. RESULTS: All children and their coaches completed the training without interruption. The results showed an improvement in several neuropsychological tests of sustained and selective attention, as well as in memory performance. Questionnaires filled in by parents and teachers indicated that, using the Amat-c method, the children learned strategies that improved their school achievement and self-image. CONCLUSIONS: The Amat-c is a valuable treatment option for improving cognitive efficiency in children with acquired brain injuries. The results indicate improved performance on several psychometric measures. On the basis of these results, the second step will be to modify the complexity and duration of the method, as well as to integrate a reward system, before further evaluating the efficacy in a larger controlled study.
To compare the performance of depressed patients with that of healthy control subjects on discrete cognitive domains derived from factor analysis, and to examine the factors that may influence the performance of depressed patients on these cognitive domains in a large sample.
We compared the cognitive performance of 149 patients with major depression with that of 104 healthy control subjects using multivariate ANCOVA. We used principal component factor analysis to group the cognitive variables into cognitive domains. Finally, we conducted regression analyses to examine the contribution of predictor variables to the cognitive domains that were impaired in the depressed group.
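The following Python sketch outlines an analysis pipeline of this general shape: standardize the test scores, extract components that serve as cognitive domain scores, and regress predictors on an impaired domain. The data file, column names, and two-component solution are illustrative assumptions, not the study's actual data or model, and plain PCA stands in here for principal component factor analysis (no rotation).

```python
# Minimal sketch of the analysis pipeline, assuming a hypothetical data set
# "cognition.csv" with one row per subject. Column names are illustrative.
# Plain PCA stands in for principal component factor analysis (no rotation).
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("cognition.csv")  # assumed file
tests = ["verbal_recall", "digit_symbol", "trails_a", "trails_b", "stroop"]

# Step 1: group the cognitive variables into domain scores via components
z = StandardScaler().fit_transform(df[tests])
scores = PCA(n_components=2).fit_transform(z)   # e.g. memory, processing speed
df["verbal_memory"] = scores[:, 0]
df["processing_speed"] = scores[:, 1]

# Step 2: regress predictors on an impaired domain score
model = smf.ols(
    "verbal_memory ~ iq + illness_duration + n_hospitalizations + symptom_severity",
    data=df,
).fit()
print(model.summary())
```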
Verbal memory and speed of processing were impaired in depressed patients, compared with healthy control subjects. Patient IQ, duration of depressive illness, and number of hospitalizations significantly contributed to the performance of patients on verbal memory and speed of processing. The severity of mood symptoms did not correlate with performance on any cognitive domain.
Understanding the factors that predict cognitive performance of patients with depression may provide an insight into the processes by which depression leads to cognitive dysfunction. Our study showed that premorbid IQ and factors related to burden of illness are strong independent predictors of cognitive dysfunction in patients with major depression.
A large-scale disaster exercise was conducted to assess how one large community would handle such a situation, particularly how it would deal with 150 casualties. The planning, undertaken by a subcommittee composed of representatives of all resource groups in the city, took more than a year. The deficiencies of the disaster plan detected during the exercise, which included a lack of trained personnel and various communication problems, are now being corrected.
Laboratory response networks (LRNs) have been established for security reasons in several countries, including the Netherlands, France, and Sweden. In these countries, LRNs function as a preparedness measure for a coordinated diagnostic response capability in case of a bioterrorism incident or other biocrime. Generally, these LRNs are organized on a national level. The EU project AniBioThreat has identified the need for an integrated European LRN to strengthen preparedness against animal bioterrorism. One task of the AniBioThreat project is to suggest a plan for implementing the laboratory biorisk management standard CWA 15793:2011 (CWA 15793), a management system built on the principle of continual improvement through the Plan-Do-Check-Act (PDCA) cycle. The implementation of CWA 15793 can facilitate trust and credibility in a future European LRN and provides assurance that the work done at the laboratories is performed in a structured way with continuous improvement. As a first step, a gap analysis was performed to establish the current compliance status of biosafety and laboratory biosecurity management with CWA 15793 at 5 AniBioThreat partner institutes in France (ANSES), the Netherlands (CVI and RIVM), and Sweden (SMI and SVA). All 5 partners are national and/or international laboratory reference institutes in the field of public or animal health and possess high-containment laboratories and animal facilities. The gap analysis showed that the participating institutes already have robust biorisk management programs in place, but several gaps were identified that need to be addressed. Differences between the participating institutes in their compliance status were not substantial. Biorisk management exercises have also been identified as a useful tool to check compliance status and thereby support implementation of CWA 15793. An exercise concerning an insider threat and the loss of a biological agent was performed at SVA within the AniBioThreat project to evaluate the implementation of the contingency plans and as an activity in the implementation process of CWA 15793. The outcome of the exercise was perceived as very useful, and improvements to enhance biorisk preparedness were identified. Gap analyses and exercises are important, useful activities that facilitate implementation of CWA 15793. The PDCA cycle enforces a structured way of working, with continual improvement of biorisk management activities. Based on the activities in the AniBioThreat project, the following requirements are suggested to promote implementation: support from the top management of the organizations, knowledge about CWA 15793, a compliance audit checklist and gap analysis, training and exercises, networking in LRNs and other networks, and interinstitutional audits. Implementation of CWA 15793 at each institute would strengthen European animal bioterrorism response capabilities by establishing a well-prepared LRN.
The effects of psychological workload on inflight heart rate were studied in five experienced (flight instructors) and five less experienced (cadets) military pilots of the Finnish Air Force (FAF).
The subjects performed the same flight mission twice: first in the BA Hawk MK 51 simulator with minimal G-forces, and then in the BA Hawk MK 51 jet trainer with Gz-forces below +2. The mission included: a) 2 min rest after seating; b) take-off; c) ILS approach in minimum weather conditions (initial, intermediate and final approach); d) landing tour (visual approach); and e) landing. Heart rates were measured continuously using a small portable recorder developed at the University of Jyväskylä, Finland. The R-R intervals were stored and analyzed with an accuracy of 1 ms. The different phases of each flight were marked in the data using codes assigned beforehand to each critical event.
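As a rough illustration of this kind of analysis, the Python sketch below derives a mean heart rate per flight phase from phase-coded R-R intervals. The phase codes and interval values are invented examples, not the recorded data.

```python
# Rough illustration: mean heart rate per flight phase from phase-coded
# R-R intervals (ms). Codes and values are invented, not recorded data.
from collections import defaultdict

rr_data = [  # (phase code, R-R interval in ms)
    ("REST", 850), ("REST", 842), ("TAKEOFF", 612),
    ("TAKEOFF", 598), ("ILS_FINAL", 641), ("ILS_FINAL", 633),
]

totals = defaultdict(lambda: [0.0, 0])  # phase -> [sum of ms, count]
for phase, rr_ms in rr_data:
    totals[phase][0] += rr_ms
    totals[phase][1] += 1

for phase, (sum_ms, n) in totals.items():
    mean_rr = sum_ms / n
    bpm = 60_000.0 / mean_rr  # convert mean R-R interval to beats per minute
    print(f"{phase}: mean R-R {mean_rr:.0f} ms -> {bpm:.0f} bpm")
```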
The take-off resulted in a significant increase in heart rate from resting levels in both the cadets and the flight instructors, in both aircraft. In the simulator, the heart rate decreased during the initial approach and increased slightly during the intermediate approach. Thereafter, the heart rate decreased during the landing tour, which seemed to be the least psychologically demanding phase of the simulated flight. The heart rate increased again during the landing but did not exceed the heart rates measured during the take-off and the ILS approach. There were no statistical differences between the groups. In the jet trainer, no decrease in heart rate could be observed immediately after the take-off, unlike in the simulated flight. The inflight heart rate increased during the final approach, decreased during the landing tour, and finally increased during the landing. According to the heart rate analysis, the final approach was the most demanding phase of the real flight. The heart rate changes during the final approach and landing were greater among the flight instructors.
There were no statistically significant differences between the mean heart rates during the real and the simulated flight. It is suggested that the heart rate changes largely reflected changes in cognitive workload.
To demonstrate how learning curves can describe proficiency improvements associated with deliberate practice of radiograph interpretation.
This was a prospective, cross-sectional study of pediatric residents in two tertiary care programs. A 234-item digital case bank of pediatric ankle radiographs was developed. The authors gave participants a brief clinical summary of each case and asked them to consider three radiograph views of the ankle. Participants classified each case as either normal or abnormal and, if applicable, specified the location of the abnormality. They received immediate feedback and a radiologist's dictated report. The authors reviewed longitudinal learning curves, which were generated based on calculated test characteristics (e.g., accuracy, sensitivity, specificity).
Eighteen participants (56.3% of those eligible) completed all 234 cases. The form of the participants' learning curves was similar across all test characteristics. The curves showed a period of "noise" until the participants completed an average of 20 cases. The slope of the learning curve was maximal from 21 to 50 cases during which cumulative sensitivity (95% CI) increased from 0.50 (0.45, 0.57) to 0.54 (0.47, 0.58). Then, the curves reached an inflection point after which learning slowed but did not stop even after 234 cases. The final cumulative sensitivity was 0.60 (0.54, 0.63). Applying a reference criterion, the authors classified learners into formative categories.
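The cumulative test characteristics behind such curves can be traced case by case. The Python sketch below does so for sensitivity using a simulated response stream with a crude improvement model, since the participants' actual responses are not available here.

```python
# Sketch of a cumulative-sensitivity learning curve over sequential cases.
# The response stream is simulated with a crude improvement model; it is
# not the participants' data.
import random

random.seed(1)
n_cases = 234
tp = fn = 0
curve = []
for i in range(n_cases):
    truth_abnormal = random.random() < 0.5       # assumed 50/50 case mix
    p_correct = 0.50 + 0.10 * min(i, 50) / 50    # assumed gradual improvement
    correct = random.random() < p_correct
    called_abnormal = truth_abnormal if correct else not truth_abnormal
    if truth_abnormal:
        tp += int(called_abnormal)
        fn += int(not called_abnormal)
    # cumulative sensitivity after each case (nan until an abnormal case appears)
    curve.append(tp / (tp + fn) if tp + fn else float("nan"))

print(f"cumulative sensitivity after 50 cases:  {curve[49]:.2f}")
print(f"cumulative sensitivity after {n_cases} cases: {curve[-1]:.2f}")
```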
Learning curves describing deliberate practice of radiograph interpretation allow medical educators to define at which point(s) practice is most efficient and how much practice is required to achieve a defined level of mastery.
To conduct a process evaluation of the implementation of an ergonomics training program aimed at increasing the use of loading assist devices in flight baggage handling.
Feasibility was assessed by qualitative and quantitative methods in relation to the process items recruitment, reach, context, dose delivered (training time and content), dose received (participants' engagement), satisfaction with training, intermediate outcomes (skills, confidence, and behaviors), and barriers and facilitators of the training intervention.
Implementation proved successful regarding dose delivered, dose received and satisfaction. Confidence among participants in the training program in using and talking about devices, observed use of devices among colleagues, and internal feedback on work behavior increased significantly (p
Atrial fibrillation (AF) is the most common cardiac arrhythmia. Its prevalence increases with age. In middle-aged men, endurance sport practice is associated with an increased risk of AF, but there are few studies among elderly people. The aim of this study was to investigate the role of long-term endurance sport practice as a risk factor for AF in elderly men. A cross-sectional study compared 509 men aged 65-90 years who participated in a long-distance cross-country ski race with 1768 men aged 65-87 years from the general population. Long-term endurance sport practice was the main exposure. Self-reported AF and covariates were assessed by questionnaires. Risk differences (RDs) for AF were estimated using a linear regression model. After multivariable adjustment, a history of endurance sport practice gave an added risk of AF of 6.0 percentage points (pp) (95% confidence interval 0.8-11.1). Light and moderate leisure-time physical activity during the last 12 months reduced the risk by 3.7 and 4.3 pp, respectively, but these RDs were not statistically significant. This study suggests that elderly men with a history of long-term endurance sport practice have an increased risk of AF compared with elderly men in the general population.
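Estimating risk differences directly with a linear regression on a binary outcome (a linear probability model) can be sketched as follows. The data file and column names are illustrative assumptions, not the study's dataset, and robust standard errors are used because the error variance of such a model is not constant.

```python
# Sketch of a risk-difference estimate via a linear probability model.
# "af_survey.csv" and the column names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("af_survey.csv")  # assumed: 0/1 outcome 'af', 0/1 'endurance'

# OLS on a binary outcome puts the coefficient on the risk-difference scale;
# heteroskedasticity-robust (HC1) errors are used since variance is not constant.
model = smf.ols("af ~ endurance + age + leisure_activity", data=df).fit(cov_type="HC1")
rd = model.params["endurance"]
lo, hi = model.conf_int().loc["endurance"]
print(f"adjusted RD: {rd * 100:.1f} pp (95% CI {lo * 100:.1f} to {hi * 100:.1f})")
```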
Practice of surgical tasks on bench models can be arranged in 3 ways: as the entire task, or as individual skills practiced separately in either blocked or random order. The question of the optimal practice schedule for the acquisition of surgical tasks is critical for enhancing training programs.
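The difference between the two part-practice schedules is easy to make concrete. The short Python sketch below generates a blocked and a random order for the same set of sub-skills; the skill names are invented examples.

```python
# Concrete contrast between blocked and random part-practice schedules for
# the same sub-skills. Skill names are invented examples.
import random

skills = ["exposure", "reduction", "drilling", "plating", "closure"]
reps = 3

blocked = [s for s in skills for _ in range(reps)]  # AAA BBB CCC ...
interleaved = blocked.copy()
random.shuffle(interleaved)                         # random (interleaved) order

print("blocked:", blocked)
print("random :", interleaved)
```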
An orthopedic bone-plating task was practiced as a whole, or in parts in either a random or a blocked order. Learning was assessed on global ratings, checklists, and final product analysis before, immediately after, and an hour after practice.
Checklists and final product analysis, but not global ratings, showed that practicing the entire task resulted in the most learning, followed by random practice. Practicing the skills in blocked order yielded the least learning.
It is recommended that surgical tasks composed of several discrete skills should be practiced as a whole. However, if part practice is necessary, these skills should be arranged in random order to optimize learning.