The purpose of this study was to compare the diagnostic performance of a digital radiography system that uses 6- and 8-bit displays with conventional D-speed film for the detection of simulated periodontal bone lesions. Eleven human hemimandibles were used as specimens. Simulated lesions were created at the buccal cortical plate in the marginal bone area with the use of a round bur 1.4 mm in diameter. Lesions were created in a defined sequence to preclude visual cues as to the depth of the lesions. Lesion size progressed in 0.5 mm increments. At each stage the mandibles were imaged with a Sens-A-Ray system (REGAM Medical Systems AB, Sundsvall, Sweden) and D-speed film. Exposure parameters for each specimen/receptor combination were standardized by either the mean optical density or the mean gray value at the approximal crestal bone area. Film images and digital images displayed with 64 and 256 gray levels were presented to six observers for evaluation. Observers were asked to rate their confidence as to the presence or absence of a lesion on a 5-point confidence scale. A total of 96 lesion sites and 96 control sites were presented to the observers. Receiver operating characteristic curves were generated for each system, and the area under the curve was used as the index of diagnostic accuracy. The mean receiver operating characteristic areas for the 6-bit and 8-bit displays and D-speed film were 0.746 ± 0.043, 0.717 ± 0.056, and 0.742 ± 0.059, respectively. Analysis of variance was used to compare the means. No statistically significant difference was found between any of the three image displays (p > 0.05).
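The ROC methodology used throughout these abstracts can be illustrated with a short sketch. The code below is a minimal illustration, not the authors' software, and the example ratings are hypothetical; it computes the area under the ROC curve directly from 5-point confidence ratings via the Mann-Whitney formulation, in which the AUC equals the probability that a randomly chosen lesion site is rated higher than a randomly chosen control site.

```python
def rating_scale_auc(lesion_ratings, control_ratings):
    """AUC from ordinal confidence ratings (Mann-Whitney formulation).

    Counts the fraction of (lesion, control) pairs in which the lesion
    site is rated higher; ties contribute 0.5. An AUC of 0.5 indicates
    chance performance; 1.0 indicates perfect discrimination.
    """
    wins = 0.0
    for lesion in lesion_ratings:
        for control in control_ratings:
            if lesion > control:
                wins += 1.0
            elif lesion == control:
                wins += 0.5
    return wins / (len(lesion_ratings) * len(control_ratings))

# Hypothetical 5-point confidence ratings, for illustration only.
auc = rating_scale_auc([5, 4, 3, 3], [3, 2, 1, 1])  # 0.9375
```

In a full observer study the same calculation would be repeated per observer and modality, and the resulting areas compared (here, by analysis of variance).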
Noninvasive scoring systems are used to identify persons with advanced liver fibrosis. We investigated the ability of scoring systems to identify individuals in the general population at risk for future liver-related events.
We collected data from the Swedish Apolipoprotein Mortality Risk cohort on persons 35 to 79 years old who had blood samples collected from 1985 through 1996. We calculated APRI (n = 127,302), BARD (n = 75,303), FIB-4 (n = 126,941), Forns (n = 122,419), and nonalcoholic fatty liver disease (NAFLD) fibrosis scores (NFS, n = 13,160). We ascertained incident cases of cirrhosis or its complications by linking Swedish health data registers. Cox regression was used to estimate hazard ratios (HRs) for severe liver disease at 5 and 10 years and at the maximum follow-up time of 27 years. The predictive ability of the scores was evaluated using area under the receiver operating characteristic (AUROC) curve and C-statistic analyses. Our specific aims were to investigate the predictive capabilities of the scoring systems for fatal and nonfatal liver disease, determine which scoring system has the highest accuracy, and investigate the predictive abilities of the scoring systems in persons with a higher probability of NAFLD at baseline.
A similar proportion of individuals evaluated by each scoring system developed cirrhosis or complications thereof (1.0%-1.4%). The incidence of any outcome was increased in intermediate- and high-risk groups compared with low-risk groups, with HRs at 10 years in the high-risk group ranging from 1.67 for the BARD score to 45.9 for the APRI score. The predictive abilities of all scoring systems decreased with time and were higher in men. All scoring systems were more accurate in persons with risk factors for NAFLD at baseline, with AUROCs reaching 0.83.
Higher scores from noninvasive scoring systems to evaluate fibrosis are associated with an increased risk of cirrhosis in a general population, but their predictive ability is modest. Performance was better when patients were followed for shorter time periods and in persons with a higher risk of NAFLD, with AUROC values reaching 0.83. New scoring systems are needed to evaluate risk of fibrosis in the general population and in primary care.
Over the last decades, many studies have identified psychosocial factors as significant predictors of disability related to common low back disorders, which contributed to the development of biopsychosocial prevention interventions. Biopsychosocial interventions were expected to be more effective than usual interventions in improving different outcomes. Unfortunately, most of these interventions show inconclusive results. The use of screening questionnaires has been proposed as a solution to improve their efficacy. The aim of this study was to validate a new screening questionnaire to identify workers at risk of being absent from work for more than 182 cumulative days and who are more likely to benefit from prevention interventions.
Injured workers receiving income replacement benefits from the Quebec Compensation Board (n = 535) completed a 67-item questionnaire in the sub-acute stage of pain and provided information about work-related events 6 and 12 months later. Reliability and validity of the 67-item questionnaire were determined by test-retest reliability and internal consistency analyses, as well as by construct validity analyses. A Cox regression model and the maximum likelihood method were used to fit a model allowing calculation of the probability of an absence of more than 182 days. Criterion validity and discriminative capacity of this model were then calculated.
Sub-sections from the 67-item questionnaire were moderately to highly correlated 2 weeks later (r = 0.52-0.80) and showed moderate to good internal consistency (0.70-0.94). Of the 67-item questionnaire, six sub-sections and variables (22 items) were predictive of long-term absence from work: fear-avoidance beliefs related to work, return-to-work expectations, annual family income before taxes, last level of education attained, work schedule, and work concerns. The area under the ROC curve was 0.73.
The significant predictive variables of long-term absence from work were dominated by workplace conditions and individual perceptions about work. In association with individual psychosocial variables, these variables could contribute to identifying potentially useful prevention interventions and to reducing the significant costs associated with long-term absenteeism due to low back pain.
To define a grade in the Aesthetic Component (AC) of the Index of Orthodontic Treatment Need (IOTN) that would differentiate between esthetically acceptable and unacceptable occlusions and that would also be both subjectively and objectively meaningful.
Dental appearance and self-perceived orthodontic treatment need were analyzed in a group of Finnish young adults (171 males, 263 females, age range 16-25 years). Subjective data were gathered using a questionnaire, and the respondents were requested to score their dental appearance on a visual analog type 10-grade scale. Professional assessment of dental appearance was performed by two orthodontists using the AC of the IOTN. The cutoff value between esthetically acceptable and unacceptable occlusions was defined using receiver operating characteristic curves.
Sixty-six percent of orthodontically treated and 74% of untreated respondents were satisfied with their own dental appearance. Every third respondent reported one or more disturbing traits in their dentition. The most frequently expressed reason for dissatisfaction was crowding; girls expressed dissatisfaction more often than boys did (P = .005). A self-perceived treatment need was reported by only 8% of orthodontically treated and 6% of untreated respondents. In the logistic regression analysis, self-perceived need for orthodontic treatment was the only significant factor explaining dissatisfaction with one's own dental esthetics. On the applied scales, grades 1 and 2 fulfilled the criteria for satisfactory dental esthetics.
The results suggest that the AC grade 3 could serve as a cutoff value between esthetically acceptable and unacceptable occlusions.
The aim of the present study was to compare the ability of four clinical prediction rules to predict adverse outcome in perforated peptic ulcer (PPU): the Boey score, the American Society of Anesthesiologists (ASA) score, the Acute Physiology and Chronic Health Evaluation (APACHE) II score, and the sepsis score.
An observational multicenter study.
A total of 117 patients surgically treated for PPU between 1 January 2008 and 31 December 2009 in seven gastrointestinal departments in Denmark were included. Pregnant and breastfeeding women, non-surgically treated patients, patients with malignant ulcers, and patients with perforation of other organs were excluded.
30-day mortality rate.
The ability of the four clinical prediction rules to distinguish survivors from non-survivors (discrimination) was evaluated by the area under the receiver operating characteristic curve (AUC), positive predictive values (PPVs), negative predictive values (NPVs), and adjusted relative risks.
Median age (range) was 70 years (25-92 years), 51% of the patients were females, and 73% of the patients had at least one co-existing disease. The 30-day mortality proportion was 17% (20/117). The AUCs: the Boey score, 0.63; the sepsis score, 0.69; the ASA score, 0.73; and the APACHE II score, 0.76. Overall, the PPVs of all four prediction rules were low and the NPVs high.
The Boey score, the ASA score, the APACHE II score, and the sepsis score predict mortality poorly in patients with PPU.
BACKGROUND: Despite its unsatisfactory specificity, rheumatoid factor (RF) is the only serologic marker included in the diagnostic criteria of the American College of Rheumatology (ACR) for rheumatoid arthritis. Recently, the diagnostic value of anti-cyclic citrullinated peptide (CCP) antibodies has been emphasized in rheumatoid arthritis (RA) because of their high specificity. To evaluate the second generation of anti-CCP antibodies as a diagnostic marker, we performed the anti-CCP test in 163 individuals. METHODS: The study population was divided into four groups: an RA group (n=18), a group with other diseases and arthritic symptoms (n=44), a group with other diseases without arthritic symptoms (n=45), and a healthy group (n=56). Anti-CCP was measured by an ELISA analyzer (Coda, Bio-Rad, USA) with Immunoscan RA (Euro-Diagnostica, Malmo, Sweden), and RF was measured by an automated chemistry analyzer (Toshiba, Japan) with RF-LATEX X1 (Denka Seiken, Japan). RESULTS: The sensitivity of anti-CCP and RF was 72.2% and 100%, respectively, and the respective figures for specificity were 96.6% and 73%. The area under the ROC curve was 0.867 for anti-CCP and 0.959 for RF. In the other disease groups, most of the false positive RF results were found in patients with hyperlipidemia or HBV carriage; however, anti-CCP was not detected in any of the patients with these two conditions. The false positive rates of RF in the three control groups were 34.1% in the other disease group with arthritic symptoms, 48.9% in the other disease group without arthritic symptoms, and 3.6% in the healthy group. The respective figures for anti-CCP were 6.8%, 2.2%, and 1.8%. CONCLUSIONS: The specificity of anti-CCP antibodies was higher than that of RF for discriminating RA from other diseases, especially in patients with hyperlipidemia or HBV carriage. With its high specificity, the anti-CCP antibody test can play an additive role in establishing the diagnosis of RA in patients with RF positivity.
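The sensitivity and specificity figures above follow directly from confusion-matrix counts. As a sketch (the per-group counts below are inferred from the reported percentages and group sizes, 18 RA patients and 145 controls, and are not taken from the study's original data tables):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Inferred counts for anti-CCP: 13 of 18 RA patients test positive,
# 5 false positives among the 145 controls (3 + 1 + 1 across groups).
sens, spec = sens_spec(tp=13, fn=5, tn=140, fp=5)
# round(sens * 100, 1) -> 72.2; round(spec * 100, 1) -> 96.6
```

These inferred counts reproduce the reported 72.2% sensitivity and 96.6% specificity for anti-CCP, illustrating how both statistics are derived.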
We present a Finland-Swedish adaptation of DUVAN (Lundberg & Wolff, 2003), the Sweden-Swedish group screening test for dyslexia in adults and young adults, together with normative data from 143 Finland-Swedish university students. The test is based on the widely held phonological deficit hypothesis of dyslexia and consists of a self-report and five subtests tapping phonological working memory, phonological representation, phonological awareness, and orthographic skill. We describe the test adaptation procedure and show that the internal reliability of the new test version is comparable to that of the original. Our results indicate that language background (Swedish, Finnish, early simultaneous Swedish-Finnish bilingualism) should be taken into account when interpreting results on the Finland-Swedish DUVAN test. We show that the FS-DUVAN differentiates a group of students with a dyslexia diagnosis from students without one, and that low performance on the FS-DUVAN correlates with a positive self-report of familial dyslexia and with a history of special education in school. Finally, we analyze the sensitivity and specificity of the FS-DUVAN for dyslexia among university students.
The objective of this study was to evaluate the added predictive ability of the CHA(2)DS(2)VASc prediction rule for stroke and death in a nonanticoagulated population of patients with atrial fibrillation.
We included 1603 nonanticoagulated patients with incident atrial fibrillation from a Danish prospective cohort study of 57 053 middle-aged men and women. The Net Reclassification Improvement was calculated to estimate any overall improvement in reclassification with the CHA(2)DS(2)VASc score as an alternative to the CHADS(2) score. After 1-year follow-up, crude incidence rates were 3.4 per 100 person-years for stroke and 13.6 for death. After a mean follow-up of 5.4 years (± 3.7 years), the crude incidence rates for stroke and death were 1.9 and 5.6, respectively. During the entire observation period, the c-statistics and negative predictive values were similar for both risk scores. The Net Reclassification Improvement analysis showed that 1 of 10 reclassified atrial fibrillation patients would have been upgraded correctly using the CHA(2)DS(2)VASc score.
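The categorical Net Reclassification Improvement used here sums the net proportions of correct reclassifications among events (strokes/deaths) and nonevents when moving from the old score to the new one. A minimal sketch follows; the counts in the example are hypothetical and not drawn from the Danish cohort:

```python
def categorical_nri(events_up, events_down, n_events,
                    nonevents_up, nonevents_down, n_nonevents):
    """NRI = P(up|event) - P(down|event) + P(down|nonevent) - P(up|nonevent).

    'Up' means moved to a higher risk category under the new score.
    Moving events up and nonevents down counts as correct
    reclassification; the opposite moves count against the new score.
    """
    nri_events = (events_up - events_down) / n_events
    nri_nonevents = (nonevents_down - nonevents_up) / n_nonevents
    return nri_events + nri_nonevents

# Hypothetical counts: of 100 events, 10 reclassified up and 5 down;
# of 200 nonevents, 20 reclassified down and 10 up.
nri = categorical_nri(10, 5, 100, 10, 20, 200)  # 0.10
```

A positive NRI indicates that, on balance, the new score reclassifies patients in the correct direction more often than the old one.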
Both the CHADS(2) and the CHA(2)DS(2)VASc risk score can rule out a high risk of stroke or death in a large proportion of patients. However, using the CHA(2)DS(2)VASc risk score, fewer patients fulfill the criterion for low risk (and those who do are truly at low risk of thromboembolism). For every 10 extra patients transferred to the treatment group at 5 years using the CHA(2)DS(2)VASc risk score, 1 patient would have had a stroke that might have been avoided with effective treatment.
The added value of the combined use of the Autism Diagnostic Interview-Revised and the Autism Diagnostic Observation Schedule: diagnostic validity in a clinical Swedish sample of toddlers and young preschoolers.
The diagnostic validity of the new research algorithms of the Autism Diagnostic Interview-Revised and the revised algorithms of the Autism Diagnostic Observation Schedule was examined in a clinical sample of children aged 18-47 months. Validity was determined for each instrument separately and for their combination against a clinical consensus diagnosis. A total of N = 268 children (n = 171 with autism spectrum disorder) were assessed. The new Autism Diagnostic Interview-Revised algorithms (research cutoff) gave excellent specificities (91%-96%) but low sensitivities (44%-52%). Applying adjusted cutoffs (lower than recommended, based on receiver operating characteristics) yielded a better balance between sensitivity (77%-82%) and specificity (60%-62%). Findings for the Autism Diagnostic Observation Schedule were consistent with previous studies, showing high sensitivity (94%-100%) alongside lower specificity (52%-76%) when using the autism spectrum cutoff, but better balanced sensitivity (81%-94%) and specificity (81%-83%) when using the autism cutoff. A combination of the Autism Diagnostic Interview-Revised (with adjusted cutoff) and the Autism Diagnostic Observation Schedule (autism spectrum cutoff) yielded balanced sensitivity (77%-80%) and specificity (87%-90%). These results favor combined use of the Autism Diagnostic Interview-Revised and the Autism Diagnostic Observation Schedule in young children with unclear developmental problems, including suspicion of autism spectrum disorder. Evaluated separately, the Autism Diagnostic Observation Schedule (cutoff for autism) provides better diagnostic accuracy than the Autism Diagnostic Interview-Revised.
OBJECTIVE: To evaluate the efficacy of biochemical tests in diagnosing acute appendicitis. DESIGN: Open prospective study. SETTING: District hospital, Norway. SUBJECTS: 257 patients with suspected acute appendicitis. INTERVENTIONS: Initial diagnostic accuracy of a logistic regression model using available clinical data was compared with results of corresponding models that included an increasing number of inflammatory parameters. MAIN OUTCOME MEASURES: The estimated probabilities of appendicitis in different testing groups were analysed using receiver operating characteristic (ROC) curves. RESULTS: A model including only clinical variables had a mean area under the ROC curve of 0.854. When the total white blood cell count, C-reactive protein concentration, and neutrophil count were added, the model improved significantly to 0.920. CONCLUSION: Biochemical tests are of additional value in a computer model, and the tests should, if used rationally, also provide physicians with important information in the investigation of acute appendicitis.