Discrete classification of smokers by intention to quit is desirable in many public health and clinical settings.
Two methodological studies examine the measurement properties of discrete-time quit-intention measures used in population-based tobacco surveillance surveys: an ecological comparison of rates of positive intention in relation to the form of measure used, and a prospective analysis of the predictive validity of self-reported quit intentions using multiple possible points of dichotomization of an ordinal measure of intention to quit. The prospective analysis used a repeated-measures design with follow-up to 1 year for 2,047 smokers in the Ontario Tobacco Survey cohort.
The estimated percentage of smokers intending to quit was significantly higher with the Stages of Change intention measure than with another single-question measure. Significant dose-response effects were found: the sooner a smoker intended to quit, the more likely he or she was to make an attempt or achieve at least 30 days of abstinence in the next 6 months. Intending to quit in a month or later was not associated with cessation during follow-up among respondents without prior quit attempts. Examination of cutpoints revealed no value that maximized both positive and negative prediction. Regardless of quit-attempt history, predictive validity was greatest when respondents stated that they had no intention to quit at all.
Measures of intention to quit smoking within specific time periods, expressed as dichotomies, have limited psychometric properties but retain utility in applied research. Our findings suggest a possible measurement effect warranting caution in comparisons across studies.
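The cutpoint examination described above can be sketched numerically. The counts below are invented for illustration only (they are not the Ontario Tobacco Survey data): each possible dichotomization of a hypothetical ordinal intention item is scored by its positive and negative predictive values against a binary cessation outcome, showing how tightening the cutpoint trades one against the other so that no single cutpoint maximizes both.

```python
# Hypothetical illustration of dichotomizing an ordinal quit-intention item.
# All counts are invented; the pattern, not the numbers, is the point.

# Ordinal categories, ordered from strongest to weakest intention.
categories = ["within 1 month", "within 6 months", "someday", "no intention"]

# (n_quit, n_no_quit) observed in each category -- illustrative only.
counts = {
    "within 1 month": (40, 60),
    "within 6 months": (30, 120),
    "someday": (15, 135),
    "no intention": (5, 195),
}

def ppv_npv(cutpoint):
    """Dichotomize: categories before index `cutpoint` count as 'intends to quit'."""
    pos = [counts[c] for c in categories[:cutpoint]]
    neg = [counts[c] for c in categories[cutpoint:]]
    tp = sum(q for q, _ in pos)   # intenders who quit
    fp = sum(n for _, n in pos)   # intenders who did not quit
    fn = sum(q for q, _ in neg)   # non-intenders who quit
    tn = sum(n for _, n in neg)   # non-intenders who did not quit
    return tp / (tp + fp), tn / (tn + fn)

for cut in range(1, len(categories)):
    ppv, npv = ppv_npv(cut)
    print(f"cutpoint after '{categories[cut - 1]}': PPV={ppv:.2f}, NPV={npv:.2f}")
```

With these invented counts, the tightest cutpoint gives the best positive prediction and the loosest gives the best negative prediction, mirroring the finding that no single dichotomization maximizes both.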
The aim of the study was to measure occupational exposure to electric and magnetic fields during various work tasks at 110 kV (in some situations 20 kV) switching and transforming stations, and to analyze whether the action values of European Union Directive 2004/40/EC or the reference values of the International Commission on Non-Ionizing Radiation Protection (ICNIRP) were exceeded. Electric (n = 765) and magnetic (n = 203) fields were measured during various work tasks. The average values of all measurements were 3.6 kV m⁻¹ and 28.6 µT. The maximum electric field was 15.5 kV m⁻¹, at the task 'maintenance of operating device of circuit breaker from service platform'. In one special work task close to shunt reactor cables (20 kV), the highest magnetic field was 710 µT. In general, the measured magnetic fields were below the ICNIRP reference values.
The objective of this study was to identify factors associated with a positive outcome of vocational rehabilitation, and to identify groups that have been successfully rehabilitated in a Swedish rural area. In this study vocational rehabilitation is defined as medical multidisciplinary, psychological, social and occupational activities aiming to re-establish, among sick or injured people with a previous work history, their working capacity and the prerequisites for returning to the labour market. The study was based on 732 people on registered long-term sick-leave who underwent vocational rehabilitation in a rural area in northern Sweden during 1992-94. Bivariate and stepwise logistic regression analyses were used to identify factors associated with the outcome. Successful vocational rehabilitation was defined as reporting well (receiving no economic benefit) at all three time points (6, 12 and 24 months after termination of rehabilitation), or as lowered benefit levels. The results indicate that younger, male, employed persons, with an early start on rehabilitation, in a programme entailing education, and partly sick-listed before the start of the programme, had the greatest chance of successful rehabilitation. In contrast, older, female, unemployed people, with a delayed start on rehabilitation, without education, and fully sick-listed before the start, were at high risk of unsuccessful vocational rehabilitation. The results indicate how to improve the rehabilitation process: process-related factors associated with successful vocational rehabilitation include the time before the start of rehabilitation, partial instead of full sickness benefit, and education programmes.
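A bivariate step of the kind described above can be sketched with a 2x2 table. The counts below are invented (the study's actual data are not reported here): the sketch computes the odds ratio and a Wald 95% confidence interval for an early versus delayed rehabilitation start against a successful outcome.

```python
import math

# Hypothetical 2x2 table (all counts invented): early vs delayed start of
# rehabilitation against successful outcome, as a bivariate step before a
# multivariable logistic model.
a, b = 120, 80    # early start:   success / no success
c, d = 90, 150    # delayed start: success / no success

odds_ratio = (a * d) / (b * c)
# Wald standard error of log(OR) for a 2x2 table.
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo, hi = (math.exp(math.log(odds_ratio) + s * 1.96 * se) for s in (-1, 1))
print(f"OR={odds_ratio:.2f}, 95% CI=({lo:.2f}, {hi:.2f})")
```

An interval excluding 1, as here, is what a bivariate screen would flag before the factor enters a stepwise multivariable model.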
Managers of marine protected areas (MPAs) must often seek ways to allow for visitation while minimizing impacts to the resources they are intended to protect. Using shipboard observers, we quantified the "zone of disturbance" for Kittlitz's and marbled murrelets (Brachyramphus brevirostris and B. marmoratus) exposed to large cruise ships traveling through Glacier Bay National Park, one of the largest MPAs in North America. In the upper reaches of Glacier Bay, where Kittlitz's murrelets predominated, binary logistic regression models predicted that 61% of all murrelets within 850 m perpendicular distance of a cruise ship were disturbed (defined as flushing or diving), whereas in the lower reaches, where marbled murrelets predominated, this percentage increased to 72%. Survival analysis showed that murrelets in both reaches reacted at greater distances when ships approached indirectly, presumably because of the ship's larger visible profile, suggesting that murrelets responded to visual rather than auditory cues. No management-relevant covariates (e.g., ship velocity, route distance from shore) were found to be important predictors of disturbance, as distance from ship to murrelet accounted for > 90% of the explained variation in murrelet response. Utilizing previously published murrelet density estimates from Glacier Bay, and applying an average empirical disturbance probability (68%) out to 850 m from a cruise ship's typical route, we estimated that a minimum of 9.8-19.6% of all murrelets in Glacier Bay are disturbed per ship entry. Whether these disturbance levels are inconsistent with Park management objectives, which include conserving wildlife as well as providing opportunities for visitation, depends in large part on whether disturbance events caused by cruise ships have impacts on murrelet fitness, which remains uncertain.
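The per-entry disturbance estimate above is, in essence, a corridor calculation: the fraction of the bay's murrelets within 850 m of the ship's track, multiplied by the empirical disturbance probability. The sketch below uses the study's 68% probability but assumed (hypothetical) values for route length, bay area, and the corridor-to-bay density ratio, since those inputs are not given here.

```python
# Back-of-envelope reconstruction. Only disturb_prob comes from the study;
# route_km, bay_km2, and density_ratio are assumed values for illustration.
disturb_prob = 0.68                   # avg. disturbance probability within 850 m (study)
route_km = 100.0                      # assumed ship route length through the bay
corridor_km2 = route_km * 2 * 0.85    # 850 m on either side of the track
bay_km2 = 1200.0                      # assumed water area of the bay
density_ratio = 1.0                   # assume corridor density equals bay-wide density

frac_exposed = corridor_km2 / bay_km2 * density_ratio
frac_disturbed = frac_exposed * disturb_prob
print(f"~{frac_disturbed:.1%} of murrelets disturbed per ship entry")
```

With these assumed inputs the result lands near the lower bound of the reported 9.8-19.6% range; higher corridor densities or longer routes push it toward the upper bound.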
Undergraduate participants were tested in 144 pairs, with one member of each pair randomly assigned to a "witness" role and the other to an "investigator" role. Each witness viewed a target person on video under good or poor witnessing conditions and was then interviewed by an investigator, who administered a photo lineup and rated his or her confidence in the witness. Witnesses also (separately) rated their own confidence. Investigators discriminated between accurate and inaccurate witnesses, but did so less well than witnesses' own confidence ratings and were biased toward accepting witnesses' decisions. Moreover, investigators' confidence made no unique contribution to the prediction of witnesses' accuracy. Witnesses' confidence and accuracy were affected in the same direction by witnessing conditions, and there was a substantial confidence-accuracy correlation when data were collapsed across witnessing conditions. Confidence can be strongly indicative of accuracy when witnessing conditions vary widely, and witnesses' confidence may be a better indicator than investigators'.
In this study, we examined the validity and clinical utility of the MMPI-2 (Butcher, Graham, Tellegen, Dahlstrom, & Kaemmer, 2001) Malingering Depression scale (Md) in relation to the MMPI-2 F scales (F, F(B), F(P)) to detect feigned depression. Overall, the F(B) scale and the F/F(P) scale combination were the best single predictors, although the Md scale did successfully discriminate cases of feigned depression from patients with bona fide depression. The Md scale added predictive capacity over the F scales, and the F(B) scale and the F/F(P) scale combination added predictive capacity over the Md scale; however, the actual increase in the number of cases predicted was minimal in each instance. In sum, although the Md scale is able to accurately detect feigned depression on the MMPI-2 (predictive validity), it does not confer a distinct advantage (incremental validity) over the existing standard validity scales F, F(B), and F(P).
Modified Duke criteria were applied to consecutive injection drug users (IDUs) admitted to an inner-city hospital with a clinical suspicion of infective endocarditis, and any other clinical variables predictive of infective endocarditis were identified.
Clinical data on consecutive IDUs who were hospitalized over 15 months in Vancouver were collected. Data included the admission history, and findings on physical examination and on initial laboratory investigations. Each subject's course in hospital was followed until discharge or death during the index hospitalization. Follow-up data collected included culture results, the interpretation of the echocardiogram and the discharge diagnosis. The modified Duke criteria were used for the diagnosis of infective endocarditis (definite, possible or rejected). Multiple logistic regression was used to determine what clinical variables (exclusive of the Duke criteria) available within 48 hours of presentation were independent predictors of infective endocarditis.
One hundred IDUs were enrolled. Fifty-one were female, and 58 were HIV-positive. Twenty-three met the modified Duke criteria for definite infective endocarditis, and 25 had possible infective endocarditis. IDUs with definite infective endocarditis were more commonly noted to have evidence of vascular phenomena (arterial embolism, septic pulmonary infarction, mycotic aneurysm, intracranial hemorrhage or Janeway lesions) (6 [26%]) than those who had possible endocarditis (1 [4%]). Those with definite infective endocarditis more often had multiple opacities on chest radiography (56% v.
Because acquired immunodeficiency syndrome (AIDS) is a shifting endpoint and sufficient follow-up data now allow modeling of survival time (i.e., time from human immunodeficiency virus (HIV) seroconversion to death), the authors evaluated non-parametric and parametric models of mortality with the use of data from 554 seropositive participants in the Vancouver Lymphadenopathy-AIDS Study. The authors then applied these models to quantify treatment benefits at the national level in Canada, using back-calculation and forward-projection based on death registries. The study revealed that the lognormal model better describes survival time than the Weibull model. Relative to observations prior to 1987, later observations (in the era of treatment) revealed a statistically significant change in disease progression: the median survival time increased from 10.1 to 12.0 years, but no further survival improvements were observed in the early 1990s. Concurrent with the increase in availability of treatment, the authors have observed pronounced treatment benefits at the national level: prior to 1995, approximately 1,500 deaths were prevented and 4,200 person-years of life were saved. Also, mortality rates were observed to level off in the mid-1990s due to the shape of the historical HIV infection curve and the accumulating availability of treatment in Canada.
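The model comparison described above (lognormal versus Weibull survival time) can be sketched by maximum likelihood on synthetic data. The data below are simulated, not the Vancouver cohort's, and the sketch ignores censoring, which the real back-calculation analysis had to handle; it only illustrates how the two candidate distributions would be fit and compared by AIC.

```python
import numpy as np
from scipy import stats

# Synthetic survival times (years), lognormal with median ~10.1 to echo the
# study's pre-1987 estimate; these are NOT the study data.
rng = np.random.default_rng(0)
times = rng.lognormal(mean=np.log(10.1), sigma=0.6, size=554)

aic = {}
for name, dist in [("lognormal", stats.lognorm), ("weibull", stats.weibull_min)]:
    params = dist.fit(times, floc=0)              # MLE with location fixed at 0
    loglik = np.sum(dist.logpdf(times, *params))  # log-likelihood at the MLE
    aic[name] = 2 * 2 - 2 * loglik                # 2 free parameters each; lower is better

best = min(aic, key=aic.get)
print(f"best model by AIC: {best}")
```

On lognormal-generated data the lognormal fit should win, which is the same direction as the study's finding; with real, censored data the likelihood would instead combine density terms for deaths and survival-function terms for censored subjects.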
The primary objective of this research was to evaluate the implications of extending specific Canadian Motor Vehicle Safety Standards (CMVSS) to light trucks and vans (LTVs). This was accomplished by examining the potential safety-related benefits of these standards, comparing injury frequencies and severities between LTVs and passenger cars (PCs). The standards considered, which currently apply to passenger cars but not to LTVs, are the head restraint (CMVSS 202), side door strength (CMVSS 214), and roof crush strength (CMVSS 216) standards. The comparison was performed by means of logit models developed from multidimensional tables with injury frequency and severity as dependent variables. There are indications that installing head restraints in light trucks and vans could reduce or prevent minor neck injuries and that modest benefits could be achieved by extending the roof crush standard to LTVs. It was also determined that the side door strength standard may not be as beneficial to LTVs in conditions in which the vehicle is struck on the side by another LTV. It is suggested that the general public be made aware of the differences in safety standards between LTVs and PCs.