To propose and evaluate a novel nonrigid image registration approach for improved myocardial T1 mapping.
Myocardial motion is estimated as global affine motion refined by a novel local nonrigid motion estimation algorithm. A variational framework is proposed that simultaneously estimates the motion field and intensity variations, and uses an additional regularization term to constrain the deformation field via automatic feature tracking. The method was evaluated in 29 patients by measuring the DICE similarity coefficient and the myocardial boundary error in short-axis and four-chamber data. Each image series was visually assessed as "no motion" or "with motion." Overall T1 map quality and motion artifacts were assessed in the 85 T1 maps acquired in the short-axis view using a 4-point scale (1 = nondiagnostic/severe motion artifact, 4 = excellent/no motion artifact).
The DICE similarity coefficient increased (0.78 ± 0.14 to 0.87 ± 0.03, P
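One plausible form for such a variational energy is sketched below; the particular terms, the weights α and β, the intensity-variation field δI, and the tracked feature points p_k with target displacements d_k are illustrative assumptions, not the authors' published formulation.

```latex
E(\mathbf{u}, \delta I) =
  \int_{\Omega} \big( I_1(\mathbf{x} + \mathbf{u}(\mathbf{x})) + \delta I(\mathbf{x}) - I_0(\mathbf{x}) \big)^2 \, d\mathbf{x}
  \;+\; \alpha \int_{\Omega} \lVert \nabla \mathbf{u}(\mathbf{x}) \rVert^2 \, d\mathbf{x}
  \;+\; \beta \sum_{k} \lVert \mathbf{u}(\mathbf{p}_k) - \mathbf{d}_k \rVert^2
```

The first term matches the deformed image I_1 to the reference I_0 while absorbing contrast changes into δI, the second keeps the deformation field u smooth, and the third constrains u to agree with the displacements of the automatically tracked features.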
This paper presents an automated procedure developed to extract quantitative information from video recordings of neonatal seizures in the form of motor activity signals. This procedure relies on optical flow computation to select anatomical sites located on the infants' body parts. Motor activity signals are extracted by tracking selected anatomical sites during the seizure using adaptive block matching. A block of pixels is tracked throughout a sequence of frames by searching for the most similar block of pixels in subsequent frames; this search is facilitated by employing various update strategies to account for the changing appearance of the block. The proposed procedure is used to extract temporal motor activity signals from video recordings of neonatal seizures and other events not associated with seizures.
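A minimal sketch of such an adaptive block-matching tracker is given below, assuming grayscale frames as NumPy arrays; the block size, search radius, and the running-average template update are illustrative choices (one of several possible update strategies), not the paper's exact parameters.

```python
import numpy as np

def match_block(template, frame, top_left, search_radius):
    """Exhaustive search for the block most similar to `template`
    (sum of squared differences) within a window around `top_left`."""
    h, w = template.shape
    ty, tx = top_left
    best, best_pos = np.inf, top_left
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y, x = ty + dy, tx + dx
            if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue
            ssd = np.sum((frame[y:y + h, x:x + w].astype(float) - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

def track(frames, start, block_size=16, search_radius=8, alpha=0.1):
    """Track one anatomical site across frames. The template is updated
    as a running average so it can follow the block's changing appearance;
    the sequence of positions over time is the motor activity signal."""
    y, x = start
    template = frames[0][y:y + block_size, x:x + block_size].astype(float)
    trajectory = [start]
    for frame in frames[1:]:
        y, x = match_block(template, frame, (y, x), search_radius)
        matched = frame[y:y + block_size, x:x + block_size].astype(float)
        template = (1 - alpha) * template + alpha * matched  # adaptive update
        trajectory.append((y, x))
    return np.array(trajectory)
```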
In this study, a fully automated texture-based segmentation and recognition system for lesions and lungs in thoracic CT is presented. For the segmentation stage, we extracted texture features by Gabor filtering the images and then combined these features to segment the target volume using Fuzzy C-Means (FCM) clustering. Since clustering is sensitive to the initialization of the cluster prototypes, the prototypes were optimally initialized using a Genetic Algorithm. For the recognition stage, we used a cortex-like mechanism to extract statistical features in addition to shape-based features. The segmented regions showed a high degree of imbalance between positive and negative samples, so we employed over- and under-sampling to balance the data. Finally, the balanced and normalized data were used to train and test a Support Vector Machine (SimpleSVM). Results show delineation accuracies of 94.06%, 94.32%, and 89.04% for the left lung, right lung, and lesion, respectively. The average sensitivity of the SVM classifier was 89.48%.
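The texture and clustering stages of such a pipeline could look roughly like the sketch below, using scikit-image's Gabor filter and a plain NumPy fuzzy c-means; the filter frequencies and orientations are assumptions, and the random prototype initialization merely stands in for the paper's Genetic Algorithm step.

```python
import numpy as np
from skimage.filters import gabor

def gabor_features(image, frequencies=(0.1, 0.2),
                   thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Per-pixel texture features: magnitudes of a small Gabor filter bank."""
    feats = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(image, frequency=f, theta=t)
            feats.append(np.sqrt(real ** 2 + imag ** 2))
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

def fcm(X, c=3, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy c-means updates (random start instead of GA init)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        U = d ** (-2 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U  # hard segmentation: U.argmax(axis=1)
```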
In this work, an accurate method to register multi-view images of the human torso is developed. In particular, a new framework that incorporates prior statistical knowledge about the registration is developed and tested. This framework leads to a computationally efficient procedure for accurately aligning images of the human torso. An intensity-based image registration procedure is used to obtain the deformation fields by modelling them as both locally affine and globally smooth. Next, the estimated geometric deformation fields are analyzed in order to construct a prior deformation model. Two subspace analysis projection techniques, principal component analysis (PCA) and independent component analysis (ICA), are used to construct the subspaces of plausible deformations. Accurate deformations are then ensured by projecting the locally computed geometric transformations onto these subspaces. The proposed registration method was validated using high-resolution images of the human torso; to handle them, a multi-resolution framework was employed in the registration process. Experiments demonstrate promising performance in terms of mean square error and computational complexity. The main contribution of this work is the development of an image registration method that uses subspace constraints to align images of the human torso, without relying on the intra- and inter-image constraints used in most intensity-based image registration algorithms in the literature.
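The projection onto a subspace of plausible deformations can be illustrated with scikit-learn's PCA (ICA would use sklearn.decomposition.FastICA analogously); the array shapes, component count, and function names below are assumptions for the sketch.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_deformation_subspace(training_fields, n_components=10):
    """training_fields: (n_subjects, H, W, 2) deformation fields from the
    intensity-based registration stage, one flattened vector per subject
    (n_subjects must be >= n_components)."""
    X = training_fields.reshape(len(training_fields), -1)
    return PCA(n_components=n_components).fit(X)

def project_to_plausible(pca, field):
    """Constrain a newly estimated field by projecting it onto the learned
    subspace: transform to component space, then reconstruct."""
    v = field.reshape(1, -1)
    return pca.inverse_transform(pca.transform(v)).reshape(field.shape)
```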
INTRODUCTION: Electrogram morphology analysis improves discrimination of supraventricular tachycardias (SVTs) from ventricular tachycardias (VTs) in implantable cardioverter defibrillators (ICDs), but electrogram morphology may change with lead maturation, drugs, or disease progression. We report the clinical performance of an automatic algorithm that creates and updates templates from non-paced, slow rhythm and continuously checks the quality of the template used for arrhythmia discrimination. METHODS AND RESULTS: We studied this algorithm in 193 patients with single-chamber ICDs (Marquis VR, Medtronic Inc., Minneapolis, MN, USA). Of the 112 patients who completed 6-month follow-up, 99.1% had ≥1 automatic template created. Match scores between the template and the ongoing rhythm are computed using Haar wavelets. Of the 435 automatic templates evaluated at follow-up, 423 (97.2%) had a median match score ≥70%. Intrinsic rhythm at 1 month had significantly higher match scores (P
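As a rough illustration of a Haar-wavelet match score (emphatically not Medtronic's proprietary formula), one could compare the dominant wavelet coefficients of the stored template with those of the ongoing beat, assuming aligned, equal-length electrogram segments:

```python
import numpy as np
import pywt  # PyWavelets

def match_score(template_beat, live_beat, n_coeffs=48):
    """Percent match between the Haar decompositions of two equal-length,
    time-aligned beats, based on the template's largest coefficients."""
    a = np.concatenate(pywt.wavedec(template_beat, 'haar'))
    b = np.concatenate(pywt.wavedec(live_beat, 'haar'))
    keep = np.argsort(np.abs(a))[-n_coeffs:]   # dominant template coefficients
    score = 1.0 - np.sum(np.abs(a[keep] - b[keep])) / np.sum(np.abs(a[keep]))
    return max(0.0, 100.0 * score)             # floor at 0% for very poor matches
```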
In this work, the application of different machine learning techniques for classifying mental tasks from electroencephalogram (EEG) signals is investigated. The main application of this research is the improvement of brain-computer interface (BCI) systems. For this purpose, Bayesian graphical network, neural network, Bayesian quadratic, Fisher linear, and hidden Markov model classifiers are applied to two known EEG datasets in the BCI field. The Bayesian network classifier is used for the first time in this work for classification of EEG signals, and it appeared to achieve higher accuracy and more consistent classification than the other four methods. In addition to the classical correct-classification accuracy criterion, mutual information is used to compare the classification results with those of other BCI groups.
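The mutual information criterion is commonly computed from the classifier's confusion matrix in the BCI literature; the sketch below does exactly that, with hypothetical counts for a two-task problem.

```python
import numpy as np

def mutual_information(confusion):
    """I(true; predicted) in bits, estimated from a confusion matrix;
    complements plain accuracy by accounting for the error structure."""
    P = confusion / confusion.sum()
    px = P.sum(axis=1, keepdims=True)  # true-class marginal
    py = P.sum(axis=0, keepdims=True)  # predicted-class marginal
    nz = P > 0
    return float(np.sum(P[nz] * np.log2(P[nz] / (px @ py)[nz])))

# Hypothetical two-class confusion matrix (rows: true, cols: predicted)
cm = np.array([[45, 5],
               [8, 42]])
print(f"accuracy = {np.trace(cm) / cm.sum():.2f}, MI = {mutual_information(cm):.3f} bits")
```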
OBJECTIVE: The purpose of this study was to evaluate the performance and potential contribution of computer-aided detection (CAD) to independent double reading of paired screen-film and full-field digital screening mammograms. MATERIALS AND METHODS: The cases of 3,683 women who underwent both screen-film mammography and full-field digital mammography (FFDM) with independent double reading for each technique were followed for 2 years to include cancers detected in the interval between screening rounds and cancers detected at the next screening round. Fifty-five biopsy-proven cancers were diagnosed. The baseline screening mammograms of the 55 cancers were defined as having positive findings if at least one of two independent readers scored the mammogram 2 or higher on a 5-point rating scale. The baseline mammograms of interval (n = 10) or second-round (n = 16) cancers were retrospectively classified as overlooked (n = 2), minimal sign actionable (n = 8), minimal sign nonactionable (n = 5), and normal (n = 11). The baseline mammograms of these cases of cancer were evaluated with a CAD system, and the CAD results were compared (McNemar's test for paired proportions) with the findings at prospective independent double reading of mammograms obtained with each technique. RESULTS: For FFDM, CAD sensitivity was 95% (37/39) compared with 64% (25/39) for double reading (p = 0.006), and for screen-film mammography, CAD sensitivity was 85% (33/39) compared with 77% (30/39) for prospective double reading (p = 0.57) of radiographically visible lesions in baseline mammograms. CAD correctly marked five (13%) of 39 cancers on screen-film mammography and 14 (36%) of 39 cancers on FFDM that were not detected at prospective independent double reading. CONCLUSION: CAD showed the potential to increase the cancer detection rate for both FFDM and screen-film mammography in breast cancer screening performed with independent double reading.
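McNemar's test for paired proportions can be run with statsmodels as sketched below. The 2 × 2 cell counts are a hypothetical reconstruction consistent with the reported FFDM numbers (CAD 37/39, double reading 25/39, 14 CAD-only detections); the study's actual discordant pairs, and hence its exact p-value, may differ.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Rows: double reading (detected / missed); cols: CAD (marked / not marked).
# Only the two discordant cells (2 and 14) drive the test.
table = np.array([[23, 2],
                  [14, 0]])
result = mcnemar(table, exact=True)  # exact binomial test on discordant pairs
print(f"p = {result.pvalue:.4f}")
```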
For the 2014 i2b2/UTHealth de-identification challenge, we introduced a new non-parametric Bayesian hidden Markov model using a Dirichlet process (HMM-DP). The model is intended to reduce task-specific feature engineering and to generalize well to new data. For the challenge, we developed a variational method to learn the model and an efficient approximation algorithm for prediction. To accommodate out-of-vocabulary words, we designed a number of feature functions to model such words. The results show that the model is capable of exploiting local context cues to make correct predictions without manual feature engineering, and it performs as accurately as state-of-the-art conditional random field models in a number of categories. To incorporate long-range and cross-document context cues, we developed a skip-chain conditional random field model to align the results produced by the HMM-DP, which further improved performance.
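Surface-form feature functions are a standard way to handle out-of-vocabulary tokens in de-identification; the set below is illustrative (not the paper's exact features), with date- and phone-like patterns chosen because they map onto common PHI categories.

```python
import re

def word_shape_features(token):
    """Illustrative surface-form features for an out-of-vocabulary token."""
    return {
        "is_capitalized": token[:1].isupper(),
        "all_caps": token.isupper(),
        "has_digit": any(ch.isdigit() for ch in token),
        "all_digits": token.isdigit(),
        "has_hyphen": "-" in token,
        "like_date": bool(re.fullmatch(r"\d{1,2}[/-]\d{1,2}[/-]\d{2,4}", token)),
        "like_phone": bool(re.fullmatch(r"\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}", token)),
        # collapse characters to a coarse shape, e.g. "01/02/2014" -> "99/99/9999"
        "shape": re.sub(r"[A-Z]", "X",
                 re.sub(r"[a-z]", "x",
                 re.sub(r"\d", "9", token))),
    }

print(word_shape_features("01/02/2014"))
```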
Quality of life (QoL) is an important end point in heart failure (HF) studies. The Minnesota Living with Heart Failure Questionnaire (MLHFQ) is the instrument most widely used to evaluate QoL in HF patients. It contains 21 questions, with total scores ranging from 0 to 105. Cut-off values for MLHFQ scores that identify patients with good, moderate, or poor QoL have not been determined.
To determine cut-off scores for the MLHFQ using a neural network (NN) approach. These cut-off scores will help discriminate between HF patients with good, moderate, or poor QoL.
This research was carried out in the context of a longitudinal cohort study of new patients attending specialized HF clinics in six participating centers in Quebec, Canada. Patients completed a questionnaire that included the MLHFQ. In addition to this scale, self-perceived health status and clinical information related to the severity of HF were obtained, including the New York Heart Association (NYHA) functional class, the 6-minute walk test, and survival status. We analyzed the database using the NN and conventional statistical tools. The NN is a statistical learning method that recognizes clusters of MLHFQ scores and relates similar QoL measures to one another. Of the 531 eligible patients, the 447 with complete questionnaires were randomly split into two sets, one for training (learning set) and one for testing (validation set) the NN.
Participants had a mean age of 65 years, and 24% were women. The median MLHFQ score was 45 (inter-quartile range: 27 to 64). The NN identified three distinct clusters of MLHFQ scores that span the full spectrum of possible scores. We estimated that a score of 45 represents a poor QoL. Validation against the different severity measures confirmed these categories. The resulting cut-offs achieved a total accuracy of 91% and were strongly correlated with survival status (p = 0.004), self-perceived health status (p = 0.0032), and NYHA functional class (p
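Because the abstract does not name the NN architecture used for clustering, the sketch below conveys only the general idea of deriving three QoL categories and their score boundaries from clusters, using k-means on synthetic MLHFQ scores as a stand-in for the authors' NN.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
scores = rng.integers(0, 106, size=447).reshape(-1, 1)  # synthetic MLHFQ scores, 0-105

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scores)
order = np.argsort(km.cluster_centers_.ravel())          # low center = good QoL
names = {order[0]: "good", order[1]: "moderate", order[2]: "poor"}
for c in order:
    members = scores[km.labels_ == c].ravel()
    print(f"{names[c]:>8} QoL: scores {members.min()}-{members.max()}")
```

The boundaries between adjacent clusters then serve as candidate cut-off scores, to be validated against external severity measures as in the study.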
Diabetic retinopathy (DR) is a complication of diabetes that, if untreated, leads to blindness. Early diagnosis and treatment of DR improve outcomes. Automated assessment of single lesions associated with DR has been investigated for some time. To improve classification, especially across different ethnic groups, we present an approach using points of interest and a visual dictionary that contains the important features required to identify retinal pathology. Variation in images of the human retina due to differences in pigmentation and the presence of diverse lesions can then be analyzed without the need for preprocessing or for different training sets to account for, for instance, ethnic differences.
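A points-of-interest plus visual-dictionary pipeline (a bag-of-visual-words scheme) can be sketched as follows, with OpenCV's ORB as a generic detector/descriptor and k-means as the dictionary learner; the paper's actual detector, descriptor, and dictionary size are not specified here, so these are assumptions.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def build_dictionary(images, k=100):
    """Cluster local descriptors from grayscale training images into k visual
    words (Euclidean k-means over binary ORB descriptors is a simplification)."""
    orb = cv2.ORB_create()
    descs = []
    for img in images:
        _, d = orb.detectAndCompute(img, None)
        if d is not None:
            descs.append(d)
    return KMeans(n_clusters=k, n_init=10).fit(np.vstack(descs).astype(np.float32))

def encode(image, dictionary):
    """Represent a retinal image as a normalized histogram of visual words,
    ready for any standard classifier."""
    orb = cv2.ORB_create()
    _, d = orb.detectAndCompute(image, None)
    words = dictionary.predict(d.astype(np.float32))
    hist = np.bincount(words, minlength=dictionary.n_clusters).astype(float)
    return hist / hist.sum()
```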