Scientific Reports
27578529
PMC5006166
10.1038/srep32404
Large-Scale Discovery of Disease-Disease and Disease-Gene Associations
Data-driven phenotype analyses of Electronic Health Record (EHR) data have recently brought benefits to many areas of clinical practice, uncovering new links in the medical sciences that can potentially affect the well-being of millions of patients. In this paper, EHR data is used to discover novel relationships between diseases by studying their comorbidities (co-occurrences in patients). A novel embedding model is designed to extract knowledge from disease comorbidities by learning from a large-scale EHR database comprising more than 35 million inpatient cases spanning nearly a decade, revealing significant improvements in disease phenotyping over current computational approaches. In addition, the proposed methodology is extended to discover novel disease-gene associations by incorporating valuable domain knowledge from genome-wide association studies. To evaluate our approach, its effectiveness is compared against a held-out set, where it again yielded very compelling results. For selected diseases, we further identify candidate gene lists for which disease-gene associations have not been studied previously. Thus, our approach provides biomedical researchers with new tools to filter genes of interest, reducing costly lab studies.
Background and related work
In the treatment of ailments, the focus of medical practitioners can be roughly divided between two complementary approaches: 1) treating the symptoms of already sick patients (reactive medicine); and 2) understanding disease etiology in order to prevent manifestation and further spread of the disease (preventative medicine). In the first approach, the disease symptoms are part of a broader phenotype profile of an individual, with phenotype being defined as the presence of a specific observable characteristic in an organism, such as blood type, response to administered medication, or the presence of a disease13. The process of identifying useful, meaningful medical characteristics and insights for the purposes of medical treatment is referred to as phenotyping14. In the second approach, researchers identify the genetic basis of disease by discovering the relationship between exhibited phenotypes and the patient's genetic makeup, in a process referred to as genotyping15. Establishing a relationship between a phenotype and its associated genes is a major component of gene discovery and allows biomedical scientists to gain a deeper understanding of the condition and of a potential cure at its very origin16. Gene discovery is a central problem in a number of published disease-gene association studies, and its prevalence in the scientific community is increasing steadily as novel discoveries lead to improved medical care. For example, results in the existing literature show that gene discovery allows clinicians to better understand the severity of patients' symptoms17, to anticipate the onset and path of disease progression (particularly important for cancer patients in later stages18), and to better understand disease processes at the molecular level, enabling the development of better treatments19. As suggested in previous studies20, such knowledge may be hidden in vast EHR databases that have yet to be exploited to their fullest potential.
Clearly, both phenotyping and gene discovery are important steps in the fight for global health, and advancing tools for these tasks is a critical part of this battle. The emerging use of gene editing techniques to precisely target disease genes21 will require such computational tools at precision medicine's disposal. EHR records, which contain abundant information on patients' phenotypes generated from actual clinical observations and physician-patient interactions, present an unprecedented resource and testbed for applying novel phenotyping approaches. Moreover, the data is complemented by large amounts of gene-disease associations derived from readily available genome-wide association studies. However, current approaches for phenotyping and gene discovery using EHR data rely on highly supervised rule-based or heuristic-based methods, which require manual labor and often a consensus of medical experts22. This severely limits the scalability and effectiveness of the process3. Some researchers have proposed to combat this issue by employing active learning approaches to obtain a limited number of expert labels for use by supervised methods2324. Nevertheless, the state of the art is far from optimal, as the labeling process can still be tedious and models require large numbers of labels to achieve satisfactory performance on noisy EHR data3. Therefore, we approach this problem in an unsupervised manner. Early work on exploiting EHR databases to understand human disease focused on graphical representations of diseases, genes, and proteins. Disease networks were proposed by Goh et al.25, in which certain genes play a central role in the human disease interactome, defined as all interactions (connections) of diseases, genes, and proteins discovered in humans.
Follow-up studies by Hidalgo et al.26 proposed human phenotypic networks (commonly referred to as comorbidity networks) to be mapped against disease networks derived from EHR datasets; these networks were shown to successfully associate higher disease connectivity with higher mortality. Based on these advances, a body of work linked predictions of disease-disease and disease-gene networks627, even though only a modest degree of correlation (~40%, also confirmed on the data used in this study) was detected between disease and gene networks, indicating potential causality between them. Such studies provided important evidence for modeling disease and human interactome networks to discover associated phenotypes. Recently, network studies of the human interactome have focused on uncovering patterns28 and, as the human interactome is incomplete, on discovering novel relationships5. However, it has been suggested that network-based approaches to phenotyping and to the discovery of meaningful concepts in medicine have yet to be fully exploited and tested29. This study offers a novel approach that represents diseases and genes using the same sources of data as network approaches, but in a different manner, as discussed in greater detail in the section below. In addition, to create more scalable and effective tools, recent approaches distinct from networks have focused on the development of data-driven phenotyping with minimal manual input and rigorous evaluation procedures33031. The emerging field of computational phenotyping includes the method of Zhou et al.32, which formulates EHRs as temporal matrices of medical events for each patient and proposes an optimization-based technique for discovering temporal patterns of medical events as phenotypes. Further, Ho et al.33 formulated patient EHRs as tensors, where each dimension represents a different medical event, and proposed non-negative tensor factorization for the identification of phenotypes.
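The factorization-based phenotyping methods summarized above share a common core: a non-negative decomposition of a patient-by-event count matrix (or tensor) whose factors serve as candidate phenotypes. The following minimal sketch illustrates only the matrix case, using classic Lee-Seung multiplicative updates on randomly generated counts; it is an illustration of the general technique, not a reimplementation of any of the cited methods, and all data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy patients x diagnosis-code count matrix (synthetic, for illustration).
X = rng.poisson(1.0, size=(30, 12)).astype(float)

def nmf(X, rank=3, iters=200, eps=1e-9):
    """Lee-Seung multiplicative updates for non-negative factorization."""
    n, m = X.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(X)
# Each row of H is a candidate "phenotype": a non-negative weighting over
# diagnosis codes; W gives each patient's loading on those phenotypes.
print(W.shape, H.shape)  # (30, 3) (3, 12)
```

Because the updates are multiplicative and the inputs non-negative, the factors stay non-negative throughout, which is what makes the resulting phenotype candidates interpretable as additive combinations of codes.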
Deep learning has also been applied to the task of phenotyping30, as have graph mining31 and clustering34, used to identify patient subgroups based on individual clinical markers. Finally, Žitnik et al.35 conducted a study on non-negative matrix factorization techniques for fusing various molecular data to uncover disease-disease associations, showing that available domain knowledge can help reconstruct known associations and obtain novel ones. Nonetheless, the need for a comprehensive procedure to obtain manually labeled samples remains one of the main limitations of modern phenotyping tools14. Although state-of-the-art machine learning methods have been utilized to automate the process, current approaches still suffer degraded performance given the limited availability of labeled samples manually annotated by medical experts36. In this paper, we compare representatives of the above approaches against our proposed approach in a fair setup and, overall, demonstrate the benefits of our neural embedding approach (described below) on several tasks in a quantifiable manner.
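The neural embedding approach itself is described later in the paper; as a rough intuition for how co-occurrence statistics can yield disease vectors, the sketch below builds a positive-PMI matrix from hypothetical comorbidity counts and factorizes it with a truncated SVD — a classical stand-in for learned embeddings, not the authors' model. The disease names and counts are invented for illustration.

```python
import numpy as np

# Hypothetical symmetric comorbidity counts (invented for illustration).
diseases = ["diabetes", "hypertension", "obesity", "asthma", "copd"]
C = np.array([
    [ 0, 40, 35,  2,  1],
    [40,  0, 30,  3,  2],
    [35, 30,  0,  1,  1],
    [ 2,  3,  1,  0, 25],
    [ 1,  2,  1, 25,  0],
], dtype=float)

# Positive pointwise mutual information, then truncated SVD: a classical
# stand-in for embeddings learned from co-occurrence data.
total = C.sum()
marg = C.sum(axis=1, keepdims=True)
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log(C * total / (marg * marg.T))
pmi[~np.isfinite(pmi)] = 0.0
ppmi = np.maximum(pmi, 0.0)

U, S, _ = np.linalg.svd(ppmi)
emb = U[:, :3] * S[:3]  # 3-dimensional disease vectors

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

idx = {d: k for k, d in enumerate(diseases)}
# Diseases that co-occur often end up closer in the embedding space.
print(cosine(emb[idx["diabetes"]], emb[idx["obesity"]]) >
      cosine(emb[idx["diabetes"]], emb[idx["copd"]]))  # True
```

The resulting vectors can then be compared by cosine similarity to rank candidate disease-disease associations, which is the same downstream use the embedding-based approach enables at scale.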
[ "21587298", "22955496", "24383880", "10874050", "26506899", "11313775", "21941284", "23287718", "21269473", "17502601", "25038555", "22127105", "24097178", "16723398", "25841328", "2579841" ]
[ { "pmid": "21587298", "title": "Using electronic health records to drive discovery in disease genomics.", "abstract": "If genomic studies are to be a clinically relevant and timely reflection of the relationship between genetics and health status--whether for common or rare variants--cost-effective ways must be found to measure both the genetic variation and the phenotypic characteristics of large populations, including the comprehensive and up-to-date record of their medical treatment. The adoption of electronic health records, used by clinicians to document clinical care, is becoming widespread and recent studies demonstrate that they can be effectively employed for genetic studies using the informational and biological 'by-products' of health-care delivery while maintaining patient privacy." }, { "pmid": "22955496", "title": "Next-generation phenotyping of electronic health records.", "abstract": "The national adoption of electronic health records (EHR) promises to make an unprecedented amount of data available for clinical research, but the data are complex, inaccurate, and frequently missing, and the record reflects complex processes aside from the patient's physiological state. We believe that the path forward requires studying the EHR as an object of interest in itself, and that new models, learning from data, and collaboration will lead to efficient use of the valuable information currently locked in health records." }, { "pmid": "24383880", "title": "New mini- zincin structures provide a minimal scaffold for members of this metallopeptidase superfamily.", "abstract": "BACKGROUND\nThe Acel_2062 protein from Acidothermus cellulolyticus is a protein of unknown function. Initial sequence analysis predicted that it was a metallopeptidase from the presence of a motif conserved amongst the Asp-zincins, which are peptidases that contain a single, catalytic zinc ion ligated by the histidines and aspartic acid within the motif (HEXXHXXGXXD). 
The Acel_2062 protein was chosen by the Joint Center for Structural Genomics for crystal structure determination to explore novel protein sequence space and structure-based function annotation.\n\n\nRESULTS\nThe crystal structure confirmed that the Acel_2062 protein consisted of a single, zincin-like metallopeptidase-like domain. The Met-turn, a structural feature thought to be important for a Met-zincin because it stabilizes the active site, is absent, and its stabilizing role may have been conferred to the C-terminal Tyr113. In our crystallographic model there are two molecules in the asymmetric unit and from size-exclusion chromatography, the protein dimerizes in solution. A water molecule is present in the putative zinc-binding site in one monomer, which is replaced by one of two observed conformations of His95 in the other.\n\n\nCONCLUSIONS\nThe Acel_2062 protein is structurally related to the zincins. It contains the minimum structural features of a member of this protein superfamily, and can be described as a \"mini- zincin\". There is a striking parallel with the structure of a mini-Glu-zincin, which represents the minimum structure of a Glu-zincin (a metallopeptidase in which the third zinc ligand is a glutamic acid). Rather than being an ancestral state, phylogenetic analysis suggests that the mini-zincins are derived from larger proteins." }, { "pmid": "10874050", "title": "Impact of genomics on drug discovery and clinical medicine.", "abstract": "Genomics, particularly high-throughput sequencing and characterization of expressed human genes, has created new opportunities for drug discovery. Knowledge of all the human genes and their functions may allow effective preventive measures, and change drug research strategy and drug discovery development processes. Pharmacogenomics is the application of genomic technologies such as gene sequencing, statistical genetics, and gene expression analysis to drugs in clinical development and on the market. 
It applies the large-scale systematic approaches of genomics to speed the discovery of drug response markers, whether they act at the level of the drug target, drug metabolism, or disease pathways. The potential implication of genomics and pharmacogenomics in clinical research and clinical medicine is that disease could be treated according to genetic and specific individual markers, selecting medications and dosages that are optimized for individual patients. The possibility of defining patient populations genetically may improve outcomes by predicting individual responses to drugs, and could improve safety and efficacy in therapeutic areas such as neuropsychiatry, cardiovascular medicine, endocrinology (diabetes and obesity) and oncology. Ethical questions need to be addressed and guidelines established for the use of genomics in clinical research and clinical medicine. Significant achievements are possible with an interdisciplinary approach that includes genetic, technological and therapeutic measures." }, { "pmid": "26506899", "title": "Standardized phenotyping enhances Mendelian disease gene identification.", "abstract": "Whole-exome sequencing has revolutionized the identification of genes with dominant disease-associated variants for rare clinically and genetically heterogeneous disorders, but the identification of genes with recessive disease-associated variants has been less successful. A new study now provides a framework integrating Mendelian variant filtering with statistical assessments of patients' genotypes and phenotypes, thereby catalyzing the discovery of novel mutations associated with recessive disease." 
}, { "pmid": "11313775", "title": "The family based association test method: strategies for studying general genotype--phenotype associations.", "abstract": "With possibly incomplete nuclear families, the family based association test (FBAT) method allows one to evaluate any test statistic that can be expressed as the sum of products (covariance) between an arbitrary function of an offspring's genotype with an arbitrary function of the offspring's phenotype. We derive expressions needed to calculate the mean and variance of these test statistics under the null hypothesis of no linkage. To give some guidance on using the FBAT method, we present three simple data analysis strategies for different phenotypes: dichotomous (affection status), quantitative and censored (eg, the age of onset). We illustrate the approach by applying it to candidate gene data of the NIMH Alzheimer Disease Initiative. We show that the RC-TDT is equivalent to a special case of the FBAT method. This result allows us to generalise the RC-TDT to dominant, recessive and multi-allelic marker codings. Simulations compare the resulting FBAT tests to the RC-TDT and the S-TDT. The FBAT software is freely available." }, { "pmid": "21941284", "title": "A decade of exploring the cancer epigenome - biological and translational implications.", "abstract": "The past decade has highlighted the central role of epigenetic processes in cancer causation, progression and treatment. Next-generation sequencing is providing a window for visualizing the human epigenome and how it is altered in cancer. This view provides many surprises, including linking epigenetic abnormalities to mutations in genes that control DNA methylation, the packaging and the function of DNA in chromatin, and metabolism. Epigenetic alterations are leading candidates for the development of specific markers for cancer detection, diagnosis and prognosis. 
The enzymatic processes that control the epigenome present new opportunities for deriving therapeutic strategies designed to reverse transcriptional abnormalities that are inherent to the cancer epigenome." }, { "pmid": "23287718", "title": "Multiplex genome engineering using CRISPR/Cas systems.", "abstract": "Functional elucidation of causal genetic variants and elements requires precise genome editing technologies. The type II prokaryotic CRISPR (clustered regularly interspaced short palindromic repeats)/Cas adaptive immune system has been shown to facilitate RNA-guided site-specific DNA cleavage. We engineered two different type II CRISPR/Cas systems and demonstrate that Cas9 nucleases can be directed by short RNAs to induce precise cleavage at endogenous genomic loci in human and mouse cells. Cas9 can also be converted into a nicking enzyme to facilitate homology-directed repair with minimal mutagenic activity. Lastly, multiple guide sequences can be encoded into a single CRISPR array to enable simultaneous editing of several sites within the mammalian genome, demonstrating easy programmability and wide applicability of the RNA-guided nuclease technology." }, { "pmid": "21269473", "title": "The eMERGE Network: a consortium of biorepositories linked to electronic medical records data for conducting genomic studies.", "abstract": "INTRODUCTION\nThe eMERGE (electronic MEdical Records and GEnomics) Network is an NHGRI-supported consortium of five institutions to explore the utility of DNA repositories coupled to Electronic Medical Record (EMR) systems for advancing discovery in genome science. eMERGE also includes a special emphasis on the ethical, legal and social issues related to these endeavors.\n\n\nORGANIZATION\nThe five sites are supported by an Administrative Coordinating Center. 
Setting of network goals is initiated by working groups: (1) Genomics, (2) Informatics, and (3) Consent & Community Consultation, which also includes active participation by investigators outside the eMERGE funded sites, and (4) Return of Results Oversight Committee. The Steering Committee, comprised of site PIs and representatives and NHGRI staff, meet three times per year, once per year with the External Scientific Panel.\n\n\nCURRENT PROGRESS\nThe primary site-specific phenotypes for which samples have undergone genome-wide association study (GWAS) genotyping are cataract and HDL, dementia, electrocardiographic QRS duration, peripheral arterial disease, and type 2 diabetes. A GWAS is also being undertaken for resistant hypertension in ≈ 2,000 additional samples identified across the network sites, to be added to data available for samples already genotyped. Funded by ARRA supplements, secondary phenotypes have been added at all sites to leverage the genotyping data, and hypothyroidism is being analyzed as a cross-network phenotype. Results are being posted in dbGaP. Other key eMERGE activities include evaluation of the issues associated with cross-site deployment of common algorithms to identify cases and controls in EMRs, data privacy of genomic and clinically-derived data, developing approaches for large-scale meta-analysis of GWAS data across five sites, and a community consultation and consent initiative at each site.\n\n\nFUTURE ACTIVITIES\nPlans are underway to expand the network in diversity of populations and incorporation of GWAS findings into clinical care.\n\n\nSUMMARY\nBy combining advanced clinical informatics, genome science, and community consultation, eMERGE represents a first step in the development of data-driven approaches to incorporate genomic information into routine healthcare delivery." 
}, { "pmid": "17502601", "title": "The human disease network.", "abstract": "A network of disorders and disease genes linked by known disorder-gene associations offers a platform to explore in a single graph-theoretic framework all known phenotype and disease gene associations, indicating the common genetic origin of many diseases. Genes associated with similar disorders show both higher likelihood of physical interactions between their products and higher expression profiling similarity for their transcripts, supporting the existence of distinct disease-specific functional modules. We find that essential human genes are likely to encode hub proteins and are expressed widely in most tissues. This suggests that disease genes also would play a central role in the human interactome. In contrast, we find that the vast majority of disease genes are nonessential and show no tendency to encode hub proteins, and their expression pattern indicates that they are localized in the functional periphery of the network. A selection-based model explains the observed difference between essential and disease genes and also suggests that diseases caused by somatic mutations should not be peripheral, a prediction we confirm for cancer genes." }, { "pmid": "25038555", "title": "Limestone: high-throughput candidate phenotype generation via tensor factorization.", "abstract": "The rapidly increasing availability of electronic health records (EHRs) from multiple heterogeneous sources has spearheaded the adoption of data-driven approaches for improved clinical research, decision making, prognosis, and patient management. Unfortunately, EHR data do not always directly and reliably map to medical concepts that clinical researchers need or use. Some recent studies have focused on EHR-derived phenotyping, which aims at mapping the EHR data to specific medical concepts; however, most of these approaches require labor intensive supervision from experienced clinical professionals. 
Furthermore, existing approaches are often disease-centric and specialized to the idiosyncrasies of the information technology and/or business practices of a single healthcare organization. In this paper, we propose Limestone, a nonnegative tensor factorization method to derive phenotype candidates with virtually no human supervision. Limestone represents the data source interactions naturally using tensors (a generalization of matrices). In particular, we investigate the interaction of diagnoses and medications among patients. The resulting tensor factors are reported as phenotype candidates that automatically reveal patient clusters on specific diagnoses and medications. Using the proposed method, multiple phenotypes can be identified simultaneously from data. We demonstrate the capability of Limestone on a cohort of 31,815 patient records from the Geisinger Health System. The dataset spans 7years of longitudinal patient records and was initially constructed for a heart failure onset prediction study. Our experiments demonstrate the robustness, stability, and the conciseness of Limestone-derived phenotypes. Our results show that using only 40 phenotypes, we can outperform the original 640 features (169 diagnosis categories and 471 medication types) to achieve an area under the receiver operator characteristic curve (AUC) of 0.720 (95% CI 0.715 to 0.725). Moreover, in consultation with a medical expert, we confirmed 82% of the top 50 candidates automatically extracted by Limestone are clinically meaningful." }, { "pmid": "22127105", "title": "Applying active learning to assertion classification of concepts in clinical text.", "abstract": "Supervised machine learning methods for clinical natural language processing (NLP) research require a large number of annotated samples, which are very expensive to build because of the involvement of physicians. Active learning, an approach that actively samples from a large pool, provides an alternative solution. 
Its major goal in classification is to reduce the annotation effort while maintaining the quality of the predictive model. However, few studies have investigated its uses in clinical NLP. This paper reports an application of active learning to a clinical text classification task: to determine the assertion status of clinical concepts. The annotated corpus for the assertion classification task in the 2010 i2b2/VA Clinical NLP Challenge was used in this study. We implemented several existing and newly developed active learning algorithms and assessed their uses. The outcome is reported in the global ALC score, based on the Area under the average Learning Curve of the AUC (Area Under the Curve) score. Results showed that when the same number of annotated samples was used, active learning strategies could generate better classification models (best ALC-0.7715) than the passive learning method (random sampling) (ALC-0.7411). Moreover, to achieve the same classification performance, active learning strategies required fewer samples than the random sampling method. For example, to achieve an AUC of 0.79, the random sampling method used 32 samples, while our best active learning algorithm required only 12 samples, a reduction of 62.5% in manual annotation effort." }, { "pmid": "16723398", "title": "Modularity and community structure in networks.", "abstract": "Many networks of interest in the sciences, including social networks, computer networks, and metabolic and regulatory networks, are found to divide naturally into communities or modules. The problem of detecting and characterizing this community structure is one of the outstanding issues in the study of networked systems. One highly effective approach is the optimization of the quality function known as \"modularity\" over the possible divisions of a network. 
Here I show that the modularity can be expressed in terms of the eigenvectors of a characteristic matrix for the network, which I call the modularity matrix, and that this expression leads to a spectral algorithm for community detection that returns results of demonstrably higher quality than competing methods in shorter running times. I illustrate the method with applications to several published network data sets." }, { "pmid": "25841328", "title": "Building bridges across electronic health record systems through inferred phenotypic topics.", "abstract": "OBJECTIVE\nData in electronic health records (EHRs) is being increasingly leveraged for secondary uses, ranging from biomedical association studies to comparative effectiveness. To perform studies at scale and transfer knowledge from one institution to another in a meaningful way, we need to harmonize the phenotypes in such systems. Traditionally, this has been accomplished through expert specification of phenotypes via standardized terminologies, such as billing codes. However, this approach may be biased by the experience and expectations of the experts, as well as the vocabulary used to describe such patients. The goal of this work is to develop a data-driven strategy to (1) infer phenotypic topics within patient populations and (2) assess the degree to which such topics facilitate a mapping across populations in disparate healthcare systems.\n\n\nMETHODS\nWe adapt a generative topic modeling strategy, based on latent Dirichlet allocation, to infer phenotypic topics. We utilize a variance analysis to assess the projection of a patient population from one healthcare system onto the topics learned from another system. The consistency of learned phenotypic topics was evaluated using (1) the similarity of topics, (2) the stability of a patient population across topics, and (3) the transferability of a topic across sites. 
We evaluated our approaches using four months of inpatient data from two geographically distinct healthcare systems: (1) Northwestern Memorial Hospital (NMH) and (2) Vanderbilt University Medical Center (VUMC).\n\n\nRESULTS\nThe method learned 25 phenotypic topics from each healthcare system. The average cosine similarity between matched topics across the two sites was 0.39, a remarkably high value given the very high dimensionality of the feature space. The average stability of VUMC and NMH patients across the topics of two sites was 0.988 and 0.812, respectively, as measured by the Pearson correlation coefficient. Also the VUMC and NMH topics have smaller variance of characterizing patient population of two sites than standard clinical terminologies (e.g., ICD9), suggesting they may be more reliably transferred across hospital systems.\n\n\nCONCLUSIONS\nPhenotypic topics learned from EHR data can be more stable and transferable than billing codes for characterizing the general status of a patient population. This suggests that EHR-based research may be able to leverage such phenotypic topics as variables when pooling patient populations in predictive models." }, { "pmid": "2579841", "title": "Platelet hyperfunction in patients with chronic airways obstruction.", "abstract": "Platelet aggregation (PA) and plasma beta-thromboglobulin (beta TG) values were evaluated in 40 patients affected by chronic airway obstruction (CAO). PA and beta TG were significantly higher than those observed in normal subjects. Beta TG plasma levels were inversely correlated with PaO2, directly with PaCO2 and [H+]. Two h after a venesection of 300-400 ml, no change of beta TG and PA was seen in 10 healthy subjects, while a significant increase of beta TG and PA values was observed in 29 patients. The investigation suggests that in patients with CAO in vivo platelet activation is present." } ]
Frontiers in Psychology
27721800
PMC5033969
10.3389/fpsyg.2016.01429
Referential Choice: Predictability and Its Limits
We report a study of referential choice in discourse production, understood as the choice between various types of referential devices, such as pronouns and full noun phrases. Our goal is to predict referential choice, and to explore to what extent such prediction is possible. Our approach to referential choice includes a cognitively informed theoretical component, corpus analysis, machine learning methods and experimentation with human participants. Machine learning algorithms make use of 25 factors, including referent’s properties (such as animacy and protagonism), the distance between a referential expression and its antecedent, the antecedent’s syntactic role, and so on. Having found the predictions of our algorithm to coincide with the original almost 90% of the time, we hypothesized that fully accurate prediction is not possible because, in many situations, more than one referential option is available. This hypothesis was supported by an experimental study, in which participants answered questions about either the original text in the corpus, or about a text modified in accordance with the algorithm’s prediction. Proportions of correct answers to these questions, as well as participants’ rating of the questions’ difficulty, suggested that divergences between the algorithm’s prediction and the original referential device in the corpus occur overwhelmingly in situations where the referential choice is not categorical.
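The abstract above mentions machine learning over 25 referential factors; as a toy illustration of how such a factor-based predictor of referential choice might look, the sketch below trains a plain logistic regression on three invented features (antecedent distance, animacy, protagonism) with synthetically generated labels. The features, coefficients, and data are all hypothetical, not the study's actual corpus or algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented features of the kind the study describes: distance to the
# antecedent, referent animacy, protagonism. Label: 1 = pronoun, 0 = full NP.
n = 400
dist = rng.integers(1, 10, size=n).astype(float)
animate = rng.integers(0, 2, size=n).astype(float)
protagonist = rng.integers(0, 2, size=n).astype(float)

# Synthetic labels: close, animate, protagonist referents tend to be
# pronominalized, plus noise (so the choice is not fully categorical).
logits = 2.0 - 0.6 * dist + 1.0 * animate + 1.5 * protagonist
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

X = np.column_stack([np.ones(n), dist, animate, protagonist])

# Plain logistic regression fitted by gradient descent.
w = np.zeros(4)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.05 * X.T @ (p - y) / n

pred = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(float)
accuracy = float((pred == y).mean())
print(accuracy > 0.7)  # well above chance, but imperfect: the noise term
                       # leaves some contexts where either option is viable
```

The imperfect ceiling is the point: if some fraction of referential choices is genuinely non-categorical, no feature-based predictor can reach 100% accuracy, which is the hypothesis the experiments below are designed to test.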
Related Work
As was discussed in Section "Discussion: Referential Choice Is Not Always Categorical", referential variation and non-categoricity are clearly gaining attention in the modern linguistic, computational, and psycholinguistic literature. Referential variation may be due to the interlocutors' perspective taking and their efforts to coordinate cognitive processes; see, e.g., Koolen et al. (2011), Heller et al. (2012), and Baumann et al. (2014). A number of corpus-based and psycholinguistic studies have explored the various factors involved in the phenomenon of overspecification, which occurs regularly in natural language (e.g., Kaiser et al., 2011; Hendriks, 2014; Vogels et al., 2014; Fukumura and van Gompel, 2015). Kibrik (2011, pp. 56–60) proposed differentiating between three kinds of speakers' referential strategies, differing in the extent to which the speaker takes the addressee's actual cognitive state into account: egocentric, optimal, and overprotective. There is a series of recent studies addressing other aspects of referential variation, e.g., as a function of individual differences (Nieuwland and van Berkum, 2006), depending on age (Hughes and Allen, 2013; Hendriks et al., 2014) or gender (Arnold, 2015), under high cognitive load (van Rij et al., 2011; Vogels et al., 2014), and even under left prefrontal cortex stimulation (Arnold et al., 2014). These studies, on both the production and the comprehension of referential expressions, open up a whole new field in the exploration of reference. We discuss a more general kind of referential variation, probably associated with an intermediate level of referent activation. This kind of variation may occur in any discourse type. In order to test the non-categorical character of referential choice, we previously conducted two experiments based on the materials of our text corpus.
Both of these experiments were somewhat similar to the experiment from Kibrik (1999), described in Section “Discussion: Referential Choice Is Not Always Categorical” above.
In a comprehension experiment, Khudyakova (2012) tested the human ability to understand texts in which the predicted referential device diverged from the original text. Nine texts from the corpus were randomly selected, such that they contained a predicted pronoun instead of an original full NP; text length did not exceed 250 words. In addition to the nine original texts, nine modified texts were created in which the original referential device (proper name) was replaced by the one predicted by the algorithm (pronoun). Two experimental lists were formed, each containing nine texts (four texts in an original version and five in a modified one, or vice versa), so that original and modified texts alternated between the two lists.
The experiment was run online on the Virtual Experiments platform with 60 participants with expert-level command of English. Each participant was asked to read all nine texts one at a time, and to answer a set of three questions after each text. Each text appeared in full on the screen, and disappeared when the participant was presented with three multiple-choice questions about referents in the text, beginning with a WH-word. Two of those were control questions, related to referents that did not create divergences. The third question was experimental: it concerned the referent in point, that is, the one that was predicted by the algorithm differently from the original text. Questions were presented in random order. Each participant thus answered 18 control questions and nine experimental questions.
In the alleged instances of non-categorical referential choice, allowing both a full NP and a pronoun, experimental questions to proper names (original) and to pronouns (predicted) were expected to be answered with a comparable level of accuracy.
The accuracy of the answers to the experimental questions to proper names, as well as to the control questions, was found to be 84%. In seven out of nine texts, experimental questions to pronouns were answered with a comparable accuracy of 80%. We propose that in these seven instances we are dealing precisely with non-categorical referential choice, probably associated with an intermediate level of referent activation. The two remaining instances may result from the algorithm’s errors.
The processes of discourse production and comprehension are related but distinct, so we also conducted an editing experiment (Khudyakova et al., 2014), imitating referential choice as performed by a language speaker/writer. In the editing experiment, 47 participants with expert-level command of English were asked to read several texts from the corpus and choose all possible referential options for a referent at a certain point in discourse. Twenty-seven texts from the corpus were selected for that study. The texts contained 31 critical points, at which the choice of the algorithm diverged from the one in the original text. At each critical point, as well as at two other points per text (control points), a choice was offered between a description, a proper name (where appropriate), and a pronoun. Neither critical nor control points included syntactically determined pronouns. The participants edited from five to nine texts each, depending on the texts’ length. The task was to choose all appropriate options (possibly more than one).
We found that in all texts at least two referential options were proposed for each point in question, both critical and control ones.
The experiments on comprehension and editing demonstrated the variability of referential choice characteristic of the corpus texts. However, a methodological problem with these experiments was that each predicted referential expression was treated independently, whereas in real language use each referential expression depends on the previous context and creates a context for the subsequent referential expressions in the chain. In order to create texts that are more amenable to human evaluation, in the present study we introduce a flexible prediction script.
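The dependency just described, where each choice creates the context for the next, might be sketched as a sequential predictor: using a pronoun keeps the referent activated for the following mention, while a full NP resets that effect. The activation bonus, scores, and threshold below are purely hypothetical and are not the study's actual prediction script.

```python
# Hypothetical sketch of chained (context-sensitive) referential prediction:
# each device choice updates the context in which the next choice is made,
# rather than each expression being predicted independently.

def predict_chain(mentions, threshold=1.0):
    """mentions: list of dicts with a base activation 'score' for the referent
    at that point. A pronoun keeps the referent activated (bonus carried to
    the next mention); a full NP resets the bonus."""
    devices, bonus = [], 0.0
    for m in mentions:
        if m["score"] + bonus >= threshold:
            devices.append("pronoun")
            bonus = 0.5   # referent stays highly activated for the next mention
        else:
            devices.append("full NP")
            bonus = 0.0
    return devices

print(predict_chain([{"score": 1.2}, {"score": 0.8}, {"score": 0.3}]))
# ['pronoun', 'pronoun', 'full NP']
```

Note how the middle mention only reaches the pronoun threshold because of the preceding pronoun; predicted in isolation it would have been a full NP.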
[ "18449327", "11239812", "16324792", "22389109", "23356244", "25068852", "22389129", "22389094", "22496107", "16956594", "3450848", "25911154", "22389170", "25471259" ]
[ { "pmid": "18449327", "title": "The effect of additional characters on choice of referring expression: Everyone counts.", "abstract": "Two story-telling experiments examine the process of choosing between pronouns and proper names in speaking. Such choices are traditionally attributed to speakers striving to make referring expressions maximally interpretable to addressees. The experiments revealed a novel effect: even when a pronoun would not be ambiguous, the presence of another character in the discourse decreased pronoun use and increased latencies to refer to the most prominent character in the discourse. In other words, speakers were more likely to call Minnie Minnie than she when Donald was also present. Even when the referent character appeared alone in the stimulus picture, the presence of another character in the preceding discourse reduced pronouns. Furthermore, pronoun use varied with features associated with the speaker's degree of focus on the preceding discourse (e.g., narrative style and disfluency). We attribute this effect to competition for attentional resources in the speaker's representation of the discourse." }, { "pmid": "11239812", "title": "Overlapping mechanisms of attention and spatial working memory.", "abstract": "Spatial selective attention and spatial working memory have largely been studied in isolation. Studies of spatial attention have provided clear evidence that observers can bias visual processing towards specific locations, enabling faster and better processing of information at those locations than at unattended locations. We present evidence supporting the view that this process of visual selection is a key component of rehearsal in spatial working memory. Thus, although working memory has sometimes been depicted as a storage system that emerges 'downstream' of early sensory processing, current evidence suggests that spatial rehearsal recruits top-down processes that modulate the earliest stages of visual analysis."
}, { "pmid": "16324792", "title": "Interactions between attention and working memory.", "abstract": "Studies of attention and working memory address the fundamental limits in our ability to encode and maintain behaviorally relevant information, processes that are critical for goal-driven processing. Here we review our current understanding of the interactions between these processes, with a focus on how each construct encompasses a variety of dissociable phenomena. Attention facilitates target processing during both perceptual and postperceptual stages of processing, and functionally dissociated processes have been implicated in the maintenance of different kinds of information in working memory. Thus, although it is clear that these processes are closely intertwined, the nature of these interactions depends upon the specific variety of attention or working memory that is considered." }, { "pmid": "22389109", "title": "The interplay between gesture and speech in the production of referring expressions: investigating the tradeoff hypothesis.", "abstract": "The tradeoff hypothesis in the speech-gesture relationship claims that (a) when gesturing gets harder, speakers will rely relatively more on speech, and (b) when speaking gets harder, speakers will rely relatively more on gestures. We tested the second part of this hypothesis in an experimental collaborative referring paradigm where pairs of participants (directors and matchers) identified targets to each other from an array visible to both of them. We manipulated two factors known to affect the difficulty of speaking to assess their effects on the gesture rate per 100 words. The first factor, codability, is the ease with which targets can be described. The second factor, repetition, is whether the targets are old or new (having been already described once or twice). We also manipulated a third factor, mutual visibility, because it is known to affect the rate and type of gesture produced. 
None of the manipulations systematically affected the gesture rate. Our data are thus mostly inconsistent with the tradeoff hypothesis. However, the gesture rate was sensitive to concurrent features of referring expressions, suggesting that gesture parallels aspects of speech. We argue that the redundancy between speech and gesture is communicatively motivated." }, { "pmid": "23356244", "title": "Gender affects semantic competition: the effect of gender in a non-gender-marking language.", "abstract": "English speakers tend to produce fewer pronouns when a referential competitor has the same gender as the referent than otherwise. Traditionally, this gender congruence effect has been explained in terms of ambiguity avoidance (e.g., Arnold, Eisenband, Brown-Schmidt, & Trueswell, 2000; Fukumura, Van Gompel, & Pickering, 2010). However, an alternative hypothesis is that the competitor's gender congruence affects semantic competition, making the referent less accessible relative to when the competitor has a different gender (Arnold & Griffin, 2007). Experiment 1 found that even in Finnish, which is a nongendered language, the competitor's gender congruence results in fewer pronouns, supporting the semantic competition account. In Experiment 2, Finnish native speakers took part in an English version of the same experiment. The effect of gender congruence was larger in Experiment 2 than in Experiment 1, suggesting that the presence of a same-gender competitor resulted in a larger reduction in pronoun use in English than in Finnish. In contrast, other nonlinguistic similarity had similar effects in both experiments. This indicates that the effect of gender congruence in English is not entirely driven by semantic competition: Speakers also avoid gender-ambiguous pronouns." 
}, { "pmid": "25068852", "title": "Effects of order of mention and grammatical role on anaphor resolution.", "abstract": "A controversial issue in anaphoric processing has been whether processing preferences of anaphoric expressions are affected by the antecedent's grammatical role or surface position. Using eye tracking, Experiment 1 examined the comprehension of pronouns during reading, which revealed shorter reading times in the pronoun region and later regions when the antecedent was the subject than when it was the prepositional object. There was no effect of antecedent position. Experiment 2 showed that the choice between pronouns and repeated names during language production is also primarily affected by the antecedent's grammatical role. Experiment 3 examined the comprehension of repeated names, showing a clear effect of antecedent position. Reading times in the name region and in later regions were longer when the antecedent was 1st mentioned than 2nd mentioned, whereas the antecedent's grammatical role only affected regression measures in the name region, showing more processing difficulty with a subject than prepositional-object antecedent. Thus, the processing of pronouns is primarily driven by antecedent grammatical role rather than position, whereas the processing of repeated names is most strongly affected by position, suggesting that different representations and processing constraints underlie the processing of pronouns and names." 
}, { "pmid": "22389129", "title": "Underspecification of cognitive status in reference production: some empirical predictions.", "abstract": "Within the Givenness Hierarchy framework of Gundel, Hedberg, and Zacharski (1993), lexical items included in referring forms are assumed to conventionally encode two kinds of information: conceptual information about the speaker's intended referent and procedural information about the assumed cognitive status of that referent in the mind of the addressee, the latter encoded by various determiners and pronouns. This article focuses on effects of underspecification of cognitive status, establishing that, although salience and accessibility play an important role in reference processing, the Givenness Hierarchy itself is not a hierarchy of degrees of salience/accessibility, contrary to what has often been assumed. We thus show that the framework is able to account for a number of experimental results in the literature without making additional assumptions about form-specific constraints associated with different referring forms." }, { "pmid": "22389094", "title": "To name or to describe: shared knowledge affects referential form.", "abstract": "The notion of common ground is important for the production of referring expressions: In order for a referring expression to be felicitous, it has to be based on shared information. But determining what information is shared and what information is privileged may require gathering information from multiple sources, and constantly coordinating and updating them, which might be computationally too intensive to affect the earliest moments of production. Previous work has found that speakers produce overinformative referring expressions, which include privileged names, violating Grice's Maxims, and concluded that this is because they do not mark the distinction between shared and privileged information. 
We demonstrate that speakers are in fact quite effective in marking this distinction in the form of their utterances. Nonetheless, under certain circumstances, speakers choose to overspecify privileged names." }, { "pmid": "22496107", "title": "Managing ambiguity in reference generation: the role of surface structure.", "abstract": "This article explores the role of surface ambiguities in referring expressions, and how the risk of such ambiguities should be taken into account by an algorithm that generates referring expressions, if these expressions are to be optimally effective for a hearer. We focus on the ambiguities that arise when adjectives occur in coordinated structures. The central idea is to use statistical information about lexical co-occurrence to estimate which interpretation of a phrase is most likely for human readers, and to avoid generating phrases where misunderstandings are likely. Various aspects of the problem were explored in three experiments in which responses by human participants provided evidence about which reading was most likely for certain phrases, which phrases were deemed most suitable for particular referents, and the speed at which various phrases were read. We found a preference for ''clear'' expressions to ''unclear'' ones, but if several of the expressions are ''clear,'' then brief expressions are preferred over non-brief ones even though the brief ones are syntactically ambiguous and the non-brief ones are not; the notion of clarity was made precise using Kilgarriff's Word Sketches. We outline an implemented algorithm that generates noun phrases conforming to our hypotheses." }, { "pmid": "16956594", "title": "Individual differences and contextual bias in pronoun resolution: evidence from ERPs.", "abstract": "Although we usually have no trouble finding the right antecedent for a pronoun, the co-reference relations between pronouns and antecedents in everyday language are often 'formally' ambiguous. 
But a pronoun is only really ambiguous if a reader or listener indeed perceives it to be ambiguous. Whether this is the case may depend on at least two factors: the language processing skills of an individual reader, and the contextual bias towards one particular referential interpretation. In the current study, we used event related brain potentials (ERPs) to explore how both these factors affect the resolution of referentially ambiguous pronouns. We compared ERPs elicited by formally ambiguous and non-ambiguous pronouns that were embedded in simple sentences (e.g., \"Jennifer Lopez told Madonna that she had too much money.\"). Individual differences in language processing skills were assessed with the Reading Span task, while the contextual bias of each sentence (up to the critical pronoun) had been assessed in a referential cloze pretest. In line with earlier research, ambiguous pronouns elicited a sustained, frontal negative shift relative to non-ambiguous pronouns at the group-level. The size of this effect was correlated with Reading Span score, as well as with contextual bias. These results suggest that whether a reader perceives a formally ambiguous pronoun to be ambiguous is subtly co-determined by both individual language processing skills and contextual bias." }, { "pmid": "3450848", "title": "A comparison of the two one-sided tests procedure and the power approach for assessing the equivalence of average bioavailability.", "abstract": "The statistical test of hypothesis of no difference between the average bioavailabilities of two drug formulations, usually supplemented by an assessment of what the power of the statistical test would have been if the true averages had been inequivalent, continues to be used in the statistical analysis of bioavailability/bioequivalence studies. 
In the present article, this Power Approach (which in practice usually consists of testing the hypothesis of no difference at level 0.05 and requiring an estimated power of 0.80) is compared to another statistical approach, the Two One-Sided Tests Procedure, which leads to the same conclusion as the approach proposed by Westlake based on the usual (shortest) 1-2 alpha confidence interval for the true average difference. It is found that for the specific choice of alpha = 0.05 as the nominal level of the one-sided tests, the two one-sided tests procedure has uniformly superior properties to the power approach in most cases. The only cases where the power approach has superior properties when the true averages are equivalent correspond to cases where the chance of concluding equivalence with the power approach when the true averages are not equivalent exceeds 0.05. With appropriate choice of the nominal level of significance of the one-sided tests, the two one-sided tests procedure always has uniformly superior properties to the power approach. The two one-sided tests procedure is compared to the procedure proposed by Hauck and Anderson." }, { "pmid": "25911154", "title": "Working memory capacity and the scope and control of attention.", "abstract": "Complex span and visual arrays are two common measures of working memory capacity that are respectively treated as measures of attention control and storage capacity. A recent analysis of these tasks concluded that (1) complex span performance has a relatively stronger relationship to fluid intelligence and (2) this is due to the requirement that people engage control processes while performing this task. The present study examines the validity of these conclusions by examining two large data sets that include a more diverse set of visual arrays tasks and several measures of attention control. We conclude that complex span and visual arrays account for similar amounts of variance in fluid intelligence. 
The disparity relative to the earlier analysis is attributed to the present study involving a more complete measure of the latent ability underlying the performance of visual arrays. Moreover, we find that both types of working memory task have strong relationships to attention control. This indicates that the ability to engage attention in a controlled manner is a critical aspect of working memory capacity, regardless of the type of task that is used to measure this construct." }, { "pmid": "22389170", "title": "Toward a computational psycholinguistics of reference production.", "abstract": "This article introduces the topic ''Production of Referring Expressions: Bridging the Gap between Computational and Empirical Approaches to Reference'' of the journal Topics in Cognitive Science. We argue that computational and psycholinguistic approaches to reference production can benefit from closer interaction, and that this is likely to result in the construction of algorithms that differ markedly from the ones currently known in the computational literature. We focus particularly on determinism, the feature of existing algorithms that is perhaps most clearly at odds with psycholinguistic results, discussing how future algorithms might include non-determinism, and how new psycholinguistic experiments could inform the development of such algorithms." }, { "pmid": "25471259", "title": "How Cognitive Load Influences Speakers' Choice of Referring Expressions.", "abstract": "We report on two experiments investigating the effect of an increased cognitive load for speakers on the choice of referring expressions. Speakers produced story continuations to addressees, in which they referred to characters that were either salient or non-salient in the discourse. In Experiment 1, referents that were salient for the speaker were non-salient for the addressee, and vice versa. In Experiment 2, all discourse information was shared between speaker and addressee. 
Cognitive load was manipulated by the presence or absence of a secondary task for the speaker. The results show that speakers under load are more likely to produce pronouns, at least when referring to less salient referents. We take this finding as evidence that speakers under load have more difficulties taking discourse salience into account, resulting in the use of expressions that are more economical for themselves." } ]
Journal of Cheminformatics
28316646
PMC5034616
10.1186/s13321-016-0164-0
An ensemble model of QSAR tools for regulatory risk assessment
Quantitative structure–activity relationships (QSARs) are theoretical models that relate a quantitative measure of chemical structure to a physical property or a biological effect. QSAR predictions can be used for chemical risk assessment for protection of human and environmental health, which makes them interesting to regulators, especially in the absence of experimental data. For compatibility with regulatory use, QSAR models should be transparent, reproducible and optimized to minimize the number of false negatives. In silico QSAR tools are gaining wide acceptance as a faster alternative to otherwise time-consuming clinical and animal testing methods. However, different QSAR tools often make conflicting predictions for a given chemical and may also vary in their predictive performance across different chemical datasets. In a regulatory context, conflicting predictions raise interpretation, validation and adequacy concerns. To address these concerns, ensemble learning techniques in the machine learning paradigm can be used to integrate predictions from multiple tools. By leveraging various underlying QSAR algorithms and training datasets, the resulting consensus prediction should yield better overall predictive ability. We present a novel ensemble QSAR model using Bayesian classification. The model includes a tunable cut-off parameter that allows selection of the desired trade-off between model sensitivity and specificity. The predictive performance of the ensemble model is compared with four in silico tools (Toxtree, Lazar, OECD Toolbox, and Danish QSAR) to predict carcinogenicity for a dataset of air toxins (332 chemicals) and a subset of the gold carcinogenic potency database (480 chemicals).
Leave-one-out cross-validation results show that the ensemble model achieves the best trade-off between sensitivity and specificity (accuracy: 83.8 % and 80.4 %, and balanced accuracy: 80.6 % and 80.8 %) and the highest inter-rater agreement [kappa (κ): 0.63 and 0.62] for both datasets. The ROC curves demonstrate the utility of the cut-off feature in the predictive ability of the ensemble model. This feature provides regulators with an additional control for grading a chemical based on the severity of the toxic endpoint under study.
Electronic supplementary material
The online version of this article (doi:10.1186/s13321-016-0164-0) contains supplementary material, which is available to authorized users.
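A minimal sketch of a Bayesian consensus with a tunable cut-off, in the spirit of the abstract: each tool's binary output is weighted by its sensitivity and specificity in a naive-Bayes posterior, and the cut-off sets the sensitivity/specificity trade-off. The tool names, performance values, and prior below are made-up illustration values, not the paper's fitted model.

```python
# Illustrative sketch of a Bayesian consensus over binary QSAR tool
# predictions. All parameters are invented; the paper's actual model,
# training data, and tool performances may differ.

def posterior_toxic(predictions, tools, prior=0.5):
    """Naive-Bayes posterior P(toxic | tool outputs).

    predictions: dict tool -> 1 (predicted toxic) or 0 (non-toxic)
    tools: dict tool -> (sensitivity, specificity)
    """
    p_tox, p_non = prior, 1.0 - prior
    for name, pred in predictions.items():
        se, sp = tools[name]
        # Likelihood of this tool's output under each class.
        p_tox *= se if pred == 1 else (1.0 - se)
        p_non *= (1.0 - sp) if pred == 1 else sp
    return p_tox / (p_tox + p_non)

def classify(predictions, tools, cutoff=0.5):
    """Lowering the cut-off raises sensitivity at the cost of specificity."""
    return int(posterior_toxic(predictions, tools) >= cutoff)

tools = {"ToolA": (0.80, 0.70), "ToolB": (0.75, 0.75), "ToolC": (0.70, 0.80)}
preds = {"ToolA": 1, "ToolB": 0, "ToolC": 1}
print(round(posterior_toxic(preds, tools), 3))          # 0.757
print(classify(preds, tools, cutoff=0.3),
      classify(preds, tools, cutoff=0.9))               # 1 0
```

Sweeping the cut-off over [0, 1] and recording sensitivity and specificity at each value is exactly what traces out the ROC curve mentioned above; a regulator can then pick a stricter (lower) cut-off for severe endpoints.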
Related work
There are studies that investigate methods for combining predictions from multiple QSAR tools to gain better predictive performance for various toxic endpoints: (1) Several QSAR models were developed and compared using different modeling algorithms (multiple linear regression, radial basis function neural network and support vector machines) to develop hybrid models for bioconcentration factor (BCF) prediction [17]; (2) QSAR models implementing cut-off rules were used to determine a reliable and conservative consensus prediction from two models implemented in VEGA [18] for BCF prediction [19]; (3) The predictive performance of four QSAR tools (Derek [20, 21], Leadscope [22], MultiCASE [23] and Toxtree [24]) was evaluated and compared to the standard Ames assay [25] for mutagenicity prediction. Pairwise hybrid models were then developed using AND combinations (accepting positive results when both tools predict a positive) and OR combinations (accepting positive results when either one of the tools predicts a positive) [25–27]; (4) A similar AND/OR approach was implemented for the validation and construction of a hybrid QSAR model using the MultiCASE and MDL-QSAR [28] tools for carcinogenicity prediction in rodents [29]. The work was extended using more tools (BioEpisteme [30], Leadscope PDM, and Derek) to construct hybrid models using majority consensus predictions in addition to AND/OR combinations [31].
The results of these studies demonstrate that: (1) none of the QSAR tools performs significantly better than the others, and they also differ in their predictive performance depending upon the toxic endpoint and the chemical datasets under investigation; (2) hybrid models have improved overall predictive performance in comparison to individual QSAR tools; and (3) consensus-positive predictions from more than one QSAR tool improve the identification of true positives.
The underlying idea is that each QSAR model brings a different perspective on the complexity of the modeled biological system, and combining them can improve classification accuracy. However, consensus-positive methods tend to be overly conservative, risking the rejection of a potentially non-toxic chemical on the basis of a false-positive prediction. Therefore, we propose an ensemble learning approach for combining predictions from multiple QSAR tools that addresses the drawbacks of consensus-positive predictions [32, 33]. Hybrid QSAR models using ensemble approaches have been developed for various biological endpoints such as cancer classification and prediction of ADMET properties [34–36], but not for toxic endpoints. In this study, a Bayesian ensemble approach is investigated for carcinogenicity prediction, which is discussed in more detail in the next section.
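The AND/OR combination rules described in the related-work survey can be made concrete: OR-combination accepts a positive from either tool (raising sensitivity), while AND-combination requires agreement (raising specificity). The two "tools" and the ground truth below are toy data for illustration only.

```python
# Sketch of the AND/OR hybrid combination rules, applied to binary
# predictions (1 = positive/toxic) from two hypothetical tools.

def and_combine(a, b):
    """Positive only when both tools predict a positive."""
    return [int(x == 1 and y == 1) for x, y in zip(a, b)]

def or_combine(a, b):
    """Positive when either tool predicts a positive."""
    return [int(x == 1 or y == 1) for x, y in zip(a, b)]

def sensitivity(pred, truth):
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    return tp / sum(truth)

# Toy ground truth and two imperfect tools (illustrative data only).
truth  = [1, 1, 1, 1, 0, 0, 0, 0]
tool_a = [1, 1, 0, 0, 1, 0, 0, 0]
tool_b = [1, 0, 1, 0, 0, 1, 0, 0]

# OR improves true-positive identification; AND is more conservative.
print(sensitivity(or_combine(tool_a, tool_b), truth),   # 0.75
      sensitivity(and_combine(tool_a, tool_b), truth))  # 0.25
```

This illustrates the drawback noted above: the OR rule (and, symmetrically, consensus-positive schemes) buys its extra true positives by also accepting more false positives, which is what motivates a probabilistic ensemble with an explicit cut-off instead.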
[ "17643090", "22771339", "15226221", "18405842", "13677480", "12896862", "15170526", "21504870", "8564854", "12896859", "22316153", "18954891", "23624006", "1679649", "11128088", "768755", "3418743", "21534561", "17703860", "20020914", "15921468", "21509786", "23343412", "15883903", "17283280" ]
[ { "pmid": "17643090", "title": "The application of discovery toxicology and pathology towards the design of safer pharmaceutical lead candidates.", "abstract": "Toxicity is a leading cause of attrition at all stages of the drug development process. The majority of safety-related attrition occurs preclinically, suggesting that approaches to identify 'predictable' preclinical safety liabilities earlier in the drug development process could lead to the design and/or selection of better drug candidates that have increased probabilities of becoming marketed drugs. In this Review, we discuss how the early application of preclinical safety assessment--both new molecular technologies as well as more established approaches such as standard repeat-dose rodent toxicology studies--can identify predictable safety issues earlier in the testing paradigm. The earlier identification of dose-limiting toxicities will provide chemists and toxicologists the opportunity to characterize the dose-limiting toxicities, determine structure-toxicity relationships and minimize or circumvent adverse safety liabilities." }, { "pmid": "22771339", "title": "Toxicokinetics as a key to the integrated toxicity risk assessment based primarily on non-animal approaches.", "abstract": "Toxicokinetics (TK) is the endpoint that informs about the penetration into and fate within the body of a toxic substance, including the possible emergence of metabolites. Traditionally, the data needed to understand those phenomena have been obtained in vivo. Currently, with a drive towards non-animal testing approaches, TK has been identified as a key element to integrate the results from in silico, in vitro and already available in vivo studies. TK is needed to estimate the range of target organ doses that can be expected from realistic human external exposure scenarios. This information is crucial for determining the dose/concentration range that should be used for in vitro testing. 
Vice versa, TK is necessary to convert the in vitro results, generated at tissue/cell or sub-cellular level, into dose response or potency information relating to the entire target organism, i.e. the human body (in vitro-in vivo extrapolation, IVIVE). Physiologically based toxicokinetic modelling (PBTK) is currently regarded as the most adequate approach to simulate human TK and extrapolate between in vitro and in vivo contexts. The fact that PBTK models are mechanism-based which allows them to be 'generic' to a certain extent (various extrapolations possible) has been critical for their success so far. The need for high-quality in vitro and in silico data on absorption, distribution, metabolism as well as excretion (ADME) as input for PBTK models to predict human dose-response curves is currently a bottleneck for integrative risk assessment." }, { "pmid": "18405842", "title": "Computational toxicology in drug development.", "abstract": "Computational tools for predicting toxicity have been envisaged for their potential to considerably impact the attrition rate of compounds in drug discovery and development. In silico techniques like knowledge-based expert systems (quantitative) structure activity relationship tools and modeling approaches may therefore help to significantly reduce drug development costs by succeeding in predicting adverse drug reactions in preclinical studies. It has been shown that commercial as well as proprietary systems can be successfully applied in the pharmaceutical industry. As the prediction has been exhaustively optimized for early safety-relevant endpoints like genotoxicity, future activities will now be directed to prevent the occurrence of undesired toxicity in patients by making these tools more relevant to human disease." 
}, { "pmid": "13677480", "title": "In silico prediction of drug toxicity.", "abstract": "It is essential, in order to minimise expensive drug failures due to toxicity being found in late development or even in clinical trials, to determine potential toxicity problems as early as possible. In view of the large libraries of compounds now being handled by combinatorial chemistry and high-throughput screening, identification of putative toxicity is advisable even before synthesis. Thus the use of predictive toxicology is called for. A number of in silico approaches to toxicity prediction are discussed. Quantitative structure-activity relationships (QSARs), relating mostly to specific chemical classes, have long been used for this purpose, and exist for a wide range of toxicity endpoints. However, QSARs also exist for the prediction of toxicity of very diverse libraries, although often such QSARs are of the classification type; that is, they predict simply whether or not a compound is toxic, and do not give an indication of the level of toxicity. Examples are given of all of these. A number of expert systems are available for toxicity prediction, most of them covering a range of toxicity endpoints. Those discussed include TOPKAT, CASE, DEREK, HazardExpert, OncoLogic and COMPACT. Comparative tests of the ability of these systems to predict carcinogenicity show that improvement is still needed. The consensus approach is recommended, whereby the results from several prediction systems are pooled." }, { "pmid": "12896862", "title": "Use of QSARs in international decision-making frameworks to predict health effects of chemical substances.", "abstract": "This article is a review of the use of quantitative (and qualitative) structure-activity relationships (QSARs and SARs) by regulatory agencies and authorities to predict acute toxicity, mutagenicity, carcinogenicity, and other health effects. 
A number of SAR and QSAR applications, by regulatory agencies and authorities, are reviewed. These include the use of simple QSAR analyses, as well as the use of multivariate QSARs, and a number of different expert system approaches." }, { "pmid": "15170526", "title": "Animal testing and alternative approaches for the human health risk assessment under the proposed new European chemicals regulation.", "abstract": "During the past 20 years the EU legislation for the notification of chemicals has focussed on new chemicals and at the same time failed to cover the evaluation of existing chemicals in Europe. Therefore, in a new EU chemicals policy (REACH, Registration, Evaluation and Authorization of Chemicals) the European Commission proposes to evaluate 30,000 chemicals within a period of 15 years. We are providing estimates of the testing requirements based on our personal experiences during the past 20 years. A realistic scenario based on an in-depth discussion of potential toxicological developments and an optimised \"tailor-made\" testing strategy shows that to meet the goals of the REACH policy, animal numbers may be significantly reduced below 10 million if industry would use in-house data from toxicity testing, which are confidential, if non-animal tests would be used, and if information from quantitative structure activity relationships (QSARs) would be applied in substance-tailored testing schemes. The procedures for evaluating the reproductive toxicity of chemicals have the strongest impact on the total number of animals bred for testing under REACH. We are assuming both an active collaboration with our colleagues in industry and substantial funding of the development and validation of advanced non-animal methods by the EU Commission, specifically in reproductive and developmental toxicity." 
}, { "pmid": "21504870", "title": "In silico toxicology models and databases as FDA Critical Path Initiative toolkits.", "abstract": "In silico toxicology methods are practical, evidence-based and high throughput, with varying accuracy. In silico approaches are of keen interest, not only to scientists in the private sector and to academic researchers worldwide, but also to the public. They are being increasingly evaluated and applied by regulators. Although there are foreseeable beneficial aspects--including maximising use of prior test data and the potential for minimising animal use for future toxicity testing--the primary use of in silico toxicology methods in the pharmaceutical sciences are as decision support information. It is possible for in silico toxicology methods to complement and strengthen the evidence for certain regulatory review processes, and to enhance risk management by supporting a more informed decision regarding priority setting for additional toxicological testing in research and product development. There are also several challenges with these continually evolving methods which clearly must be considered. This mini-review describes in silico methods that have been researched as Critical Path Initiative toolkits for predicting toxicities early in drug development based on prior knowledge derived from preclinical and clinical data at the US Food and Drug Administration, Center for Drug Evaluation and Research." }, { "pmid": "8564854", "title": "U.S. EPA regulatory perspectives on the use of QSAR for new and existing chemical evaluations.", "abstract": "As testing is not required, ecotoxicity or fate data are available for approximately 5% of the approximately 2,300 new chemicals/year (26,000 + total) submitted to the US-EPA. 
The EPA's Office of Pollution Prevention and Toxics (OPPT) regulatory program was forced to develop and rely upon QSARs to estimate the ecotoxicity and fate of most of the new chemicals evaluated for hazard and risk assessment. QSAR methods routinely result in ecotoxicity estimations of acute and chronic toxicity to fish, aquatic invertebrates, and algae, and in fate estimations of physical/chemical properties, degradation, and bioconcentration. The EPA's Toxic Substances Control Act (TSCA) Inventory of existing chemicals currently lists over 72,000 chemicals. Most existing chemicals also appear to have little or no ecotoxicity or fate data available and the OPPT new chemical QSAR methods now provide predictions and cross-checks of test data for the regulation of existing chemicals. Examples include the Toxics Release Inventory (TRI), the Design for the Environment (DfE), and the OECD/SIDS/HPV Programs. QSAR screening of the TSCA Inventory has prioritized thousands of existing chemicals for possible regulatory testing of: 1) persistent bioaccumulative chemicals, and 2) the high ecotoxicity of specific discrete organic chemicals." }, { "pmid": "12896859", "title": "Summary of a workshop on regulatory acceptance of (Q)SARs for human health and environmental endpoints.", "abstract": "The \"Workshop on Regulatory Use of (Q)SARs for Human Health and Environmental Endpoints,\" organized by the European Chemical Industry Council and the International Council of Chemical Associations, gathered more than 60 human health and environmental experts from industry, academia, and regulatory agencies from around the world. They agreed, especially industry and regulatory authorities, that the workshop initiated great potential for the further development and use of predictive models, that is, quantitative structure-activity relationships [(Q)SARs], for chemicals management in a much broader scope than is currently the case. 
To increase confidence in (Q)SAR predictions and minimization of their misuse, the workshop aimed to develop proposals for guidance and acceptability criteria. The workshop also described the broad outline of a system that would apply that guidance and acceptability criteria to a (Q)SAR when used for chemical management purposes, including priority setting, risk assessment, and classification and labeling." }, { "pmid": "22316153", "title": "The challenges involved in modeling toxicity data in silico: a review.", "abstract": "The percentage of failures in late pharmaceutical development due to toxicity has increased dramatically over the last decade or so, resulting in increased demand for new methods to rapidly and reliably predict the toxicity of compounds. In this review we discuss the challenges involved in both the building of in silico models on toxicology endpoints and their practical use in decision making. In particular, we will reflect upon the predictive strength of a number of different in silico models for a range of different endpoints, different approaches used to generate the models or rules, and limitations of the methods and the data used in model generation. Given that there exists no unique definition of a 'good' model, we will furthermore highlight the need to balance model complexity/interpretability with predictability, particularly in light of OECD/REACH guidelines. Special emphasis is put on the data and methods used to generate the in silico toxicology models, and their strengths and weaknesses are discussed. Switching to the applied side, we next review a number of toxicity endpoints, discussing the methods available to predict them and their general level of predictability (which very much depends on the endpoint considered). 
We conclude that, while in silico toxicology is a valuable tool to drug discovery scientists, much still needs to be done to, firstly, understand more completely the biological mechanisms for toxicity and, secondly, to generate more rapid in vitro models to screen compounds. With this biological understanding, and additional data available, our ability to generate more predictive in silico models should significantly improve in the future." }, { "pmid": "18954891", "title": "A new hybrid system of QSAR models for predicting bioconcentration factors (BCF).", "abstract": "The aim was to develop a reliable and practical quantitative structure-activity relationship (QSAR) model validated by strict conditions for predicting bioconcentration factors (BCF). We built up several QSAR models starting from a large data set of 473 heterogeneous chemicals, based on multiple linear regression (MLR), radial basis function neural network (RBFNN) and support vector machine (SVM) methods. To improve the results, we also applied a hybrid model, which gave better prediction than single models. All models were statistically analysed using strict criteria, including an external test set. The outliers were also examined to understand better in which cases large errors were to be expected and to improve the predictive models. The models offer more robust tools for regulatory purposes, on the basis of the statistical results and the quality check on the input data." }, { "pmid": "23624006", "title": "Integration of QSAR models for bioconcentration suitable for REACH.", "abstract": "QSAR (Quantitative Structure Activity Relationship) models can be a valuable alternative method to replace or reduce animal test required by REACH. In particular, some endpoints such as bioconcentration factor (BCF) are easier to predict and many useful models have been already developed. In this paper we describe how to integrate two popular BCF models to obtain more reliable predictions. 
In particular, the herein presented integrated model relies on the predictions of two among the most used BCF models (CAESAR and Meylan), together with the Applicability Domain Index (ADI) provided by the software VEGA. Using a set of simple rules, the integrated model selects the most reliable and conservative predictions and discards possible outliers. In this way, for the prediction of the 851 compounds included in the ANTARES BCF dataset, the integrated model discloses a R(2) (coefficient of determination) of 0.80, a RMSE (Root Mean Square Error) of 0.61 log units and a sensitivity of 76%, with a considerable improvement in respect to the CAESAR (R(2)=0.63; RMSE=0.84 log units; sensitivity 55%) and Meylan (R(2)=0.66; RMSE=0.77 log units; sensitivity 65%) without discarding too many predictions (118 out of 851). Importantly, considering solely the compounds within the new integrated ADI, the R(2) increased to 0.92, and the sensitivity to 85%, with a RMSE of 0.44 log units. Finally, the use of properly set safety thresholds applied for monitoring the so called \"suspicious\" compounds, which are those chemicals predicted in proximity of the border normally accepted to discern non-bioaccumulative from bioaccumulative substances, permitted to obtain an integrated model with sensitivity equal to 100%." }, { "pmid": "1679649", "title": "Computer prediction of possible toxic action from chemical structure; the DEREK system.", "abstract": "1. The development of DEREK, a computer-based expert system (derived from the LHASA chemical synthesis design program) for the qualitative prediction of possible toxic action of compounds on the basis of their chemical structure is described. 2. The system is able to perceive chemical sub-structures within molecules and relate these to a rulebase linking the sub-structures with likely types of toxicity. 3. Structures can be drawn in directly at a computer graphics terminal or retrieved automatically from a suitable in-house database. 
4. The system is intended to aid the selection of compounds based on toxicological considerations, or separately to indicate specific toxicological properties to be tested for early in the evaluation of a compound, so saving time, money and some laboratory animals and resources." }, { "pmid": "11128088", "title": "LeadScope: software for exploring large sets of screening data.", "abstract": "Modern approaches to drug discovery have dramatically increased the speed and quantity of compounds that are made and tested for potential potency. The task of collecting, organizing, and assimilating this information is a major bottleneck in the discovery of new drugs. We have developed LeadScope a novel, interactive computer program for visualizing, browsing, and interpreting chemical and biological screening data that can assist pharmaceutical scientists in finding promising drug candidates. The software organizes the chemical data by structural features familiar to medicinal chemists. Graphs are used to summarize the data, and structural classes are highlighted that are statistically correlated with biological activity." }, { "pmid": "3418743", "title": "Computer-assisted analysis of interlaboratory Ames test variability.", "abstract": "The interlaboratory Ames test variability of the Salmonella/microsome assay was studied by comparing 12 sets of results generated in the frame of the International Program for the Evaluation of Short-Term Tests for Carcinogens (IPESTTC). The strategy for the simultaneous analysis of test performance similarities over the whole range of chemicals involved the use of multivariate data analysis methods. The various sets of Ames test data were contrasted both against each other and against a selection of other IPESTTC tests. These tests were chosen as representing a wide range of different patterns of response to the chemicals. 
This approach allowed us both to estimate the absolute extent of the interlaboratory variability of the Ames test, and to contrast its range of variability with the overall spread of test responses. Ten of the 12 laboratories showed a high degree of experimental reproducibility; two laboratories generated clearly differentiated results, probably related to differences in the protocol of metabolic activation. The analysis also indicated that assays such as Escherichia coli WP2 and chromosomal aberrations in Chinese hamster ovary cells generated sets of results within the variability range of Salmonella; in this sense they were not complementary to Salmonella." }, { "pmid": "21534561", "title": "Comparative evaluation of in silico systems for ames test mutagenicity prediction: scope and limitations.", "abstract": "The predictive power of four commonly used in silico tools for mutagenicity prediction (DEREK, Toxtree, MC4PC, and Leadscope MA) was evaluated in a comparative manner using a large, high-quality data set, comprising both public and proprietary data (F. Hoffmann-La Roche) from 9,681 compounds tested in the Ames assay. Satisfactory performance statistics were observed on public data (accuracy, 66.4-75.4%; sensitivity, 65.2-85.2%; specificity, 53.1-82.9%), whereas a significant deterioration of sensitivity was observed in the Roche data (accuracy, 73.1-85.5%; sensitivity, 17.4-43.4%; specificity, 77.5-93.9%). As a general tendency, expert systems showed higher sensitivity and lower specificity when compared to QSAR-based tools, which displayed the opposite behavior. Possible reasons for the performance differences between the public and Roche data, relating to the experimentally inactive to active compound ratio and the different coverage of chemical space, are thoroughly discussed. 
Examples of peculiar chemical classes enriched in false negative or false positive predictions are given, and the results of the combined use of the prediction systems are described." }, { "pmid": "17703860", "title": "Comparison of MC4PC and MDL-QSAR rodent carcinogenicity predictions and the enhancement of predictive performance by combining QSAR models.", "abstract": "This report presents a comparison of the predictive performance of MC4PC and MDL-QSAR software as well as a method for combining the predictions from both programs to increase overall accuracy. The conclusions are based on 10 x 10% leave-many-out internal cross-validation studies using 1540 training set compounds with 2-year rodent carcinogenicity findings. The models were generated using the same weight of evidence scoring method previously developed [Matthews, E.J., Contrera, J.F., 1998. A new highly specific method for predicting the carcinogenic potential of pharmaceuticals in rodents using enhanced MCASE QSAR-ES software. Regul. Toxicol. Pharmacol. 28, 242-264.]. Although MC4PC and MDL-QSAR use different algorithms, their overall predictive performance was remarkably similar. Respectively, the sensitivity of MC4PC and MDL-QSAR was 61 and 63%, specificity was 71 and 75%, and concordance was 66 and 69%. Coverage for both programs was over 95% and receiver operator characteristic (ROC) intercept statistic values were above 2.00. The software programs had complimentary coverage with none of the 1540 compounds being uncovered by both MC4PC and MDL-QSAR. Merging MC4PC and MDL-QSAR predictions improved the overall predictive performance. Consensus sensitivity increased to 67%, specificity to 84%, concordance to 76%, and ROC to 4.31. Consensus rules can be tuned to reflect the priorities of the user, so that greater emphasis may be placed on predictions with high sensitivity/low false negative rates or high specificity/low false positive rates. 
Sensitivity was optimized to 75% by reclassifying all compounds predicted to be positive in MC4PC or MDL-QSAR as positive, and specificity was optimized to 89% by reclassifying all compounds predicted negative in MC4PC or MDL-QSAR as negative." }, { "pmid": "20020914", "title": "Combined Use of MC4PC, MDL-QSAR, BioEpisteme, Leadscope PDM, and Derek for Windows Software to Achieve High-Performance, High-Confidence, Mode of Action-Based Predictions of Chemical Carcinogenesis in Rodents.", "abstract": "ABSTRACT This report describes a coordinated use of four quantitative structure-activity relationship (QSAR) programs and an expert knowledge base system to predict the occurrence and the mode of action of chemical carcinogenesis in rodents. QSAR models were based upon a weight-of-evidence paradigm of carcinogenic activity that was linked to chemical structures (n = 1,572). Identical training data sets were configured for four QSAR programs (MC4PC, MDL-QSAR, BioEpisteme, and Leadscope PDM), and QSAR models were constructed for the male rat, female rat, composite rat, male mouse, female mouse, composite mouse, and rodent composite endpoints. Model predictions were adjusted to favor high specificity (>80%). Performance was shown to be affected by the method used to score carcinogenicity study findings and the ratio of the number of active to inactive chemicals in the QSAR training data set. Results demonstrated that the four QSAR programs were complementary, each detecting different profiles of carcinogens. Accepting any positive prediction from two programs showed better overall performance than either of the single programs alone; specificity, sensitivity, and Chi-square values were 72.9%, 65.9%, and 223, respectively, compared to 84.5%, 45.8%, and 151. Accepting only consensus-positive predictions using any two programs had the best overall performance and higher confidence; specificity, sensitivity, and Chi-square values were 85.3%, 57.5%, and 287, respectively. 
Specific examples are provided to demonstrate that consensus-positive predictions of carcinogenicity by two QSAR programs identified both genotoxic and nongenotoxic carcinogens and that they detected 98.7% of the carcinogens linked in this study to Derek for Windows defined modes of action." }, { "pmid": "15921468", "title": "Boosting: an ensemble learning tool for compound classification and QSAR modeling.", "abstract": "A classification and regression tool, J. H. Friedman's Stochastic Gradient Boosting (SGB), is applied to predicting a compound's quantitative or categorical biological activity based on a quantitative description of the compound's molecular structure. Stochastic Gradient Boosting is a procedure for building a sequence of models, for instance regression trees (as in this paper), whose outputs are combined to form a predicted quantity, either an estimate of the biological activity, or a class label to which a molecule belongs. In particular, the SGB procedure builds a model in a stage-wise manner by fitting each tree to the gradient of a loss function: e.g., squared error for regression and binomial log-likelihood for classification. The values of the gradient are computed for each sample in the training set, but only a random sample of these gradients is used at each stage. (Friedman showed that the well-known boosting algorithm, AdaBoost of Freund and Schapire, could be considered as a particular case of SGB.) The SGB method is used to analyze 10 cheminformatics data sets, most of which are publicly available. The results show that SGB's performance is comparable to that of Random Forest, another ensemble learning method, and are generally competitive with or superior to those of other QSAR methods. The use of SGB's variable importance with partial dependence plots for model interpretation is also illustrated." 
}, { "pmid": "21509786", "title": "Ensemble QSAR: a QSAR method based on conformational ensembles and metric descriptors.", "abstract": "Quantitative structure-activity relationship (QSAR) is the most versatile tool in computer-assisted molecular design. One conceptual drawback seen in QSAR approaches is the \"one chemical-one structure-one parameter value\" dogma where the model development is based on physicochemical description for a single molecular conformation, while ignoring the rest of the conformational space. It is well known that molecules have several low-energy conformations populated at physiological temperature, and each conformer makes a significant impact on associated properties such as biological activity. At the level of molecular interaction, the dynamics around the molecular structure is of prime essence rather than the average structure. As a step toward understanding the role of these discrete microscopic states in biological activity, we have put together a theoretically rigorous and computationally tractable formalism coined as eQSAR. In this approach, the biological activity is modeled as a function of physicochemical description for a selected set of low-energy conformers, rather than that's for a single lowest energy conformation. Eigenvalues derived from the \"Physicochemical property integrated distance matrices\" (PD-matrices) that encompass both 3D structure and physicochemical properties, have been used as descriptors; is a novel addition. eQSAR is validated on three peptide datasets and explicitly elaborated for bradykinin-potentiating peptides. The conformational ensembles were generated by a simple molecular dynamics and consensus dynamics approaches. The eQSAR models are statistically significant and possess the ability to select the most biologically relevant conformation(s) with the relevant physicochemical attributes that have the greatest meaning for description of the biological activity." 
}, { "pmid": "23343412", "title": "Interpretable, probability-based confidence metric for continuous quantitative structure-activity relationship models.", "abstract": "A great deal of research has gone into the development of robust confidence in prediction and applicability domain (AD) measures for quantitative structure-activity relationship (QSAR) models in recent years. Much of the attention has historically focused on structural similarity, which can be defined in many forms and flavors. A concept that is frequently overlooked in the realm of the QSAR applicability domain is how the local activity landscape plays a role in how accurate a prediction is or is not. In this work, we describe an approach that pairs information about both the chemical similarity and activity landscape of a test compound's neighborhood into a single calculated confidence value. We also present an approach for converting this value into an interpretable confidence metric that has a simple and informative meaning across data sets. The approach will be introduced to the reader in the context of models built upon four diverse literature data sets. The steps we will outline include the definition of similarity used to determine nearest neighbors (NN), how we incorporate the NN activity landscape with a similarity-weighted root-mean-square distance (wRMSD) value, and how that value is then calibrated to generate an intuitive confidence metric for prospective application. Finally, we will illustrate the prospective performance of the approach on five proprietary models whose predictions and confidence metrics have been tracked for more than a year." }, { "pmid": "15883903", "title": "Understanding interobserver agreement: the kappa statistic.", "abstract": "Items such as physical exam findings, radiographic interpretations, or other diagnostic tests often rely on some degree of subjective interpretation by observers. 
Studies that measure the agreement between two or more observers should include a statistic that takes into account the fact that observers will sometimes agree or disagree simply by chance. The kappa statistic (or kappa coefficient) is the most commonly used statistic for this purpose. A kappa of 1 indicates perfect agreement, whereas a kappa of 0 indicates agreement equivalent to chance. A limitation of kappa is that it is affected by the prevalence of the finding under observation. Methods to overcome this limitation have been described." } ]
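The kappa statistic described in the last reference abstract above (observed agreement corrected for agreement expected by chance) can be computed in a few lines. This is a generic illustrative sketch with hypothetical rater data, not code from any of the cited studies:

```python
# Cohen's kappa for two raters labelling the same items.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement: (observed - expected) / (1 - expected)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of items where the two raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical example: two observers labelling 10 radiographs.
a = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "pos", "neg", "neg"]
b = ["pos", "neg", "neg", "neg", "pos", "neg", "pos", "pos", "pos", "neg"]
print(round(cohens_kappa(a, b), 3))  # → 0.6
```

Here the raters agree on 8 of 10 items (observed = 0.8), but with both marginals at 50/50 the expected chance agreement is 0.5, so kappa = 0.6 — illustrating why raw percent agreement overstates reliability, as the abstract notes.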
JMIR Medical Education
27731840
PMC5041364
10.2196/mededu.4789
A Conceptual Analytics Model for an Outcome-Driven Quality Management Framework as Part of Professional Healthcare Education
Background: Preparing the future health care professional workforce in a changing world is a significant undertaking. Educators and other decision makers look to evidence-based knowledge to improve quality of education. Analytics, the use of data to generate insights and support decisions, have been applied successfully across numerous application domains. Health care professional education is one area where great potential is yet to be realized. Previous research on Academic and Learning analytics has mainly focused on technical issues. The focus of this study relates to its practical implementation in the setting of health care education. Objective: The aim of this study is to create a conceptual model for a deeper understanding of the synthesizing process, and of transforming data into information to support educators’ decision making. Methods: A deductive case study approach was applied to develop the conceptual model. Results: The analytics loop works both in theory and in practice. The conceptual model encompasses the underlying data, the quality indicators, and decision support for educators. Conclusions: The model illustrates how a theory can be applied to a traditional data-driven analytics approach, and alongside the context- or need-driven analytics approach.
Related Work: Educational Informatics is a multidisciplinary research area that uses Information and Communication Technology (ICT) in education. It has many sub-disciplines, a number of which focus on learning or teaching (eg, simulation), and others that focus on administration of educational programs (eg, curriculum mapping and analytics). Within the area of analytics, it is possible to identify work focusing on the technical challenges (eg, educational data mining), the educational challenges (eg, Learning analytics), or the administrative challenges (eg, Academic- and Action analytics) [8]. The Academic- and Learning analytics fields emerged in early 2005. The major factors driving their development are technological, educational, and political. Development of the necessary techniques for data-driven analytics and decision support began in the early 20th century. Higher education institutions are collecting more data than ever before. However, most of these data are not used at all, or they are used for purposes other than addressing strategic questions. Educational institutions face bigger challenges than ever before, including increasing requirements for excellence, internationalization, the emergence of new sciences, new markets, and new educational forms. The potential benefits of analytics for applications such as resource optimization and automatization of multiple administrative functions (alerts, reports, and recommendations) have been described in the literature [9,10].
[ "2294449", "11141156", "15523387", "20054502", "25160372" ]
[ { "pmid": "20054502", "title": "Recommendations of the International Medical Informatics Association (IMIA) on Education in Biomedical and Health Informatics. First Revision.", "abstract": "Objective: The International Medical Informatics Association (IMIA) agreed on revising the existing international recommendations in health informatics/medical informatics education. These should help to establish courses, course tracks or even complete programs in this field, to further develop existing educational activities in the various nations and to support international initiatives concerning education in biomedical and health informatics (BMHI), particularly international activities in educating BMHI specialists and the sharing of courseware. Method: An IMIA task force, nominated in 2006, worked on updating the recommendations' first version. These updates have been broadly discussed and refined by members of IMIA's National Member Societies, IMIA's Academic Institutional Members and by members of IMIA's Working Group on Health and Medical Informatics Education. Results and Conclusions: The IMIA recommendations center on educational needs for health care professionals to acquire knowledge and skills in information processing and information and communication technology. The educational needs are described as a three-dimensional framework. The dimensions are: 1) professionals in health care (e.g. physicians, nurses, BMHI professionals), 2) type of specialization in BMHI (IT users, BMHI specialists), and 3) stage of career progression (bachelor, master, doctorate). Learning outcomes are defined in terms of knowledge and practical skills for health care professionals in their role a) as IT user and b) as BMHI specialist. 
Recommendations are given for courses/course tracks in BMHI as part of educational programs in medicine, nursing, health care management, dentistry, pharmacy, public health, health record administration, and informatics/computer science as well as for dedicated programs in BMHI (with bachelor, master or doctor degree). To support education in BMHI, IMIA offers to award a certificate for high-quality BMHI education. It supports information exchange on programs and courses in BMHI through its Working Group on Health and Medical Informatics Education." }, { "pmid": "25160372", "title": "Big data in medical informatics: improving education through visual analytics.", "abstract": "A continuous effort to improve healthcare education today is currently driven from the need to create competent health professionals able to meet healthcare demands. Limited research reporting how educational data manipulation can help in healthcare education improvement. The emerging research field of visual analytics has the advantage to combine big data analysis and manipulation techniques, information and knowledge representation, and human cognitive strength to perceive and recognise visual patterns. The aim of this study was therefore to explore novel ways of representing curriculum and educational data using visual analytics. Three approaches of visualization and representation of educational data were presented. Five competencies at undergraduate medical program level addressed in courses were identified to inaccurately correspond to higher education board competencies. Different visual representations seem to have a potential in impacting on the ability to perceive entities and connections in the curriculum data." } ]
Scientific Reports
27686748
PMC5043229
10.1038/srep34181
Accuracy Improvement for Predicting Parkinson’s Disease Progression
Parkinson’s disease (PD) is a member of a larger group of neuromotor diseases marked by the progressive death of dopamine-producing cells in the brain. Computational tools for Parkinson’s disease that exploit medical data are highly desirable, since they can help people discover their risk of the disease at an early stage and thereby alleviate its symptoms. This paper proposes a new hybrid intelligent system for the prediction of PD progression using noise removal, clustering and prediction methods. Principal Component Analysis (PCA) and Expectation Maximization (EM) are respectively employed to address the multi-collinearity problems in the experimental datasets and to cluster the data. We then apply an Adaptive Neuro-Fuzzy Inference System (ANFIS) and Support Vector Regression (SVR) for the prediction of PD progression. Experimental results on public Parkinson’s datasets show that the proposed method remarkably improves the accuracy of prediction of PD progression. The hybrid intelligent system can assist medical practitioners in healthcare practice with the early detection of Parkinson’s disease.
Related Work
For the effective diagnosis of Parkinson’s Disease (PD), different types of classification methods were examined by Das30. The performance scores of the classifiers were computed with various evaluation methods; according to the resulting scores, the Neural Network (NN) classifier obtained the best result, with 92.9% accuracy. Bhattacharya and Bhatia31 used the data mining tool Weka to pre-process the dataset, on which they then used a Support Vector Machine (SVM) to distinguish people with PD from healthy people. They applied LIBSVM to find the best possible accuracy over different kernel values for the experimental dataset, and measured the accuracy of the models using Receiver Operating Characteristic (ROC) curve variation. Chen et al.13 presented a PD diagnosis system using the Fuzzy K-Nearest Neighbor (FKNN) method and compared the results of the developed FKNN-based system with those of SVM-based approaches. They also employed PCA to further improve the PD diagnosis accuracy. Using 10-fold cross-validation, the experimental results demonstrated that the FKNN-based system significantly improves the classification accuracy (96.07%) and outperforms SVM-based approaches and other methods in the literature. Ozcift32 developed a classification method based on SVM and obtained about 97% accuracy for the prediction of PD progression. Polat29 examined Fuzzy C-Means (FCM) Clustering-based Feature Weighting (FCMFW) for the detection of PD, using a K-NN classifier for the classification step and applying it to the experimental dataset with different values of k. Åström and Koker33 proposed a prediction system based on parallel NNs, in which the output of each NN was evaluated by a rule-based system for the final decision. Their experiments showed that a set of nine parallel NNs yielded an improvement of 8.4% in the prediction of PD compared to a single network. 
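The cross-validated classifier comparisons described above can be sketched as follows. This is a hypothetical illustration using scikit-learn on synthetic data, not code or data from the cited studies; the dataset, models and all parameter choices are assumptions made for the example:

```python
# Hypothetical sketch: comparing two classifiers with 10-fold
# cross-validation, in the spirit of the FKNN/SVM comparisons above.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for a 22-feature voice-measurement dataset
# (labels: PD vs. healthy); the real studies used recorded voice data.
X, y = make_classification(n_samples=200, n_features=22, n_informative=10,
                           random_state=0)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
}

# Mean 10-fold cross-validation accuracy per model.
scores = {name: cross_val_score(m, X, y, cv=10).mean()
          for name, m in models.items()}
for name, acc in scores.items():
    print(f"{name}: mean 10-fold accuracy = {acc:.3f}")
```

On a real dataset, the model with the higher mean cross-validation accuracy would be retained, which is essentially the selection procedure the cited studies report.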
Li et al.34 proposed a fuzzy-based non-linear transformation method to extend classification-related information from the original data attribute values of a small data set. Based on the transformed data set, they applied Principal Component Analysis (PCA) to extract the optimal subset of features and an SVM for predicting PD. Guo et al.35 developed a hybrid system using Expectation Maximization (EM) and Genetic Programming (GP) to construct learning feature functions from voice features in the PD context. Using projection-based learning for a meta-cognitive Radial Basis Function Network (PBL-McRBFN), Babu and Suresh (2013) implemented a gene-expression-based method for the prediction of PD progression. The capabilities of the Random Forest algorithm were tested by Peterek et al.36 for the prediction of PD progression. A hybrid intelligent system was proposed by Hariharan et al.24 using clustering (Gaussian mixture model), feature reduction and classification methods. Froelich et al.23 investigated the diagnosis of PD on the basis of characteristic features of a person’s voice. They classified individual voice samples as belonging to a sick or a healthy person using decision trees, and then used a threshold-based method for the final diagnosis of a person through the previously classified voice samples; the value of the threshold determines the minimal number of individual voice samples (indicating the disease) required for a reliable diagnosis of a sick person. Using real-world data, they achieved a classification accuracy of 90%. Eskidere et al.25 studied the performance of SVM, Least Square SVM (LS-SVM), Multilayer Perceptron NN (MLPNN), and General Regression NN (GRNN) regression methods for remote tracking of PD progression. The results of their study demonstrated that LS-SVM obtains the best accuracy of the four methods and outperforms the latest regression methods published in the literature. 
In a study by Guo et al.10 in the Central South of Mainland China, sixteen Single-Nucleotide Polymorphisms (SNPs) located in 8 genes and/or loci (SNCA, LRRK2, MAPT, GBA, HLA-DR, BST1, PARK16, and PARK17) were analysed in a cohort of 1061 PD patients and 1066 normal healthy participants. This study established that Rep1, rs356165, and rs11931074 in the SNCA gene, G2385R in the LRRK2 gene, rs4698412 in the BST1 gene, rs1564282 in PARK17, and L444P in the GBA gene have an independent and combined significant effect on PD. Finally, the study reported that SNPs in these 4 genes have a more pronounced effect on PD.
From the literature on the prediction of PD progression, we found that at present no PD diagnosis system combines Principal Component Analysis (PCA), a Gaussian mixture model with Expectation Maximization (EM), and prediction methods. This research accordingly develops an intelligent system for PD diagnosis based on these approaches. Hence, in this paper, we incorporate robust machine learning techniques and propose a new hybrid intelligent system using PCA, a Gaussian mixture model with EM, and prediction methods. Overall, in comparison with research efforts found in the literature, in this research:
- A comparative study is conducted between two robust supervised prediction techniques, Adaptive Neuro-Fuzzy Inference System (ANFIS) and Support Vector Regression (SVR).
- EM is used for data clustering. The clustering problem has been addressed in many disease diagnosis systems13,37, which reflects its broad appeal and usefulness as one of the steps in exploratory health data analysis. In this study, EM clustering is used as an unsupervised classification method to cluster the experimental dataset into similar groups.
- ANFIS and SVR are used for prediction of PD progression.
- PCA is used for dimensionality reduction and for dealing with the multi-collinearity problem in the experimental data. 
This technique has been used in developing many disease diagnosis systems to eliminate redundant information in the original health data27,28,29. Finally, a hybrid intelligent system is proposed using EM, PCA and the prediction methods, Adaptive Neuro-Fuzzy Inference System (ANFIS) and Support Vector Regression (SVR), for the prediction of PD progression.
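The proposed three-stage pipeline (PCA for decorrelation, EM-based Gaussian mixture clustering, then a regression model per cluster) can be sketched as follows. This is a minimal, hypothetical illustration with synthetic data and assumed parameters, not the paper's implementation; ANFIS is omitted because it has no standard scikit-learn implementation, so only the SVR branch is shown:

```python
# Hypothetical sketch of the hybrid pipeline: PCA -> EM (Gaussian
# mixture) clustering -> SVR fitted per cluster. Data and parameters
# are invented stand-ins, not the paper's experimental dataset.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 16))            # stand-in for voice features
y = X[:, 0] * 2.0 + rng.normal(size=300)  # stand-in for a progression score

# 1) PCA: reduce dimensionality and relieve multi-collinearity.
X_pca = PCA(n_components=8, random_state=0).fit_transform(X)

# 2) EM: Gaussian mixture clustering of the reduced data.
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X_pca)

# 3) Prediction: fit one SVR per cluster and predict within it.
predictions = np.empty_like(y)
for k in np.unique(labels):
    mask = labels == k
    svr = SVR(kernel="rbf", C=1.0).fit(X_pca[mask], y[mask])
    predictions[mask] = svr.predict(X_pca[mask])
```

Clustering before regression lets each SVR specialize on a homogeneous subgroup of patients, which is the rationale the paper gives for combining EM with the prediction stage.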
[ "20082967", "23711400", "27184740", "22387368", "23154271", "21556377", "26618044", "25623333", "22387592", "25064009", "12777365", "22656184", "22733427", "22502984", "23182747", "24485390", "21547504", "21493051", "26019610", "26828106" ]
[ { "pmid": "20082967", "title": "Predicting Parkinson's disease - why, when, and how?", "abstract": "Parkinson's disease (PD) is a progressive disorder with a presymptomatic interval; that is, there is a period during which the pathologic process has begun, but motor signs required for the clinical diagnosis are absent. There is considerable interest in discovering markers to diagnose this preclinical stage. Current predictive marker development stems mainly from two principles; first, that pathologic processes occur in lower brainstem regions before substantia nigra involvement and second, that redundancy and compensatory responses cause symptoms to emerge only after advanced degeneration. Decreased olfaction has recently been demonstrated to predict PD in prospective pathologic studies, although the lead time may be relatively short and the positive predictive value and specificity are low. Screening patients for depression and personality changes, autonomic symptoms, subtle motor dysfunction on quantitative testing, sleepiness and insomnia are other potential simple markers. More invasive measures such as detailed autonomic testing, cardiac MIBG-scintigraphy, transcranial ultrasound, and dopaminergic functional imaging may be especially useful in those at high risk or for further defining risk in those identified through primary screening. Despite intriguing leads, direct testing of preclinical markers has been limited, mainly because there is no reliable way to identify preclinical disease. Idiopathic RBD is characterized by loss of normal atonia with REM sleep. Approximately 50% of affected individuals will develop PD or dementia within 10 years. This provides an unprecedented opportunity to test potential predictive markers before clinical disease onset. The results of marker testing in idiopathic RBD with its implications for disease prediction will be detailed." 
}, { "pmid": "23711400", "title": "Unveiling relevant non-motor Parkinson's disease severity symptoms using a machine learning approach.", "abstract": "OBJECTIVE\nIs it possible to predict the severity staging of a Parkinson's disease (PD) patient using scores of non-motor symptoms? This is the kickoff question for a machine learning approach to classify two widely known PD severity indexes using individual tests from a broad set of non-motor PD clinical scales only.\n\n\nMETHODS\nThe Hoehn & Yahr index and clinical impression of severity index are global measures of PD severity. They constitute the labels to be assigned in two supervised classification problems using only non-motor symptom tests as predictor variables. Such predictors come from a wide range of PD symptoms, such as cognitive impairment, psychiatric complications, autonomic dysfunction or sleep disturbance. The classification was coupled with a feature subset selection task using an advanced evolutionary algorithm, namely an estimation of distribution algorithm.\n\n\nRESULTS\nResults show how five different classification paradigms using a wrapper feature selection scheme are capable of predicting each of the class variables with estimated accuracy in the range of 72-92%. In addition, classification into the main three severity categories (mild, moderate and severe) was split into dichotomic problems where binary classifiers perform better and select different subsets of non-motor symptoms. The number of jointly selected symptoms throughout the whole process was low, suggesting a link between the selected non-motor symptoms and the general severity of the disease.\n\n\nCONCLUSION\nQuantitative results are discussed from a medical point of view, reflecting a clear translation to the clinical manifestations of PD. 
Moreover, results include a brief panel of non-motor symptoms that could help clinical practitioners to identify patients who are at different stages of the disease from a limited set of symptoms, such as hallucinations, fainting, inability to control body sphincters or believing in unlikely facts." }, { "pmid": "27184740", "title": "Modified serpinA1 as risk marker for Parkinson's disease dementia: Analysis of baseline data.", "abstract": "Early detection of dementia in Parkinson disease is a prerequisite for preventive therapeutic approaches. Modified serpinA1 in cerebrospinal fluid (CSF) was suggested as an early biomarker for differentiation between Parkinson patients with (PDD) or without dementia (PD). Within this study we aimed to further explore the diagnostic value of serpinA1. We applied a newly developed nanoscale method for the detection of serpinA1 based on automated capillary isoelectric focusing (CIEF). A clinical sample of 102 subjects including neurologically healthy controls (CON), PD and PDD patients was investigated. Seven serpinA1 isoforms of different charge were detected in CSF from all three diagnostic groups. The mean CSF signals of the most acidic serpinA1 isoform differed significantly (p < 0.01) between PDD (n = 29) and PD (n = 37) or CON (n = 36). Patients above the cut-off of 6.4 have a more than six times higher risk for an association with dementia compared to patients below the cut off. We propose this serpinA1 CIEF-immunoassay as a novel tool in predicting cognitive impairment in PD patients and therefore for patient stratification in therapeutic trials." }, { "pmid": "22387368", "title": "Neurotransmitter receptors and cognitive dysfunction in Alzheimer's disease and Parkinson's disease.", "abstract": "Cognitive dysfunction is one of the most typical characteristics in various neurodegenerative diseases such as Alzheimer's disease and Parkinson's disease (advanced stage). 
Although several mechanisms like neuronal apoptosis and inflammatory responses have been recognized to be involved in the pathogenesis of cognitive dysfunction in these diseases, recent studies on neurodegeneration and cognitive dysfunction have demonstrated a significant impact of receptor modulation on cognitive changes. The pathological alterations in various receptors appear to contribute to cognitive impairment and/or deterioration with correlation to diversified mechanisms. This article recapitulates the present understandings and concepts underlying the modulation of different receptors in human beings and various experimental models of Alzheimer's disease and Parkinson's disease as well as a conceptual update on the underlying mechanisms. Specific roles of serotonin, adrenaline, acetylcholine, dopamine receptors, and N-methyl-D-aspartate receptors in Alzheimer's disease and Parkinson's disease will be interactively discussed. Complex mechanisms involved in their signaling pathways in the cognitive dysfunction associated with the neurodegenerative diseases will also be addressed. Substantial evidence has suggested that those receptors are crucial neuroregulators contributing to cognitive pathology and complicated correlations exist between those receptors and the expression of cognitive capacities. The pathological alterations in the receptors would, therefore, contribute to cognitive impairments and/or deterioration in Alzheimer's disease and Parkinson's disease. Future research may shed light on new clues for the treatment of cognitive dysfunction in neurodegenerative diseases by targeting specific alterations in these receptors and their signal transduction pathways in the frontal-striatal, fronto-striato-thalamic, and mesolimbic circuitries." 
}, { "pmid": "23154271", "title": "Serum uric acid in patients with Parkinson's disease and vascular parkinsonism: a cross-sectional study.", "abstract": "BACKGROUND\nElevation of serum uric acid (UA) is correlated with a decreased risk of Parkinson's disease (PD); however, the association and clinical relevance of serum UA levels in patients with PD and vascular parkinsonism (VP) are unknown.\n\n\nOBJECTIVE\nWe performed a cross-sectional study of 160 Chinese patients with PD and VP to determine whether UA levels in patients could predict the outcomes.\n\n\nMETHODS\nSerum UA levels were divided into quartiles and the association between UA and the severity of PD or VP was investigated in each quartile.\n\n\nRESULTS\nThe serum levels of UA in PD were significantly lower than those in normal subjects and VP. The serum UA levels in PD patients were significantly correlated with some clinical parameters. Strong correlations were observed in male PD patients, but significant correlations were observed only between UA and the non-motor symptoms (NMS) of burden of sleep/fatigue and mood in female PD patients. PD patients in the lowest quartile of serum UA levels had significant correlations between UA and the unified Parkinson's disease rating scale, the modified Hoehn and Yahr staging scale and NMS burden for attention/memory.\n\n\nCONCLUSION\nOur findings support the hypothesis that subjects with low serum UA levels may be more prone to developing PD and indicate that the inverse relationship between UA and severity of PD was robust for men but weak for women. Our results strongly imply that either low serum UA level is a deteriorative predictor or that serum UA level serves as an indirect biomarker of prediction in PD but not in VP patients." 
}, { "pmid": "21556377", "title": "The combination of homocysteine and C-reactive protein predicts the outcomes of Chinese patients with Parkinson's disease and vascular parkinsonism.", "abstract": "BACKGROUND\nThe elevation of plasma homocysteine (Hcy) and C-reactive protein (CRP) has been correlated to an increased risk of Parkinson's disease (PD) or vascular diseases. The association and clinical relevance of a combined assessment of Hcy and CRP levels in patients with PD and vascular parkinsonism (VP) are unknown.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nWe performed a cross-sectional study of 88 Chinese patients with PD and VP using a clinical interview and the measurement of plasma Hcy and CRP to determine if Hcy and CRP levels in patients may predict the outcomes of the motor status, non-motor symptoms (NMS), disease severity, and cognitive declines. Each patient's NMS, cognitive deficit, disease severity, and motor status were assessed by the Nonmotor Symptoms Scale (NMSS), Mini-Mental State Examination (MMSE), the modified Hoehn and Yahr staging scale (H&Y), and the unified Parkinson's disease rating scale part III (UPDRS III), respectively. We found that 100% of patients with PD and VP presented with NMS. The UPDRS III significantly correlated with CRP (P = 0.011) and NMSS (P = 0.042) in PD patients. The H&Y was also correlated with Hcy (P = 0.002), CRP (P = 0.000), and NMSS (P = 0.023) in PD patients. In VP patients, the UPDRS III and H&Y were not significantly associated with NMSS, Hcy, CRP, or MMSE. Strong correlations were observed between Hcy and NMSS as well as between CRP and NMSS in PD and VP.\n\n\nCONCLUSIONS/SIGNIFICANCE\nOur findings support the hypothesis that Hcy and CRP play important roles in the pathogenesis of PD. The combination of Hcy and CRP may be used to assess the progression of PD and VP. Whether or not anti-inflammatory medication could be used in the management of PD and VP will produce an interesting topic for further research." 
}, { "pmid": "26618044", "title": "Low Cerebral Glucose Metabolism: A Potential Predictor for the Severity of Vascular Parkinsonism and Parkinson's Disease.", "abstract": "This study explored the association between cerebral metabolic rates of glucose (CMRGlc) and the severity of Vascular Parkinsonism (VP) and Parkinson's disease (PD). A cross-sectional study was performed to compare CMRGlc in normal subjects vs. VP and PD patients. Twelve normal subjects, 22 VP, and 11 PD patients were evaluated with the H&Y and MMSE, and underwent 18F-FDG measurements. Pearson's correlations were used to identify potential associations between the severity of VP/PD and CMRGlc. A pronounced reduction of CMRGlc in the frontal lobe and caudate putamen was detected in patients with VP and PD when compared with normal subjects. The VP patients displayed a slight CMRGlc decrease in the caudate putamen and frontal lobe in comparison with PD patients. These decreases in CMRGlc in the frontal lobe and caudate putamen were significantly correlated with the VP patients' H&Y, UPDRS II, UPDRS III, MMSE, cardiovascular, and attention/memory scores. Similarly, significant correlations were observed in patients with PD. This is the first clinical study finding strong evidence for an association between low cerebral glucose metabolism and the severity of VP and PD. Our findings suggest that these changes in glucose metabolism in the frontal lobe and caudate putamen may underlie the pathophysiological mechanisms of VP and PD. As the scramble to find imaging biomarkers or predictors of the disease intensifies, a better understanding of the roles of cerebral glucose metabolism may give us insight into the pathogenesis of VP and PD." 
}, { "pmid": "25623333", "title": "Polygenic determinants of Parkinson's disease in a Chinese population.", "abstract": "It has been reported that some single-nucleotide polymorphisms (SNPs) are associated with the risk of Parkinson's disease (PD), but whether a combination of these SNPs would have a stronger association with PD than any individual SNP is unknown. Sixteen SNPs located in the 8 genes and/or loci (SNCA, LRRK2, MAPT, GBA, HLA-DR, BST1, PARK16, and PARK17) were analyzed in a Chinese cohort consisting of 1061 well-characterized PD patients and 1066 control subjects from Central South of Mainland China. We found that Rep1, rs356165, and rs11931074 in SNCA gene; G2385R in LRRK2 gene; rs4698412 in BST1 gene; rs1564282 in PARK17; and L444P in GBA gene were associated with PD with adjustment of sex and age (p < 0.05) in the analysis of 16 variants. PD risk increased when Rep1 and rs11931074, G2385R, rs1564282, rs4698412; rs11931074 and G2385R, rs1564282, rs4698412; G2385R and rs1564282, rs4698412; and rs1564282 and rs4698412 were combined for the association analysis. In addition, PD risk increased cumulatively with the increasing number of variants (odds ratio for carrying 3 variants, 3.494). In summary, we confirmed that Rep1, rs356165, and rs11931074 in SNCA gene, G2385R in LRRK2 gene, rs4698412 in BST1 gene, rs1564282 in PARK17, and L444P in GBA gene have an independent and combined significant association with PD. SNPs in these 4 genes have a cumulative effect with PD." }, { "pmid": "22387592", "title": "Speech impairment in a large sample of patients with Parkinson's disease.", "abstract": "This study classified speech impairment in 200 patients with Parkinson's disease (PD) into five levels of overall severity and described the corresponding type (voice, articulation, fluency) and extent (rated on a five-point scale) of impairment for each level. 
From two-minute conversational speech samples, parameters of voice, fluency and articulation were assessed by two trained-raters. Voice was found to be the leading deficit, most frequently affected and impaired to a greater extent than other features in the initial stages. Articulatory and fluency deficits manifested later, articulatory impairment matching voice impairment in frequency and extent at the `Severe' stage. At the final stage of `Profound' impairment, articulation was the most frequently impaired feature at the lowest level of performance. This study illustrates the prominence of voice and articulatory speech motor control deficits, and draws parallels with deficits of motor set and motor set instability in skeletal controls of gait and handwriting." }, { "pmid": "25064009", "title": "Large-scale meta-analysis of genome-wide association data identifies six new risk loci for Parkinson's disease.", "abstract": "We conducted a meta-analysis of Parkinson's disease genome-wide association studies using a common set of 7,893,274 variants across 13,708 cases and 95,282 controls. Twenty-six loci were identified as having genome-wide significant association; these and 6 additional previously reported loci were then tested in an independent set of 5,353 cases and 5,551 controls. Of the 32 tested SNPs, 24 replicated, including 6 newly identified loci. Conditional analyses within loci showed that four loci, including GBA, GAK-DGKQ, SNCA and the HLA region, contain a secondary independent risk variant. In total, we identified and replicated 28 independent risk variants for Parkinson's disease across 24 loci. Although the effect of each individual locus was small, risk profile analysis showed substantial cumulative risk in a comparison of the highest and lowest quintiles of genetic risk (odds ratio (OR) = 3.31, 95% confidence interval (CI) = 2.55-4.30; P = 2 × 10(-16)). We also show six risk loci associated with proximal gene expression or DNA methylation." 
}, { "pmid": "12777365", "title": "Incidence of Parkinson's disease: variation by age, gender, and race/ethnicity.", "abstract": "The goal of this study was to estimate the incidence of Parkinson's disease by age, gender, and ethnicity. Newly diagnosed Parkinson's disease cases in 1994-1995 were identified among members of the Kaiser Permanente Medical Care Program of Northern California, a large health maintenance organization. Each case met modified standardized criteria/Hughes diagnostic criteria as applied by a movement disorder specialist. Incidence rates per 100,000 person-years were calculated using the Kaiser Permanente membership information as the denominator and adjusted for age and/or gender using the direct method of standardization. A total of 588 newly diagnosed (incident) cases of Parkinson's disease were identified, which gave an overall annualized age- and gender-adjusted incidence rate of 13.4 per 100,000 (95% confidence interval (CI): 11.4, 15.5). The incidence rapidly increased over the age of 60 years, with only 4% of the cases being under the age of 50 years. The rate for men (19.0 per 100,000, 95% CI: 16.1, 21.8) was 91% higher than that for women (9.9 per 100,000, 95% CI: 7.6, 12.2). The age- and gender-adjusted rate per 100,000 was highest among Hispanics (16.6, 95% CI: 12.0, 21.3), followed by non-Hispanic Whites (13.6, 95% CI: 11.5, 15.7), Asians (11.3, 95% CI: 7.2, 15.3), and Blacks (10.2, 95% CI: 6.4, 14.0). These data suggest that the incidence of Parkinson's disease varies by race/ethnicity." }, { "pmid": "22656184", "title": "Musculoskeletal problems as an initial manifestation of Parkinson's disease: a retrospective study.", "abstract": "OBJECTIVE\nThe purpose of this study was to review the prevalence of musculoskeletal pain in the prodromal phase of PD, before the PD diagnosis is made.\n\n\nMETHODS\nA retrospective review of 82 PD patients was performed. 
Hospital inpatient notes and outpatient clinic admission notes were reviewed. The initial complaints prompting patients to seek medical attention were noted, as were the initial diagnoses. The symptoms were considered retrospectively to be associated with PD.\n\n\nRESULTS\nMusculoskeletal pain was present as a prodromal PD symptom in 27 (33%) cases initially diagnosed with osteoarthritis, degenerative spinal disease, and frozen shoulder. The mean time from the initial symptom appearance to dopaminergic treatment was 6.6 years in the musculoskeletal pain group and 2.3 years in the group with typical PD signs. Significant improvement of musculoskeletal pain after the initiation of dopaminergic treatment was present in 23 (85%) cases.\n\n\nCONCLUSIONS\nOf the PD patients who went on to develop motor features of PD, one third manifested musculoskeletal pain as the initial symptom. A good response to L-DOPA therapy was seen in 85% of cases presenting with musculoskeletal pain. Our findings suggest that musculoskeletal pain may be a significant feature in earlier PD stages." }, { "pmid": "22733427", "title": "Rapid eye movement sleep behavior disorder and subtypes of Parkinson's disease.", "abstract": "Numerous studies have explored the potential relationship between rapid eye movement sleep behavior disorder (RBD) and manifestations of PD. Our aim was to perform an expanded extensive assessment of motor and nonmotor manifestations in PD to identify whether RBD was associated with differences in the nature and severity of these manifestations. PD patients underwent polysomnography (PSG) to diagnose the presence of RBD. Participants then underwent an extensive evaluation by a movement disorders specialist blinded to PSG results. 
Measures of disease severity, quantitative motor indices, motor subtypes, therapy complications, and autonomic, psychiatric, visual, and olfactory dysfunction were assessed and compared using regression analysis, adjusting for disease duration, age, and sex. Of 98 included patients, 54 had RBD and 44 did not. PD patients with RBD were older (P = 0.034) and were more likely to be male (P < 0.001). On regression analysis, the most consistent links between RBD and PD were a higher systolic blood pressure (BP) change while standing (-23.9 ± 13.9 versus -3.5 ± 10.9; P < 0.001), a higher orthostatic symptom score (0.89 ± 0.82 versus 0.44 ± 0.66; P = 0.003), and a higher frequency of freezing (43% versus 14%; P = 0.011). A systolic BP drop >10 could identify PD patients with RBD with 81% sensitivity and 86% specificity. In addition, there was a probable relationship between RBD and nontremor predominant subtype of PD (P = 0.04), increased frequency of falls (P = 0.009), and depression (P = 0.009). Our results support previous findings that RBD is a multifaceted phenomenon in PD. Patients with PD who have RBD tend to have specific motor and nonmotor manifestations, especially orthostatic hypotension." }, { "pmid": "22502984", "title": "A PC-based system for predicting movement from deep brain signals in Parkinson's disease.", "abstract": "There is much current interest in deep brain stimulation (DBS) of the subthalamic nucleus (STN) for the treatment of Parkinson's disease (PD). This type of surgery has enabled unprecedented access to deep brain signals in the awake human. In this paper we present an easy-to-use computer based system for recording, displaying, archiving, and processing electrophysiological signals from the STN. The system was developed for predicting self-paced hand-movements in real-time via the online processing of the electrophysiological activity of the STN. It is hoped that such a computerised system might have clinical and experimental applications. 
For example, those sites within the STN most relevant to the processing of voluntary movement could be identified through the predictive value of their activities with respect to the timing of future movement." }, { "pmid": "23182747", "title": "Accurate telemonitoring of Parkinson's disease diagnosis using robust inference system.", "abstract": "This work presents more precise computational methods for improving the diagnosis of Parkinson's disease based on the detection of dysphonia. New methods are presented for enhanced evaluation and recognize Parkinson's disease affected patients at early stage. Analysis is performed with significant level of error tolerance rate and established our results with corrected T-test. Here new ensembles and other machine learning methods consisting of multinomial logistic regression classifier with Haar wavelets transformation as projection filter that outperform logistic regression is used. Finally a novel and reliable inference system is presented for early recognition of people affected by this disease and presents a new measure of the severity of the disease. Feature selection method is based on Support Vector Machines and ranker search method. Performance analysis of each model is compared to the existing methods and examines the main advancements and concludes with propitious results. Reliable methods are proposed for treating Parkinson's disease that includes sparse multinomial logistic regression, Bayesian network, Support Vector Machines, Artificial Neural Networks, Boosting methods and their ensembles. The study aim at improving the quality of Parkinson's disease treatment by tracking them and reinforce the viability of cost effective, regular and precise telemonitoring application." 
}, { "pmid": "24485390", "title": "A new hybrid intelligent system for accurate detection of Parkinson's disease.", "abstract": "Elderly people are commonly affected by Parkinson's disease (PD) which is one of the most common neurodegenerative disorders due to the loss of dopamine-producing brain cells. People with PD's (PWP) may have difficulty in walking, talking or completing other simple tasks. Variety of medications is available to treat PD. Recently, researchers have found that voice signals recorded from the PWP is becoming a useful tool to differentiate them from healthy controls. Several dysphonia features, feature reduction/selection techniques and classification algorithms were proposed by researchers in the literature to detect PD. In this paper, hybrid intelligent system is proposed which includes feature pre-processing using Model-based clustering (Gaussian mixture model), feature reduction/selection using principal component analysis (PCA), linear discriminant analysis (LDA), sequential forward selection (SFS) and sequential backward selection (SBS), and classification using three supervised classifiers such as least-square support vector machine (LS-SVM), probabilistic neural network (PNN) and general regression neural network (GRNN). PD dataset was used from University of California-Irvine (UCI) machine learning database. The strength of the proposed method has been evaluated through several performance measures. The experimental results show that the combination of feature pre-processing, feature reduction/selection methods and classification gives a maximum classification accuracy of 100% for the Parkinson's dataset." }, { "pmid": "21547504", "title": "SVM feature selection based rotation forest ensemble classifiers to improve computer-aided diagnosis of Parkinson disease.", "abstract": "Parkinson disease (PD) is an age-related deterioration of certain nerve systems, which affects movement, balance, and muscle control of clients. 
PD is one of the common diseases which affect 1% of people older than 60 years. A new classification scheme based on support vector machine (SVM) selected features to train rotation forest (RF) ensemble classifiers is presented for improving diagnosis of PD. The dataset contains records of voice measurements from 31 people, 23 with PD and each record in the dataset is defined with 22 features. The diagnosis model first makes use of a linear SVM to select ten most relevant features from 22. As a second step of the classification model, six different classifiers are trained with the subset of features. Subsequently, at the third step, the accuracies of classifiers are improved by the utilization of RF ensemble classification strategy. The results of the experiments are evaluated using three metrics; classification accuracy (ACC), Kappa Error (KE) and Area under the Receiver Operating Characteristic (ROC) Curve (AUC). Performance measures of two base classifiers, i.e. KStar and IBk, demonstrated an apparent increase in PD diagnosis accuracy compared to similar studies in literature. After all, application of RF ensemble classification scheme improved PD diagnosis in 5 of 6 classifiers significantly. We, numerically, obtained about 97% accuracy in RF ensemble of IBk (a K-Nearest Neighbor variant) algorithm, which is a quite high performance for Parkinson disease diagnosis." }, { "pmid": "21493051", "title": "A fuzzy-based data transformation for feature extraction to increase classification performance with small medical data sets.", "abstract": "OBJECTIVE\nMedical data sets are usually small and have very high dimensionality. Too many attributes will make the analysis less efficient and will not necessarily increase accuracy, while too few data will decrease the modeling stability. 
Consequently, the main objective of this study is to extract the optimal subset of features to increase analytical performance when the data set is small.\n\n\nMETHODS\nThis paper proposes a fuzzy-based non-linear transformation method to extend classification related information from the original data attribute values for a small data set. Based on the new transformed data set, this study applies principal component analysis (PCA) to extract the optimal subset of features. Finally, we use the transformed data with these optimal features as the input data for a learning tool, a support vector machine (SVM). Six medical data sets: Pima Indians' diabetes, Wisconsin diagnostic breast cancer, Parkinson disease, echocardiogram, BUPA liver disorders dataset, and bladder cancer cases in Taiwan, are employed to illustrate the approach presented in this paper.\n\n\nRESULTS\nThis research uses the t-test to evaluate the classification accuracy for a single data set; and uses the Friedman test to show the proposed method is better than other methods over the multiple data sets. The experiment results indicate that the proposed method has better classification performance than either PCA or kernel principal component analysis (KPCA) when the data set is small, and suggest creating new purpose-related information to improve the analysis performance.\n\n\nCONCLUSION\nThis paper has shown that feature extraction is important as a function of feature selection for efficient data analysis. When the data set is small, using the fuzzy-based transformation method presented in this work to increase the information available produces better results than the PCA and KPCA approaches." }, { "pmid": "26019610", "title": "Clustering performance comparison using K-means and expectation maximization algorithms.", "abstract": "Clustering is an important means of data mining based on separating data categories by similar features. 
Unlike the classification algorithm, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are the K-means and the expectation maximization (EM) algorithm. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis cannot guarantee the accuracy of the results. In this paper, the logistic regression analysis is applied to EM clusters and the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results." } ]
Scientific Reports
27694950
PMC5046183
10.1038/srep33985
Multi-Pass Adaptive Voting for Nuclei Detection in Histopathological Images
Nuclei detection is often a critical initial step in the development of computer aided diagnosis and prognosis schemes in the context of digital pathology images. While a number of nuclei detection methods have been proposed over the last few years, most of these approaches make idealistic assumptions about the staining quality of the tissue. In this paper, we present a new Multi-Pass Adaptive Voting (MPAV) method for nuclei detection which is specifically geared towards images with poor quality staining and noise on account of tissue preparation artifacts. The MPAV utilizes the symmetric property of the nuclear boundary and adaptively selects gradients from edge fragments to vote for potential nucleus locations. The MPAV was evaluated in three cohorts with different staining methods: Hematoxylin & Eosin, CD31 & Hematoxylin, and Ki-67, where most of the nuclei were unevenly and imprecisely stained. Across a total of 47 images and nearly 17,700 manually labeled nuclei serving as the ground truth, MPAV was able to achieve a superior performance, with an area under the precision-recall curve (AUC) of 0.73. Additionally, MPAV outperformed three state-of-the-art nuclei detection methods: a single-pass voting method, a multi-pass voting method, and a deep learning based method.
Previous Related Work and Novel Contributions
Table 1 enumerates some recent techniques for nuclei detection. Most approaches tend to use image-derived cues, such as color/intensity25,28,29,30,31, edges19,21,24,32,33,34, texture35, self-learned features13,36, and symmetry22,24,27,37. The color- and texture-based methods require a consistent color/texture appearance for the individual nuclei in order to work optimally. The method presented in ref. 31 applied the Laplacian of Gaussian (LoG) filter to detect the initial seed points representing nuclei. However, due to the uneven distribution of nuclear stain, the response of the LoG filter may not reflect the true nuclear center. Filipczuk et al. applied the circular Hough transform to detect the nuclear center34; however, the circular Hough transform assumes that the shape of the underlying region of interest can be represented by a parametric function, i.e., a circle or ellipse. In poorly stained tissue images, the circular Hough transform is likely to fail due to the great variations in the appearance of nuclear edges and the presence of clusters of edge fragments.
Recently, there has been substantial interest in developing and employing deep learning (DL) based methods for nuclei detection in histology images13,36. DL methods are supervised classification methods that typically employ multiple layers of neural networks for object detection and recognition, and they can be easily extended to many different classification tasks. A number of DL-based approaches have been proposed for image analysis and classification applications in digital pathology13,36. For instance, Xu et al. proposed a stacked sparse autoencoder (SSAE) to detect nuclei in breast cancer tissue images, showing that the DL scheme was able to outperform hand-crafted features on multi-site/stain histology images.
However, DL methods require a large number of dedicated training samples, since the learning process involves a large number of parameters to be learned. These approaches therefore tend to be heavily biased and sensitive to the choice of the training set.
The key idea behind voting-based techniques is to cluster circular symmetries along the radial line/inverse gradient direction on an object's contour in order to infer the center of the object of interest. An illustrative example is shown in Fig. 2(a,b). Figure 2(a) shows a synthetic phantom nucleus with the foreground in grey and the background in white. A few sample pixels/points on the nuclear contour, with their inverse gradient directions, are shown as blue arrows in Fig. 2. Figure 2(b) illustrates the voting procedure with three selected pixels on the contour. Note that for each pixel, a dotted triangle is used to represent an active voting area. The region where the three voting areas converge can be thought of as a region with a high likelihood of containing a nuclear center.
Several effective symmetric voting-based techniques have been developed employing variants of the same principle. Parvin et al.27 proposed a multi-pass voting (MPV) method to calculate the centroids of overlapping nuclei. Qi et al.22 proposed a single-pass voting (SPV) technique followed by a mean-shift procedure to calculate the seed points of overlapping nuclei. To further improve the efficiency of this approach, Xu et al.24 proposed a technique based on an elliptic descriptor and improved single-pass voting for nuclei via a seed-point detection scheme; this initial nuclear detection step was followed by a marker-controlled watershed algorithm to segment nuclei in H&E stained histology images. In practice, the MPV procedure tends to yield more accurate nuclei detection results than the SPV procedure.
The SPV procedure may help improve the overall efficiency of nuclear detection24; however, it needs an additional mean-shift clustering step to identify the local maxima in the voting map. This additional clustering step requires estimating additional parameters and increases overall model complexity.
Since existing voting-based techniques typically utilize edge features, nuclei with hollow interiors can result in incorrect voting and hence generate spurious detections. One example is shown in Fig. 2(c), which shows a color image, its corresponding edge map, and one of the nuclei, denoted as A. Nucleus A has a hollow interior, so it has two contours, an inner and an outer one, which results in two edge fragments in the edge map (see the second row of Fig. 2(c)). For the outer nuclear contour the inverse gradients point inwards, whereas for the inner nuclear contour they point outwards. As one may expect, the inverse gradients obtained from the inner contour contribute minimally towards identifying the nuclear centroid (because the active voting area falls outside the nucleus, while the nuclear center should be within the nucleus). Another synthetic example of a nucleus with a hollow interior is shown in Fig. 2(c), and a few inverse gradient directions are drawn on the inner contour. In most cases, those inverse gradients from the inner contour will lead to spurious results in regions of clustered nuclei. In Fig. 2(e), three synthetic nuclei with hollow regions are shown. Due to the vicinity of these three nuclei, the highlighted red circle region receives a large number of votes and could thus lead to a false positive detection. In a later section, we will show that in real histopathologic images, existing voting-based techniques tend to generate many false positive detection results.
In this paper, we present a Multi-Pass Adaptive Voting (MPAV) method.
The MPAV is a voting-based technique which adaptively selects and refines the gradient information from the image to infer the locations of nuclear centroids. The schematic for the MPAV is illustrated in Fig. 3. The MPAV consists of three modules: gradient field generation, refinement of the gradient field, and multi-pass voting. Given a color image, a gradient field is generated using image smoothing and edge detection. In the second module, the gradient field is refined: gradients whose direction leads away from the center of a nucleus are removed or corrected. The refined gradient field is then utilized in a multi-pass voting module to guide each edge pixel in generating the nuclear voting map. Finally, a global threshold is applied to the voting map to obtain candidate nuclear centroids. The details of each module are discussed in the next section, and the notations and symbols used in this paper are summarized in Table 2.
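To make the voting idea concrete, the following sketch is a hypothetical, much-simplified single-pass variant in Python, not the published MPAV implementation: it smooths the image, casts votes along inverse-gradient rays from edge pixels, and takes the peak of the voting map as a candidate nuclear centroid. The radii, blur iterations, thresholds, and the synthetic test image are all assumptions made for illustration; MPAV additionally refines the gradient field and votes over multiple passes.

```python
import numpy as np

def voting_map(image, r_min=5.0, r_max=15.0, n_radii=5, n_blur=3, edge_thresh=0.05):
    """Accumulate votes along inverse-gradient rays cast from edge pixels.

    Simplified single-pass sketch of radial-symmetry voting; parameter
    values here are illustrative assumptions, not the published ones.
    """
    img = np.asarray(image, dtype=float)
    # Crude 5-point smoothing (stands in for module one's image smoothing;
    # np.roll wraps at the borders, which is acceptable for this sketch).
    for _ in range(n_blur):
        img = (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
                   + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0
    gy, gx = np.gradient(img)
    mag = np.hypot(gy, gx)
    ys, xs = np.nonzero(mag > edge_thresh)          # edge pixels
    # Unit inverse-gradient directions: for dark nuclei on a bright
    # background these point towards the nuclear interior.
    uy = -gy[ys, xs] / mag[ys, xs]
    ux = -gx[ys, xs] / mag[ys, xs]
    votes = np.zeros_like(img)
    h, w = votes.shape
    for r in np.linspace(r_min, r_max, n_radii):    # sample radii along each ray
        vy = np.clip(np.rint(ys + r * uy).astype(int), 0, h - 1)
        vx = np.clip(np.rint(xs + r * ux).astype(int), 0, w - 1)
        np.add.at(votes, (vy, vx), 1.0)             # unbuffered accumulation
    return votes

# Synthetic example: one dark "nucleus" (disk of radius 12) centred at
# (32, 32) on a bright background.
yy, xx = np.mgrid[0:64, 0:64]
img = (np.hypot(yy - 32, xx - 32) > 12).astype(float)
votes = voting_map(img)
peak = np.unravel_index(np.argmax(votes), votes.shape)  # lands near (32, 32)
```

A refined gradient field, as in MPAV's second module, would remove or flip the misleading gradients from hollow nuclear interiors before this accumulation, which is what suppresses the false positives discussed above.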
[ "26186772", "26167385", "24505786", "24145650", "23392336", "23157334", "21333490", "20491597", "26208307", "22614727", "25203987", "22498689", "21383925", "20172780", "22167559", "25192578", "24608059", "23221815", "20359767", "19884070", "20656653", "21947866", "23912498", "18288618", "24557687", "25958195", "21869365", "24704158" ]
[ { "pmid": "26186772", "title": "Feature Importance in Nonlinear Embeddings (FINE): Applications in Digital Pathology.", "abstract": "Quantitative histomorphometry (QH) refers to the process of computationally modeling disease appearance on digital pathology images by extracting hundreds of image features and using them to predict disease presence or outcome. Since constructing a robust and interpretable classifier is challenging in a high dimensional feature space, dimensionality reduction (DR) is often implemented prior to classifier construction. However, when DR is performed it can be challenging to quantify the contribution of each of the original features to the final classification result. We have previously presented a method for scoring features based on their importance for classification on an embedding derived via principal components analysis (PCA). However, nonlinear DR involves the eigen-decomposition of a kernel matrix rather than the data itself, compounding the issue of classifier interpretability. In this paper we present feature importance in nonlinear embeddings (FINE), an extension of our PCA-based feature scoring method to kernel PCA (KPCA), as well as several NLDR algorithms that can be cast as variants of KPCA. FINE is applied to four digital pathology datasets to identify key QH features for predicting the risk of breast and prostate cancer recurrence. Measures of nuclear and glandular architecture and clusteredness were found to play an important role in predicting the likelihood of recurrence of both breast and prostate cancers. Compared to the t-test, Fisher score, and Gini index, FINE was able to identify a stable set of features that provide good classification accuracy on four publicly available datasets from the NIPS 2003 Feature Selection Challenge." 
}, { "pmid": "26167385", "title": "Content-based image retrieval of digitized histopathology in boosted spectrally embedded spaces.", "abstract": "CONTEXT\nContent-based image retrieval (CBIR) systems allow for retrieval of images from within a database that are similar in visual content to a query image. This is useful for digital pathology, where text-based descriptors alone might be inadequate to accurately describe image content. By representing images via a set of quantitative image descriptors, the similarity between a query image with respect to archived, annotated images in a database can be computed and the most similar images retrieved. Recently, non-linear dimensionality reduction methods have become popular for embedding high-dimensional data into a reduced-dimensional space while preserving local object adjacencies, thereby allowing for object similarity to be determined more accurately in the reduced-dimensional space. However, most dimensionality reduction methods implicitly assume, in computing the reduced-dimensional representation, that all features are equally important.\n\n\nAIMS\nIn this paper we present boosted spectral embedding(BoSE), which utilizes a boosted distance metric to selectively weight individual features (based on training data) to subsequently map the data into a reduced-dimensional space.\n\n\nSETTINGS AND DESIGN\nBoSE is evaluated against spectral embedding (SE) (which employs equal feature weighting) in the context of CBIR of digitized prostate and breast cancer histopathology images.\n\n\nMATERIALS AND METHODS\nThe following datasets, which were comprised of a total of 154 hematoxylin and eosin stained histopathology images, were used: (1) Prostate cancer histopathology (benign vs. malignant), (2) estrogen receptor (ER) + breast cancer histopathology (low vs. high grade), and (3) HER2+ breast cancer histopathology (low vs. 
high levels of lymphocytic infiltration).\n\n\nSTATISTICAL ANALYSIS USED\nWe plotted and calculated the area under precision-recall curves (AUPRC) and calculated classification accuracy using the Random Forest classifier.\n\n\nRESULTS\nBoSE outperformed SE both in terms of CBIR-based (area under the precision-recall curve) and classifier-based (classification accuracy) on average across all of the dimensions tested for all three datasets: (1) Prostate cancer histopathology (AUPRC: BoSE = 0.79, SE = 0.63; Accuracy: BoSE = 0.93, SE = 0.80), (2) ER + breast cancer histopathology (AUPRC: BoSE = 0.79, SE = 0.68; Accuracy: BoSE = 0.96, SE = 0.96), and (3) HER2+ breast cancer histopathology (AUPRC: BoSE = 0.54, SE = 0.44; Accuracy: BoSE = 0.93, SE = 0.91).\n\n\nCONCLUSION\nOur results suggest that BoSE could serve as an important tool for CBIR and classification of high-dimensional biomedical data." }, { "pmid": "24505786", "title": "Cell orientation entropy (COrE): predicting biochemical recurrence from prostate cancer tissue microarrays.", "abstract": "We introduce a novel feature descriptor to describe cancer cells called Cell Orientation Entropy (COrE). The main objective of this work is to employ COrE to quantitatively model disorder of cell/nuclear orientation within local neighborhoods and evaluate whether these measurements of directional disorder are correlated with biochemical recurrence (BCR) in prostate cancer (CaP) patients. COrE has a number of novel attributes that are unique to digital pathology image analysis. Firstly, it is the first rigorous attempt to quantitatively model cell/nuclear orientation. Secondly, it provides for modeling of local cell networks via construction of subgraphs. Thirdly, it allows for quantifying the disorder in local cell orientation via second order statistical features. 
We evaluated the ability of 39 COrE features to capture the characteristics of cell orientation in CaP tissue microarray (TMA) images in order to predict 10 year BCR in men with CaP following radical prostatectomy. Randomized 3-fold cross-validation via a random forest classifier evaluated on a combination of COrE and other nuclear features achieved an accuracy of 82.7 +/- 3.1% on a dataset of 19 BCR and 20 non-recurrence patients. Our results suggest that COrE features could be extended to characterize disease states in other histological cancer images in addition to prostate cancer." }, { "pmid": "24145650", "title": "A quantitative histomorphometric classifier (QuHbIC) identifies aggressive versus indolent p16-positive oropharyngeal squamous cell carcinoma.", "abstract": "Human papillomavirus-related (p16-positive) oropharyngeal squamous cell carcinoma patients develop recurrent disease, mostly distant metastasis, in approximately 10% of cases, and the remaining patients, despite cure, can have major morbidity from treatment. Identifying patients with aggressive versus indolent tumors is critical. Hematoxylin and eosin-stained slides of a microarray cohort of p16-positive oropharyngeal squamous cell carcinoma cases were digitally scanned. A novel cluster cell graph was constructed using the nuclei as vertices to characterize and measure spatial distribution and cell clustering. A series of topological features defined on each node of the subgraph were analyzed, and a random forest decision tree classifier was developed. The classifier (QuHbIC) was validated over 25 runs of 3-fold cross-validation using case subsets for independent training and testing. Nineteen (11.9%) of the 160 patients on the array developed recurrence. QuHbIC correctly predicted outcomes in 140 patients (87.5% accuracy). 
There were 23 positive patients, of whom 11 developed recurrence (47.8% positive predictive value), and 137 negative patients, of whom only 8 developed recurrence (94.2% negative predictive value). The best other predictive features were stage T4 (18 patients; 83.1% accuracy) and N3 nodal disease (10 patients; 88.6% accuracy). QuHbIC-positive patients had poorer overall, disease-free, and disease-specific survival (P<0.001 for each). In multivariate analysis, QuHbIC-positive patients still showed significantly poorer disease-free and disease-specific survival, independent of all other variables. In summary, using just tiny hematoxylin and eosin punches, a computer-aided histomorphometric classifier (QuHbIC) can strongly predict recurrence risk. With prospective validation, this testing may be useful to stratify patients into different treatment groups." }, { "pmid": "23392336", "title": "Multi-field-of-view framework for distinguishing tumor grade in ER+ breast cancer from entire histopathology slides.", "abstract": "Modified Bloom-Richardson (mBR) grading is known to have prognostic value in breast cancer (BCa), yet its use in clinical practice has been limited by intra- and interobserver variability. The development of a computerized system to distinguish mBR grade from entire estrogen receptor-positive (ER+) BCa histopathology slides will help clinicians identify grading discrepancies and improve overall confidence in the diagnostic result. In this paper, we isolate salient image features characterizing tumor morphology and texture to differentiate entire hematoxylin and eosin (H and E) stained histopathology slides based on mBR grade. The features are used in conjunction with a novel multi-field-of-view (multi-FOV) classifier--a whole-slide classifier that extracts features from a multitude of FOVs of varying sizes--to identify important image features at different FOV sizes. 
Image features utilized include those related to the spatial arrangement of cancer nuclei (i.e., nuclear architecture) and the textural patterns within nuclei (i.e., nuclear texture). Using slides from 126 ER+ patients (46 low, 60 intermediate, and 20 high mBR grade), our grading system was able to distinguish low versus high, low versus intermediate, and intermediate versus high grade patients with area under curve values of 0.93, 0.72, and 0.74, respectively. Our results suggest that the multi-FOV classifier is able to 1) successfully discriminate low, medium, and high mBR grade and 2) identify specific image features at different FOV sizes that are important for distinguishing mBR grade in H and E stained ER+ BCa histology slides." }, { "pmid": "23157334", "title": "Digital imaging in pathology: whole-slide imaging and beyond.", "abstract": "Digital imaging in pathology has undergone an exponential period of growth and expansion catalyzed by changes in imaging hardware and gains in computational processing. Today, digitization of entire glass slides at near the optical resolution limits of light can occur in 60 s. Whole slides can be imaged in fluorescence or by use of multispectral imaging systems. Computational algorithms have been developed for cytometric analysis of cells and proteins in subcellular locations by use of multiplexed antibody staining protocols. Digital imaging is unlocking the potential to integrate primary image features into high-dimensional genomic assays by moving microscopic analysis into the digital age. This review highlights the emerging field of digital pathology and explores the methods and analytic approaches being developed for the application and use of these methods in clinical care and research settings." 
}, { "pmid": "21333490", "title": "Computer-aided prognosis: predicting patient and disease outcome via quantitative fusion of multi-scale, multi-modal data.", "abstract": "Computer-aided prognosis (CAP) is a new and exciting complement to the field of computer-aided diagnosis (CAD) and involves developing and applying computerized image analysis and multi-modal data fusion algorithms to digitized patient data (e.g. imaging, tissue, genomic) for helping physicians predict disease outcome and patient survival. While a number of data channels, ranging from the macro (e.g. MRI) to the nano-scales (proteins, genes) are now being routinely acquired for disease characterization, one of the challenges in predicting patient outcome and treatment response has been in our inability to quantitatively fuse these disparate, heterogeneous data sources. At the Laboratory for Computational Imaging and Bioinformatics (LCIB)(1) at Rutgers University, our team has been developing computerized algorithms for high dimensional data and image analysis for predicting disease outcome from multiple modalities including MRI, digital pathology, and protein expression. Additionally, we have been developing novel data fusion algorithms based on non-linear dimensionality reduction methods (such as Graph Embedding) to quantitatively integrate information from multiple data sources and modalities with the overarching goal of optimizing meta-classifiers for making prognostic predictions. In this paper, we briefly describe 4 representative and ongoing CAP projects at LCIB. 
These projects include (1) an Image-based Risk Score (IbRiS) algorithm for predicting outcome of Estrogen receptor positive breast cancer patients based on quantitative image analysis of digitized breast cancer biopsy specimens alone, (2) segmenting and determining extent of lymphocytic infiltration (identified as a possible prognostic marker for outcome in human epidermal growth factor amplified breast cancers) from digitized histopathology, (3) distinguishing patients with different Gleason grades of prostate cancer (grade being known to be correlated to outcome) from digitized needle biopsy specimens, and (4) integrating protein expression measurements obtained from mass spectrometry with quantitative image features derived from digitized histopathology for distinguishing between prostate cancer patients at low and high risk of disease recurrence following radical prostatectomy." }, { "pmid": "20491597", "title": "Integrated diagnostics: a conceptual framework with examples.", "abstract": "With the advent of digital pathology, imaging scientists have begun to develop computerized image analysis algorithms for making diagnostic (disease presence), prognostic (outcome prediction), and theragnostic (choice of therapy) predictions from high resolution images of digitized histopathology. One of the caveats to developing image analysis algorithms for digitized histopathology is the ability to deal with highly dense, information rich datasets; datasets that would overwhelm most computer vision and image processing algorithms. Over the last decade, manifold learning and non-linear dimensionality reduction schemes have emerged as popular and powerful machine learning tools for pattern recognition problems. However, these techniques have thus far been applied primarily to classification and analysis of computer vision problems (e.g., face detection). 
In this paper, we discuss recent work by a few groups in the application of manifold learning methods to problems in computer aided diagnosis, prognosis, and theragnosis of digitized histopathology. In addition, we discuss some exciting recent developments in the application of these methods for multi-modal data fusion and classification; specifically the building of meta-classifiers by fusion of histological image and proteomic signatures for prostate cancer outcome prediction." }, { "pmid": "26208307", "title": "Stacked Sparse Autoencoder (SSAE) for Nuclei Detection on Breast Cancer Histopathology Images.", "abstract": "Automated nuclear detection is a critical step for a number of computer assisted pathology related image analysis algorithms such as for automated grading of breast cancer tissue specimens. The Nottingham Histologic Score system is highly correlated with the shape and appearance of breast cancer nuclei in histopathological images. However, automated nucleus detection is complicated by 1) the large number of nuclei and the size of high resolution digitized pathology images, and 2) the variability in size, shape, appearance, and texture of the individual nuclei. Recently there has been interest in the application of \"Deep Learning\" strategies for classification and analysis of big image data. Histopathology, given its size and complexity, represents an excellent use case for application of deep learning strategies. In this paper, a Stacked Sparse Autoencoder (SSAE), an instance of a deep learning strategy, is presented for efficient nuclei detection on high-resolution histopathological images of breast cancer. The SSAE learns high-level features from just pixel intensities alone in order to identify distinguishing features of nuclei. 
A sliding window operation is applied to each image in order to represent image patches via high-level features obtained via the auto-encoder, which are then subsequently fed to a classifier which categorizes each image patch as nuclear or non-nuclear. Across a cohort of 500 histopathological images (2200 × 2200) and approximately 3500 manually segmented individual nuclei serving as the groundtruth, SSAE was shown to have an improved F-measure 84.49% and an average area under Precision-Recall curve (AveP) 78.83%. The SSAE approach also out-performed nine other state of the art nuclear detection strategies." }, { "pmid": "22614727", "title": "Automated segmentation of the melanocytes in skin histopathological images.", "abstract": "In the diagnosis of skin melanoma by analyzing histopathological images, the detection of the melanocytes in the epidermis area is an important step. However, the detection of melanocytes in the epidermis area is difficult because other keratinocytes that are very similar to the melanocytes are also present. This paper proposes a novel computer-aided technique for segmentation of the melanocytes in the skin histopathological images. In order to reduce the local intensity variant, a mean-shift algorithm is applied for the initial segmentation of the image. A local region recursive segmentation algorithm is then proposed to filter out the candidate nuclei regions based on the domain prior knowledge. To distinguish the melanocytes from other keratinocytes in the epidermis area, a novel descriptor, named local double ellipse descriptor (LDED), is proposed to measure the local features of the candidate regions. The LDED uses two parameters: region ellipticity and local pattern characteristics to distinguish the melanocytes from the candidate nuclei regions. Experimental results on 28 different histopathological images of skin tissue with different zooming factors show that the proposed technique provides a superior performance." 
}, { "pmid": "25203987", "title": "Supervised multi-view canonical correlation analysis (sMVCCA): integrating histologic and proteomic features for predicting recurrent prostate cancer.", "abstract": "In this work, we present a new methodology to facilitate prediction of recurrent prostate cancer (CaP) following radical prostatectomy (RP) via the integration of quantitative image features and protein expression in the excised prostate. Creating a fused predictor from high-dimensional data streams is challenging because the classifier must 1) account for the \"curse of dimensionality\" problem, which hinders classifier performance when the number of features exceeds the number of patient studies and 2) balance potential mismatches in the number of features across different channels to avoid classifier bias towards channels with more features. Our new data integration methodology, supervised Multi-view Canonical Correlation Analysis (sMVCCA), aims to integrate infinite views of highdimensional data to provide more amenable data representations for disease classification. Additionally, we demonstrate sMVCCA using Spearman's rank correlation which, unlike Pearson's correlation, can account for nonlinear correlations and outliers. Forty CaP patients with pathological Gleason scores 6-8 were considered for this study. 21 of these men revealed biochemical recurrence (BCR) following RP, while 19 did not. For each patient, 189 quantitative histomorphometric attributes and 650 protein expression levels were extracted from the primary tumor nodule. The fused histomorphometric/proteomic representation via sMVCCA combined with a random forest classifier predicted BCR with a mean AUC of 0.74 and a maximum AUC of 0.9286. We found sMVCCA to perform statistically significantly (p < 0.05) better than comparative state-of-the-art data fusion strategies for predicting BCR. 
Furthermore, Kaplan-Meier analysis demonstrated improved BCR-free survival prediction for the sMVCCA-fused classifier as compared to histology or proteomic features alone." }, { "pmid": "22498689", "title": "An integrated region-, boundary-, shape-based active contour for multiple object overlap resolution in histological imagery.", "abstract": "Active contours and active shape models (ASM) have been widely employed in image segmentation. A major limitation of active contours, however, is in their 1) inability to resolve boundaries of intersecting objects and to 2) handle occlusion. Multiple overlapping objects are typically segmented out as a single object. On the other hand, ASMs are limited by point correspondence issues since object landmarks need to be identified across multiple objects for initial object alignment. ASMs are also constrained in that they can usually only segment a single object in an image. In this paper, we present a novel synergistic boundary and region-based active contour model that incorporates shape priors in a level set formulation with automated initialization based on watershed. We demonstrate an application of these synergistic active contour models using multiple level sets to segment nuclear and glandular structures on digitized histopathology images of breast and prostate biopsy specimens. Unlike previous related approaches, our model is able to resolve object overlap and separate occluded boundaries of multiple objects simultaneously. The energy functional of the active contour is comprised of three terms. The first term is the prior shape term, modeled on the object of interest, thereby constraining the deformation achievable by the active contour. The second term, a boundary-based term, detects object boundaries from image gradients. The third term drives the shape prior and the contour towards the object boundary based on region statistics. 
The results of qualitative and quantitative evaluation on 100 prostate and 14 breast cancer histology images for the task of detecting and segmenting nuclei and lymphocytes reveals that the model easily outperforms two state of the art segmentation schemes (geodesic active contour and Rousson shape-based model) and on average is able to resolve up to 91% of overlapping/occluded structures in the images." }, { "pmid": "21383925", "title": "Barriers and facilitators to adoption of soft copy interpretation from the user perspective: Lessons learned from filmless radiology for slideless pathology.", "abstract": "BACKGROUND\nAdoption of digital images for pathological specimens has been slower than adoption of digital images in radiology, despite a number of anticipated advantages for digital images in pathology. In this paper, we explore the factors that might explain this slower rate of adoption.\n\n\nMATERIALS AND METHOD\nSemi-structured interviews on barriers and facilitators to the adoption of digital images were conducted with two radiologists, three pathologists, and one pathologist's assistant.\n\n\nRESULTS\nBarriers and facilitators to adoption of digital images were reported in the areas of performance, workflow-efficiency, infrastructure, integration with other software, and exposure to digital images. The primary difference between the settings was that performance with the use of digital images as compared to the traditional method was perceived to be higher in radiology and lower in pathology. Additionally, exposure to digital images was higher in radiology than pathology, with some radiologists exclusively having been trained and/or practicing with digital images. The integration of digital images both improved and reduced efficiency in routine and non-routine workflow patterns in both settings, and was variable across the different organizations. 
A comparison of these findings with prior research on adoption of other health information technologies suggests that the barriers to adoption of digital images in pathology are relatively tractable.\n\n\nCONCLUSIONS\nImproving performance using digital images in pathology would likely accelerate adoption of innovative technologies that are facilitated by the use of digital images, such as electronic imaging databases, electronic health records, double reading for challenging cases, and computer-aided diagnostic systems." }, { "pmid": "20172780", "title": "Expectation-maximization-driven geodesic active contour with overlap resolution (EMaGACOR): application to lymphocyte segmentation on breast cancer histopathology.", "abstract": "The presence of lymphocytic infiltration (LI) has been correlated with nodal metastasis and tumor recurrence in HER2+ breast cancer (BC). The ability to automatically detect and quantify extent of LI on histopathology imagery could potentially result in the development of an image based prognostic tool for human epidermal growth factor receptor-2 (HER2+) BC patients. Lymphocyte segmentation in hematoxylin and eosin (H&E) stained BC histopathology images is complicated by the similarity in appearance between lymphocyte nuclei and other structures (e.g., cancer nuclei) in the image. Additional challenges include biological variability, histological artifacts, and high prevalence of overlapping objects. Although active contours are widely employed in image segmentation, they are limited in their ability to segment overlapping objects and are sensitive to initialization. In this paper, we present a new segmentation scheme, expectation-maximization (EM) driven geodesic active contour with overlap resolution (EMaGACOR), which we apply to automatically detecting and segmenting lymphocytes on HER2+ BC histopathology images. 
EMaGACOR utilizes the expectation-maximization algorithm for automatically initializing a geodesic active contour (GAC) and includes a novel scheme based on heuristic splitting of contours via identification of high concavity points for resolving overlapping structures. EMaGACOR was evaluated on a total of 100 HER2+ breast biopsy histology images and was found to have a detection sensitivity of over 86% and a positive predictive value of over 64%. By comparison, the EMaGAC model (without overlap resolution) and GAC model yielded corresponding detection sensitivities of 42% and 19%, respectively. Furthermore, EMaGACOR was able to correctly resolve over 90% of overlaps between intersecting lymphocytes. Hausdorff distance (HD) and mean absolute distance (MAD) for EMaGACOR were found to be 2.1 and 0.9 pixels, respectively, and significantly better compared to the corresponding performance of the EMaGAC and GAC models. EMaGACOR is an efficient, robust, reproducible, and accurate segmentation technique that could potentially be applied to other biomedical image analysis problems." }, { "pmid": "22167559", "title": "Robust segmentation of overlapping cells in histopathology specimens using parallel seed detection and repulsive level set.", "abstract": "Automated image analysis of histopathology specimens could potentially provide support for early detection and improved characterization of breast cancer. Automated segmentation of the cells comprising imaged tissue microarrays (TMAs) is a prerequisite for any subsequent quantitative analysis. Unfortunately, crowding and overlapping of cells present significant challenges for most traditional segmentation algorithms. In this paper, we propose a novel algorithm that can reliably separate touching cells in hematoxylin-stained breast TMA specimens that have been acquired using a standard RGB camera. The algorithm is composed of two steps. 
It begins with a fast, reliable object center localization approach that utilizes single-path voting followed by mean-shift clustering. Next, the contour of each cell is obtained using a level set algorithm based on an interactive model. We compared the experimental results with those reported in the most current literature. Finally, performance was evaluated by comparing the pixel-wise accuracy provided by human experts with that produced by the new automated segmentation algorithm. The method was systematically tested on 234 image patches exhibiting dense overlap and containing more than 2200 cells. It was also tested on whole slide images including blood smears and TMAs containing thousands of cells. Since the voting step of the seed detection algorithm is well suited for parallelization, a parallel version of the algorithm was implemented using graphic processing units (GPU) that resulted in significant speedup over the C/C++ implementation." }, { "pmid": "25192578", "title": "An efficient technique for nuclei segmentation based on ellipse descriptor analysis and improved seed detection algorithm.", "abstract": "In this paper, we propose an efficient method for segmenting cell nuclei in the skin histopathological images. The proposed technique consists of four modules. First, it separates the nuclei regions from the background with an adaptive threshold technique. Next, an elliptical descriptor is used to detect the isolated nuclei with elliptical shapes. This descriptor classifies the nuclei regions based on two ellipticity parameters. Nuclei clumps and nuclei with irregular shapes are then localized by an improved seed detection technique based on voting in the eroded nuclei regions. Finally, undivided nuclei regions are segmented by a marked watershed algorithm. Experimental results on 114 different image patches indicate that the proposed technique provides a superior performance in nuclei detection and segmentation." 
}, { "pmid": "24608059", "title": "Toward automatic mitotic cell detection and segmentation in multispectral histopathological images.", "abstract": "The count of mitotic cells is a critical factor in most cancer grading systems. Extracting the mitotic cell from the histopathological image is a very challenging task. In this paper, we propose an efficient technique for detecting and segmenting the mitotic cells in the high-resolution multispectral image. The proposed technique consists of three main modules: discriminative image generation, mitotic cell candidate detection and segmentation, and mitotic cell candidate classification. In the first module, a discriminative image is obtained by linear discriminant analysis using ten different spectral band images. A set of mitotic cell candidate regions is then detected and segmented by the Bayesian modeling and local-region threshold method. In the third module, a 226 dimension feature is extracted from the mitotic cell candidates and their surrounding regions. An imbalanced classification framework is then applied to perform the classification for the mitotic cell candidates in order to detect the real mitotic cells. The proposed technique has been evaluated on a publicly available dataset of 35 × 10 multispectral images, in which 224 mitotic cells are manually labeled by experts. The proposed technique is able to provide superior performance compared to the existing technique, 81.5% sensitivity rate and 33.9% precision rate in terms of detection performance, and 89.3% sensitivity rate and 87.5% precision rate in terms of segmentation performance." }, { "pmid": "23221815", "title": "Invariant delineation of nuclear architecture in glioblastoma multiforme for clinical and molecular association.", "abstract": "Automated analysis of whole mount tissue sections can provide insights into tumor subtypes and the underlying molecular basis of neoplasm. 
However, since tumor sections are collected from different laboratories, inherent technical and biological variations impede analysis for very large datasets such as The Cancer Genome Atlas (TCGA). Our objective is to characterize tumor histopathology, through the delineation of the nuclear regions, from hematoxylin and eosin (H&E) stained tissue sections. Such a representation can then be mined for intrinsic subtypes across a large dataset for prediction and molecular association. Furthermore, nuclear segmentation is formulated within a multi-reference graph framework with geodesic constraints, which enables computation of multidimensional representations, on a cell-by-cell basis, for functional enrichment and bioinformatics analysis. Here, we present a novel method, multi-reference graph cut (MRGC), for nuclear segmentation that overcomes technical variations associated with sample preparation by incorporating prior knowledge from manually annotated reference images and local image features. The proposed approach has been validated on manually annotated samples and then applied to a dataset of 377 Glioblastoma Multiforme (GBM) whole slide images from 146 patients. For the GBM cohort, multidimensional representation of the nuclear features and their organization have identified 1) statistically significant subtypes based on several morphometric indexes, 2) whether each subtype can be predictive or not, and 3) that the molecular correlates of predictive subtypes are consistent with the literature. Data and intermediaries for a number of tumor types (GBM, low grade glial, and kidney renal clear carcinoma) are available at: http://tcga.lbl.gov for correlation with TCGA molecular data. The website also provides an interface for panning and zooming of whole mount tissue sections with/without overlaid segmentation results for quality control." 
}, { "pmid": "20359767", "title": "Automated segmentation of tissue images for computerized IHC analysis.", "abstract": "This paper presents two automated methods for the segmentation of immunohistochemical tissue images that overcome the limitations of the manual approach as well as of the existing computerized techniques. The first independent method, based on unsupervised color clustering, recognizes automatically the target cancerous areas in the specimen and disregards the stroma; the second method, based on colors separation and morphological processing, exploits automated segmentation of the nuclear membranes of the cancerous cells. Extensive experimental results on real tissue images demonstrate the accuracy of our techniques compared to manual segmentations; additional experiments show that our techniques are more effective in immunohistochemical images than popular approaches based on supervised learning or active contours. The proposed procedure can be exploited for any applications that require tissues and cells exploration and to perform reliable and standardized measures of the activity of specific proteins involved in multi-factorial genetic pathologies." }, { "pmid": "19884070", "title": "Improved automatic detection and segmentation of cell nuclei in histopathology images.", "abstract": "Automatic segmentation of cell nuclei is an essential step in image cytometry and histometry. Despite substantial progress, there is a need to improve accuracy, speed, level of automation, and adaptability to new applications. This paper presents a robust and accurate novel method for segmenting cell nuclei using a combination of ideas. The image foreground is extracted automatically using a graph-cuts-based binarization. Next, nuclear seed points are detected by a novel method combining multiscale Laplacian-of-Gaussian filtering constrained by distance-map-based adaptive scale selection. 
These points are used to perform an initial segmentation that is refined using a second graph-cuts-based algorithm incorporating the method of alpha expansions and graph coloring to reduce computational complexity. Nuclear segmentation results were manually validated over 25 representative images (15 in vitro images and 10 in vivo images, containing more than 7400 nuclei) drawn from diverse cancer histopathology studies, and four types of segmentation errors were investigated. The overall accuracy of the proposed segmentation algorithm exceeded 86%. The accuracy was found to exceed 94% when only over- and undersegmentation errors were considered. The confounding image characteristics that led to most detection/segmentation errors were high cell density, high degree of clustering, poor image contrast and noisy background, damaged/irregular nuclei, and poor edge information. We present an efficient semiautomated approach to editing automated segmentation results that requires two mouse clicks per operation." }, { "pmid": "20656653", "title": "Segmenting clustered nuclei using H-minima transform-based marker extraction and contour parameterization.", "abstract": "In this letter, we present a novel watershed-based method for segmentation of cervical and breast cell images. We formulate the segmentation of clustered nuclei as an optimization problem. A hypothesis concerning the nuclei, which involves a priori knowledge with respect to the shape of nuclei, is tested to solve the optimization problem. We first apply the distance transform to the clustered nuclei. A marker extraction scheme based on the H-minima transform is introduced to obtain the optimal segmentation result from the distance map. In order to estimate the optimal h-value, a size-invariant segmentation distortion evaluation function is defined based on the fitting residuals between the segmented region boundaries and fitted models. 
Ellipsoidal modeling of contours is introduced to adjust nuclei contours for more effective analysis. Experiments on a variety of real microscopic cell images show that the proposed method yields more accurate segmentation results than the state-of-the-art watershed-based methods." }, { "pmid": "21947866", "title": "Machine vision-based localization of nucleic and cytoplasmic injection sites on low-contrast adherent cells.", "abstract": "Automated robotic bio-micromanipulation can improve the throughput and efficiency of single-cell experiments. Adherent cells, such as fibroblasts, include a wide range of mammalian cells and are usually very thin with highly irregular morphologies. Automated micromanipulation of these cells is a beneficial yet challenging task, where the machine vision sub-task is addressed in this article. The necessary but neglected problem of localizing injection sites on the nucleus and the cytoplasm is defined and a novel two-stage model-based algorithm is proposed. In Stage I, the gradient information associated with the nucleic regions is extracted and used in a mathematical morphology clustering framework to roughly localize the nucleus. Next, this preliminary segmentation information is used to estimate an ellipsoidal model for the nucleic region, which is then used as an attention window in a k-means clustering-based iterative search algorithm for fine localization of the nucleus and nucleic injection site (NIS). In Stage II, a geometrical model is built on each localized nucleus and employed in a new texture-based region-growing technique called Growing Circles Algorithm to localize the cytoplasmic injection site (CIS). The proposed algorithm has been tested on 405 images containing more than 1,000 NIH/3T3 fibroblast cells, and yielded the precision rates of 0.918, 0.943, and 0.866 for the NIS, CIS, and combined NIS-CIS localizations, respectively." 
}, { "pmid": "23912498", "title": "Computer-Aided Breast Cancer Diagnosis Based on the Analysis of Cytological Images of Fine Needle Biopsies.", "abstract": "The effectiveness of the treatment of breast cancer depends on its timely detection. An early step in the diagnosis is the cytological examination of breast material obtained directly from the tumor. This work reports on advances in computer-aided breast cancer diagnosis based on the analysis of cytological images of fine needle biopsies to characterize these biopsies as either benign or malignant. Instead of relying on the accurate segmentation of cell nuclei, the nuclei are estimated by circles using the circular Hough transform. The resulting circles are then filtered to keep only high-quality estimations for further analysis by a support vector machine which classifies detected circles as correct or incorrect on the basis of texture features and the percentage of nuclei pixels according to a nuclei mask obtained using Otsu's thresholding method. A set of 25 features of the nuclei is used in the classification of the biopsies by four different classifiers. The complete diagnostic procedure was tested on 737 microscopic images of fine needle biopsies obtained from patients and achieved 98.51% effectiveness. The results presented in this paper demonstrate that a computerized medical diagnosis system based on our method would be effective, providing valuable, accurate diagnostic information." }, { "pmid": "18288618", "title": "An automated method for cell detection in zebrafish.", "abstract": "Quantification of cells is a critical step towards the assessment of cell fate in neurological disease or developmental models. Here, we present a novel cell detection method for the automatic quantification of zebrafish neuronal cells, including primary motor neurons, Rohon-Beard neurons, and retinal cells. Our method consists of four steps. First, a diffused gradient vector field is produced. 
Subsequently, the orientations and magnitude information of diffused gradients are accumulated, and a response image is computed. In the third step, we perform non-maximum suppression on the response image and identify the detection candidates. In the fourth and final step the detected objects are grouped into clusters based on their color information. Using five different datasets depicting zebrafish cells, we show that our method consistently displays high sensitivity and specificity of over 95%. Our results demonstrate the general applicability of this method to different data samples, including nuclear staining, immunohistochemistry, and cell death detection." }, { "pmid": "24557687", "title": "Automatic Ki-67 counting using robust cell detection and online dictionary learning.", "abstract": "Ki-67 proliferation index is a valid and important biomarker to gauge neuroendocrine tumor (NET) cell progression within the gastrointestinal tract and pancreas. Automatic Ki-67 assessment is very challenging due to complex variations of cell characteristics. In this paper, we propose an integrated learning-based framework for accurate automatic Ki-67 counting for NET. The main contributions of our method are: 1) A robust cell counting and boundary delineation algorithm that is designed to localize both tumor and nontumor cells. 2) A novel online sparse dictionary learning method to select a set of representative training samples. 3) An automated framework that is used to differentiate tumor from nontumor cells (such as lymphocytes) and immunopositive from immunonegative tumor cells for the assessment of Ki-67 proliferation index. The proposed method has been extensively tested using 46 NET cases. The performance is compared with pathologists' manual annotations. The automatic Ki-67 counting is quite accurate compared with pathologists' manual annotations. This is much more accurate than existing methods." 
}, { "pmid": "25958195", "title": "Sparse Non-negative Matrix Factorization (SNMF) based color unmixing for breast histopathological image analysis.", "abstract": "Color deconvolution has emerged as a popular method for color unmixing as a pre-processing step for image analysis of digital pathology images. One deficiency of this approach is that the stain matrix is pre-defined which requires specific knowledge of the data. This paper presents an unsupervised Sparse Non-negative Matrix Factorization (SNMF) based approach for color unmixing. We evaluate this approach for color unmixing of breast pathology images. Compared to Non-negative Matrix Factorization (NMF), the sparseness constraint imposed on coefficient matrix aims to use more meaningful representation of color components for separating stained colors. In this work SNMF is leveraged for decomposing pure stained color in both Immunohistochemistry (IHC) and Hematoxylin and Eosin (H&E) images. SNMF is compared with Principle Component Analysis (PCA), Independent Component Analysis (ICA), Color Deconvolution (CD), and Non-negative Matrix Factorization (NMF) based approaches. SNMF demonstrated improved performance in decomposing brown diaminobenzidine (DAB) component from 36 IHC images as well as accurately segmenting about 1400 nuclei and 500 lymphocytes from H & E images." }, { "pmid": "21869365", "title": "A computational approach to edge detection.", "abstract": "This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. 
A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge." }, { "pmid": "24704158", "title": "Automated quantification of MART1-verified Ki-67 indices: useful diagnostic aid in melanocytic lesions.", "abstract": "The MART1-verified Ki-67 proliferation index is a valuable aid to distinguish melanomas from nevi. Because such indices are quantifiable by image analysis, they may provide a novel automated diagnostic aid. This study aimed to validate the diagnostic performance of automated dermal Ki-67 indices and to explore the diagnostic capability of epidermal Ki-67 in lesions both with and without a dermal component. In addition, we investigated the automated indices' ability to predict sentinel lymph node (SLN) status. Paraffin-embedded tissues from 84 primary cutaneous melanomas (35 with SLN biopsy), 22 melanoma in situ, and 270 nevi were included consecutively. Whole slide images were captured from Ki-67/MART1 double stains, and image analysis computed Ki-67 indices for epidermis and dermis. 
In lesions with a dermal component, the area under the receiver operating characteristic (ROC) curve was 0.79 (95% confidence interval [CI], 0.72-0.86) for dermal indices. By excluding lesions with few melanocytic cells, this area increased to 0.93 (95% CI, 0.88-0.98). A simultaneous analysis of epidermis and dermis yielded an ROC area of 0.94 (95% CI, 0.91-0.96) for lesions with a dermal component and 0.98 (95% CI, 0.97-1.0) for lesions with a considerable dermal component. For all lesions, the ROC area of the simultaneous analysis was 0.89 (95% CI, 0.85-0.92). SLN-positive patients generally had a higher index than SLN-negative patients (P ≤ .003). Conclusively, an automated diagnostic aid seems feasible in melanocytic pathology. The dermal Ki-67 index was inferior to a combined epidermal and dermal index in diagnosis but valuable for predicting the SLN status of our melanoma patients." } ]
Scientific Reports
27703256
PMC5050509
10.1038/srep34759
Feature Subset Selection for Cancer Classification Using Weight Local Modularity
"Microarray is recently becoming an important tool for profiling the global gene expression patterns(...TRUNCATED)
"Related WorkOwing to the importance of gene selection in the analysis of the microarray dataset and(...TRUNCATED)
["23124059","17720704","16790051","11435405","15327980","11435405","22149632","15680584","16119262",(...TRUNCATED)
[{"pmid":"23124059","title":"Selection of interdependent genes via dynamic relevance analysis for ca(...TRUNCATED)
Frontiers in Neuroscience
27774048
PMC5054006
10.3389/fnins.2016.00454
Design and Evaluation of Fusion Approach for Combining Brain and Gaze Inputs for Target Selection
"Gaze-based interfaces and Brain-Computer Interfaces (BCIs) allow for hands-free human–computer in(...TRUNCATED)
"2. Related workThis section presents the most relevant studies related to the scope of this paper. (...TRUNCATED)
[ "23486216", "23594762", "22589242", "20582271", "8361834", "16933428" ]
[{"pmid":"23486216","title":"Enhanced perception of user intention by combining EEG and gaze-trackin(...TRUNCATED)
JMIR Medical Informatics
27658571
PMC5054236
10.2196/medinform.5353
Characterizing the (Perceived) Newsworthiness of Health Science Articles: A Data-Driven Approach
"BackgroundHealth science findings are primarily disseminated through manuscript publications. Infor(...TRUNCATED)
"Motivation and Related WorkThe news media are powerful conduits by which to disseminate important i(...TRUNCATED)
[ "15249264", "16641081", "19051112", "22546317", "15253997", "12038933", "25498121" ]
[{"pmid":"15249264","title":"Health attitudes, health cognitions, and health behaviors among Interne(...TRUNCATED)
BioData Mining
27777627
PMC5057496
10.1186/s13040-016-0110-8
"FEDRR: fast, exhaustive detection of redundant hierarchical relations for quality improvement of la(...TRUNCATED)
"BackgroundRedundant hierarchical relations refer to such patterns as two paths from one concept to (...TRUNCATED)
"Related workThere has been related work on exploring redundant relations in biomedical ontologies o(...TRUNCATED)
[ "17095826", "26306232", "22580476", "18952949", "16929044", "19475727", "25991129", "23911553" ]
[{"pmid":"17095826","title":"SNOMED-CT: The advanced terminology and coding system for eHealth.","ab(...TRUNCATED)