Koller, Daphne
Overview
Works:  88 works in 122 publications in 1 language and 824 library holdings 

Genres:  Educational films, Internet videos, Conference papers and proceedings 
Roles:  Author, Thesis advisor 
Classifications:  QA279.5, 519.5420285 
Most widely held works by Daphne Koller
Probabilistic graphical models : principles and techniques by Daphne Koller (Book)
15 editions published between 2009 and 2012 in English and held by 464 WorldCat member libraries worldwide
Proceedings of the annual Conference on Uncertainty in Artificial Intelligence, available from 1991 to the present. Since 1985, the Conference on Uncertainty in Artificial Intelligence (UAI) has been the primary international forum for exchanging results on the use of principled uncertain-reasoning methods in intelligent systems. The UAI Proceedings have become a basic reference for researchers and practitioners who want to know about both theoretical advances and the latest applied developments in the field
TEDTalks : Daphne Koller - What We're Learning from Online Education (Visual)
1 edition published in 2012 in English and held by 154 WorldCat member libraries worldwide
Educator Daphne Koller is enticing top universities to put their most intriguing courses online for free, not just as a service, but as a way to research how people learn. In this TEDTalk, Koller explains how Coursera, a social entrepreneurship company co-founded with Andrew Ng, tracks each keystroke, quiz, peer-to-peer discussion, and self-graded assignment to build an unprecedented pool of data on how knowledge is processed
Uncertainty in artificial intelligence : proceedings of the seventeenth conference (2001), August 2-5, 2001, University of Washington, Seattle, Washington by Jack Breese (Book)
5 editions published in 2001 in English and held by 37 WorldCat member libraries worldwide
From knowledge to belief by Daphne Koller (Book)
7 editions published between 1993 and 1994 in English and held by 11 WorldCat member libraries worldwide
We use techniques from finite model theory to analyze the computational aspects of random worlds. The problem of computing degrees of belief is undecidable in general. However, for unary knowledge bases, a tight connection to the principle of maximum entropy often allows us to compute degrees of belief
Adaptive probabilistic networks by Stuart J Russell (Book)
2 editions published in 1994 in English and held by 7 WorldCat member libraries worldwide
Representation dependence in probabilistic inference by Joseph Y Halpern (Book)
2 editions published in 1995 in English and held by 6 WorldCat member libraries worldwide
Abstract: "Non-deductive reasoning systems are often representation dependent: representing the same situation in two different ways may cause such a system to return two different answers. This is generally viewed as a significant problem. For example, the principle of maximum entropy has been subjected to much criticism due to its representation dependence. There has, however, been almost no work investigating representation dependence. In this paper, we formalize this notion and show that it is not a problem specific to maximum entropy. In fact, we show that any probabilistic inference system that sanctions certain important patterns of reasoning, such as a minimal default assumption of independence, must suffer from representation dependence. We then show that invariance under a restricted class of representation changes can form a reasonable compromise between representation dependence and other desiderata."
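The representation dependence the abstract describes can be seen in the textbook colour example (an illustration added here, not taken from the paper): with no constraints, maximum entropy picks the uniform distribution, so splitting one outcome into two changes the answer to the same question.

```python
from fractions import Fraction

def max_entropy_uniform(outcomes):
    """With no constraints, the maximum-entropy distribution over a
    finite outcome space is uniform."""
    p = Fraction(1, len(outcomes))
    return {o: p for o in outcomes}

# Representation A: the colour is either "red" or "not red".
dist_a = max_entropy_uniform(["red", "not-red"])
# Representation B: "not red" is refined into "blue" and "green".
dist_b = max_entropy_uniform(["red", "blue", "green"])

# The same question, "how likely is red?", gets two different answers.
print(dist_a["red"])  # 1/2
print(dist_b["red"])  # 1/3
```

The two representations describe the same situation, yet the inferred probability of "red" differs, which is exactly the phenomenon the paper formalizes.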
Asymptotic conditional probabilities : the unary case by A. J Grove (Book)
4 editions published in 1993 in English and held by 6 WorldCat member libraries worldwide
Abstract: "Motivated by problems that arise in computing degrees of belief, we consider the problem of computing asymptotic conditional probabilities for first-order sentences. Given first-order sentences φ and θ, we consider the structures with domain {1, ..., N} that satisfy θ, and compute the fraction of them in which φ is true. We then consider what happens to this fraction as N gets large. This extends the work on 0-1 laws that considers the limiting probability of first-order sentences, by considering asymptotic conditional probabilities. As shown in [Lio69, GHK93], in the general case, asymptotic conditional probabilities do not always exist, and most questions relating to this issue are highly undecidable. These results, however, all depend on the assumption that θ can use a non-unary predicate symbol. Liogon'kiĭ [Lio69] shows that if we condition on formulas θ involving unary predicate symbols only (but no equality or constant symbols), then the asymptotic conditional probability does exist and can be effectively computed. This is the case even if we place no corresponding restrictions on φ. We extend this result here to the case where θ involves equality and constants. We show that the complexity of computing the limit depends on various factors, such as the depth of quantifier nesting, or whether the vocabulary is finite or infinite. We completely characterize the complexity of the problem in the different cases, and show related results for the associated approximation problem."
Random worlds and maximum entropy by A. J Grove (Book)
2 editions published in 1994 in English and held by 3 WorldCat member libraries worldwide
Abstract: "Given a knowledge base KB containing first-order and statistical facts, we consider a principled method, called the random-worlds method, for computing a degree of belief that some formula φ holds given KB. If the domain has size N, then we can consider all possible worlds, or first-order models, with domain {1, ..., N} that satisfy KB, and compute the fraction of them in which φ is true. We define the degree of belief to be the asymptotic value of this fraction as N grows large. We show that when the vocabulary underlying φ and KB uses constants and unary predicates only, we can naturally associate an entropy with each world. As N grows larger, there are many more worlds with higher entropy. Therefore, we can use a maximum entropy computation to compute the degree of belief. This result is in a similar spirit to previous work in physics (e.g., [Jay78]) and artificial intelligence (e.g., [PV89, Sha89]), but is far more general. Of equal interest to the actual results themselves are the numerous subtle issues we must address when formulating it. For languages with binary predicate symbols, the random-worlds method continues to make sense, but there no longer seems to be any useful connection to maximum entropy. It is difficult to see how maximum entropy can be applied at all. In fact, results from [GHK93a] show that even generalizations of maximum entropy are unlikely to be useful. These observations suggest unexpected limitations to the applicability of maximum entropy methods."
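The random-worlds computation the abstract describes can be sketched by brute force for a toy vocabulary with a single unary predicate (an illustrative sketch added here, not the paper's algorithm; the example KB and names are invented):

```python
from itertools import product
from fractions import Fraction

def degree_of_belief(phi, kb, n):
    """Fraction of worlds over domain {1, ..., n} satisfying kb in
    which phi also holds. A 'world' here is just the extension of a
    single unary predicate P, encoded as a tuple of booleans."""
    worlds = [w for w in product([False, True], repeat=n) if kb(w)]
    return Fraction(sum(1 for w in worlds if phi(w)), len(worlds))

# KB: "some element satisfies P"; phi: "element 1 satisfies P".
kb = lambda w: any(w)
phi = lambda w: w[0]

# As the domain grows, the fraction converges (here toward 1/2),
# which is the asymptotic value taken as the degree of belief.
for n in (2, 4, 8):
    print(n, degree_of_belief(phi, kb, n))
```

The paper's contribution is showing that, for unary vocabularies, this limit can often be computed via a maximum entropy calculation instead of the exponential enumeration above.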
On the complexity of two-person zero-sum games in extensive form by Daphne Koller (Book)
2 editions published in 1990 in English and held by 3 WorldCat member libraries worldwide
Alignment of cryo-electron tomography images using Markov Random Fields by Fernando Amat Gil
1 edition published in 2010 in English and held by 2 WorldCat member libraries worldwide
Cryo-electron tomography (CET) is the only imaging technology capable of visualizing the 3D organization of intact bacterial whole cells at nanometer resolution in situ. However, quantitative image analysis of CET datasets is extremely challenging due to very low signal-to-noise ratio (well below 0 dB), missing data, and heterogeneity of biological structures. In this thesis, we present a probabilistic framework to align CET images in order to improve resolution and create structural models of different biological structures. The alignment problem of 2D and 3D CET images is cast as a Markov Random Field (MRF), where each node in the graph represents a landmark in the image. We connect pairs of nodes based on local spatial correlations and we find the "best" correspondence between the two graphs. In this correspondence problem, the "best" solution maximizes the probability score in the MRF. This probability is the product of singleton potentials that measure image similarity between nodes and pairwise potentials that measure deformations between edges. Well-known approximate inference algorithms such as Loopy Belief Propagation (LBP) are used to obtain the "best" solution. We present results in two specific applications: automatic alignment of tilt series using fiducial markers, and subtomogram alignment. In the first case we present RAPTOR, which is being used in several labs to enable true high-throughput tomography. In the second case our approach is able to reach the contrast transfer function limit in low-SNR samples from whole cells, as well as revealing atomic-resolution details invisible to the naked eye through nanogold labeling
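The product-of-potentials objective the abstract describes can be illustrated on a toy correspondence problem (a sketch added here, not the thesis code; the potentials and landmark setup are invented). Exhaustive search finds the exact MAP assignment that LBP approximates at scale:

```python
import math
from itertools import product

def map_assignment(n_nodes, candidates, singleton, pairwise, edges):
    """Brute-force MAP inference in a tiny MRF: score each joint
    assignment by the product of singleton and pairwise potentials
    and keep the best. Exhaustive search is only feasible for toy
    graphs; real CET graphs need approximate inference such as LBP."""
    best, best_score = None, -1.0
    for assign in product(candidates, repeat=n_nodes):
        score = 1.0
        for i in range(n_nodes):
            score *= singleton(i, assign[i])     # image similarity
        for i, j in edges:
            score *= pairwise(assign[i], assign[j])  # deformation cost
        if score > best_score:
            best, best_score = assign, score
    return best

# Toy problem: match 3 template landmarks to image positions 0..3.
# Singleton potential peaks where landmark i resembles position i;
# pairwise potential prefers neighbours to stay about 1 apart.
sim = lambda i, x: math.exp(-(x - i) ** 2)
spacing = lambda x, y: math.exp(-((y - x) - 1) ** 2)

best = map_assignment(3, range(4), sim, spacing, [(0, 1), (1, 2)])
print(best)  # (0, 1, 2)
```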
Efficient computation of equilibria for extensive two-person games by Daphne Koller (Book)
1 edition published in 1994 in English and held by 2 WorldCat member libraries worldwide
Abstract: "The Nash equilibria of a two-person, non-zero-sum game are the solutions of a certain linear complementarity problem (LCP). In order to use this for solving a game in extensive form, it is first necessary to convert the game to a strategic description such as the normal form. The classical normal form, however, is often exponentially large in the size of the game tree. Hence, finding equilibria of extensive games typically implies exponential blowup in terms of both time and space. In this paper we suggest an alternative approach, based on the sequence form of the game. For a game with perfect recall, the sequence form is a linear-sized strategic description, which results in an LCP of linear size. For this LCP, we show that an equilibrium can be found efficiently by Lemke's algorithm, a generalization of the Lemke-Howson method."
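The size gap between the two strategic descriptions can be shown with a back-of-the-envelope count (a sketch added here, assuming a player with k information sets and two actions at each; not taken from the paper): pure strategies multiply across information sets, while sequences only add.

```python
def normal_form_size(infosets, actions=2):
    """Number of pure strategies: one action is chosen at every
    information set, so the count is a product and grows
    exponentially in the number of information sets."""
    return actions ** infosets

def sequence_form_size(infosets, actions=2):
    """Number of sequences (for a player with perfect recall): the
    empty sequence plus one sequence per action at each information
    set, so the count grows only linearly."""
    return 1 + infosets * actions

for k in (5, 10, 20):
    print(k, normal_form_size(k), sequence_form_size(k))
```

This is why the sequence-form LCP stays linear in the game tree while the classical normal-form LCP blows up exponentially.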
The digital patient : machine learning techniques for analyzing electronic health record data by Suchi Saria
1 edition published in 2011 in English and held by 2 WorldCat member libraries worldwide
The current unprecedented rate of digitization of longitudinal health data (continuous device monitoring data, laboratory measurements, medication orders, treatment reports, and reports of physician assessments) allows visibility into patient health at increasing levels of detail. A clearer lens into this data could help improve decision making both for individual physicians on the front lines of care and for policy makers setting national direction. However, this type of data is high-dimensional (an infant with no prior clinical history can have more than 1000 different measurements in the ICU), highly unstructured (the measurements occur irregularly, and different numbers and types of measurements are taken for different patients), and heterogeneous (from ultrasound assessments to lab tests to continuous monitor data). Furthermore, the data is often sparse, systematically not present, and the underlying system is non-stationary. Extracting the full value of the existing data requires novel approaches. In this thesis, we develop novel methods to show how longitudinal health data contained in Electronic Health Records (EHRs) can be harnessed for making novel clinical discoveries. For this, one requires access to patient outcome data: which patient has which complications. We present a method for automated extraction of patient outcomes from EHR data; our method shows how natural-language cues from physicians' notes can be combined with clinical events that occur during a patient's length of stay in the hospital to extract significantly higher-quality annotations than previous state-of-the-art systems. We develop novel methods for exploratory analysis and structure discovery in bedside monitor data. This data forms the bulk of the data collected on any patient; yet it is not utilized in any substantive way post-collection. We present methods to discover recurring shape and dynamic signatures in this data. 
While we primarily focus on clinical time series, our methods also generalize to other continuous-valued time series data. Our analysis of the bedside monitor data led us to a novel use of this data for risk prediction in infants. Using features automatically extracted from physiologic signals collected in the first 3 hours of life, we develop PhysiScore, a tool that predicts infants at risk for major complications downstream. PhysiScore is both fully automated and significantly more accurate than the current standard of care. It can be used for resource optimization within a NICU, managing infant transport to a higher level of care, and parental counseling. Overall, this thesis illustrates how the use of machine learning for analyzing these large-scale digital patient data repositories can yield new clinical discoveries and potentially useful tools for improving patient care
From statistical knowledge bases to degrees of belief by Thomas J. Watson IBM Research Center (Book)
1 edition published in 1994 in English and held by 1 WorldCat member library worldwide
Making Rational Decisions Using Adaptive Utility Elicitation
1 edition published in 2000 in Undetermined and held by 1 WorldCat member library worldwide
Representations and solutions for game-theoretic problems by Daphne Koller
1 edition published in 1997 in English and held by 1 WorldCat member library worldwide
Restricted Bayes Optimal Classifiers
1 edition published in 2000 in Undetermined and held by 1 WorldCat member library worldwide
Probabilistic models for region-based scene understanding by Stephen Gould
1 edition published in 2010 in English and held by 1 WorldCat member library worldwide
One of the long-term goals of computer vision is to be able to understand the world through visual images. This daunting task involves reasoning simultaneously about objects, regions, and 3D geometry. Traditionally, computer vision research has tackled these tasks in isolation: independent detectors for finding objects, image segmentation algorithms for defining regions, and specialized monocular depth perception methods for reconstructing geometry. Unfortunately, this isolated reasoning can lead to inconsistent interpretations of the scene. In this thesis we develop a unified probabilistic model that avoids these inconsistencies. We introduce a region-based representation of the scene in which pixels are grouped together to form consistent regions. Each region is then annotated with a semantic and geometric class label. Next, we extend our representation to include the concept of objects, which can be comprised of multiple regions. Finally, we show how our region-based representation can be used to interpret the 3D structure of the scene. Importantly, we model the scene using a coherent probabilistic model over random variables defined by our region-based representation. This enforces consistency between tasks and allows contextual dependencies to be modeled across tasks, e.g., that sky should be above the horizon, and ground below it. Finally, we present an efficient algorithm for performing inference in our model, and demonstrate state-of-the-art results on a number of standard tasks
Sensors & Symbols: An Integrated Framework (Book)
2 editions published in 1999 in English and held by 1 WorldCat member library worldwide
The goal of this effort was to provide a unified probabilistic framework that integrates symbolic and sensory reasoning. Such a framework would allow sensor data to be analyzed in terms of high-level symbolic models. It would also allow the results of high-level analysis to guide the low-level sensor interpretation task and to help in resolving ambiguities in the sensor data. Our approach was based on the framework of probabilistic graphical models, which allows us to build systems that learn and reason with complex models, encompassing both low-level continuous sensor data and high-level symbolic concepts. Over the five years of the project, we explored two main thrusts: inference and learning in hybrid and temporal Bayesian networks, and mapping and modeling of 3D physical environments. Our progress on each of these two directions is detailed in the attached report
Knowledge Representation for an Uncertain World (Book)
2 editions published in 1997 in English and held by 1 WorldCat member library worldwide
Any application where an intelligent agent interacts with the real world must deal with the problem of uncertainty. Bayesian belief networks have become dominant in addressing this issue. This is a framework based on principled probabilistic semantics, which achieves effective knowledge representation and inference capabilities by utilizing the locality structure in the domain: typically, only very few aspects of the situation directly affect each other. Despite their success, belief networks are inadequate as a knowledge representation language for large, complex domains: their attribute-based nature does not allow us to express general rules that hold in many different circumstances. This prevents knowledge from being shared among applications; the initial knowledge acquisition cost has to be paid for each new domain. It also inhibits the construction of large, complex networks. We deal with this issue by presenting a rich knowledge-representation language from which belief networks can be constructed to suit specific circumstances; algorithms for learning the network parameters from data; and fast approximate inference algorithms designed to deal with the large networks that result. We show how these techniques can be applied in domains involving continuous variables, in situations where the world changes over time, and in the context of planning under uncertainty
Statistical analyses on high-throughput sequencing of B cell and T cell receptors by Yi Liu(
)
1 edition published in 2014 in English and held by 1 WorldCat member library worldwide
The immune system is typically described as having two major branches: the adaptive branch and the innate branch. In the adaptive immune system, T cells and B cells learn to discern foreign pathogens from self. B cell repertoires mediate the host's ability to mount appropriately pathogen-specific humoral responses against immunological challenges. VDJ somatic recombination of the immunoglobulin chains, affinity maturation, and B cell clonal selection all contribute to the generation of a healthy B cell repertoire. Despite starting from only a limited number of genomic segments, these processes lead to diverse and optimized repertoires of clones, where each clone has its own distinct binding specificity for foreign antigens. High-depth sequencing data on the repertoire of B cell receptors has enabled us to examine the diverse receptors in great detail. My thesis will showcase three aspects of the B cell repertoire in terms of how they relate to ageing and/or pathology: clonal convergence, clonal expansion, and phylogenetic hypermutation. In a separate but related work, we apply a novel combination of sequencing method and statistical techniques to examine T cell receptors. We raised the lower bounds on T cell receptor repertoire beta-chain richness in healthy humans, and examined how T cell receptor diversity relates to age
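The abstract above mentions lower-bounding T cell receptor repertoire richness from sequencing data. The thesis's exact statistical method is not specified here, but one standard nonparametric lower bound of this kind is the Chao1 estimator, sketched below with made-up clone counts:

```python
# A minimal sketch of the Chao1 estimator, a standard nonparametric
# lower bound on species (here, clone) richness from count data.
# This is an illustrative assumption, not the thesis's stated method;
# the clone counts in the example are invented.

def chao1(clone_counts):
    """Lower bound on the number of distinct clones in the repertoire.

    clone_counts: per-observed-clone read counts (each >= 1).
    Chao1 = S_obs + f1^2 / (2 * f2), where f1 and f2 are the numbers
    of clones seen exactly once and exactly twice.
    """
    s_obs = len(clone_counts)
    f1 = sum(1 for c in clone_counts if c == 1)
    f2 = sum(1 for c in clone_counts if c == 2)
    if f2 > 0:
        return s_obs + f1 * f1 / (2 * f2)
    # bias-corrected form when no doubletons are observed
    return s_obs + f1 * (f1 - 1) / 2

# Six observed clones: four singletons, one doubleton, one seen 5 times.
estimate = chao1([1, 1, 1, 1, 2, 5])  # 6 + 4*4/(2*1) = 14.0
```

The intuition is that many singletons imply many clones remain unseen, so the estimate grows with f1; the true richness can only be higher than this bound.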
Associated Subjects
Algorithms Artificial intelligence Computerassisted instruction Education Educational technology Equality Game theory Graphical modeling (Statistics) Inference Instructional systems Internet in education Knowledge representation (Information theory) Maximum entropy method Schools Soft computing Technology Uncertainty (Information theory)
Alternative Names
Dafne Kollere
Daphne Koller amerykańska informatyczka
Daphne Koller informatica statunitense
Daphne Koller informatica uit Israël
Daphne Koller informaticienne américaine
Daphne Koller profesora estadounidense de informática
Daphne Koller USamerikanische Informatikerin
Koller, Daphne
Дафна Коллер
Дафни Колер
Դաֆնա Կոլլեր
דפנה קולר
دافنی کالر
دافنی کالەر
達芙妮·科勒