Koller, Daphne
Overview
Most widely held works about Daphne Koller
Most widely held works by Daphne Koller
Probabilistic graphical models : principles and techniques
by Daphne Koller
(Book)
11 editions published between 2009 and 2011 in English and held by 384 WorldCat member libraries worldwide
Proceedings of the annual Conference on Uncertainty in Artificial Intelligence, available from 1991 to the present. Since 1985, the Conference on Uncertainty in Artificial Intelligence (UAI) has been the primary international forum for exchanging results on the use of principled uncertain-reasoning methods in intelligent systems. The UAI Proceedings have become a basic reference for researchers and practitioners who want to know about both theoretical advances and the latest applied developments in the field.
Uncertainty in artificial intelligence : proceedings of the seventeenth conference (2001), August 2-5, 2001, University of Washington, Seattle, Washington
by Conference on Uncertainty in Artificial Intelligence
(Book)
3 editions published in 2001 in English and held by 34 WorldCat member libraries worldwide
From knowledge to belief
by Daphne Koller
(Book)
4 editions published between 1993 and 1994 in English and held by 7 WorldCat member libraries worldwide
Adaptive probabilistic networks
by Stuart J Russell
(Book)
2 editions published in 1994 in English and held by 7 WorldCat member libraries worldwide
Representation dependence in probabilistic inference
by Joseph Y Halpern
(Book)
1 edition published in 1995 in English and held by 5 WorldCat member libraries worldwide
Abstract: "Nondeductive reasoning systems are often representation dependent: representing the same situation in two different ways may cause such a system to return two different answers. This is generally viewed as a significant problem. For example, the principle of maximum entropy has been subjected to much criticism due to its representation dependence. There has, however, been almost no work investigating representation dependence. In this paper, we formalize this notion and show that it is not a problem specific to maximum entropy. In fact, we show that any probabilistic inference system that sanctions certain important patterns of reasoning, such as minimal default assumption of independence, must suffer from representation dependence. We then show that invariance under a restricted class of representation changes can form a reasonable compromise between representation dependence and other desiderata."
Asymptotic conditional probabilities : the non-unary case
by Adam Grove
(Book)
2 editions published in 1993 in English and held by 4 WorldCat member libraries worldwide
Abstract: "Motivated by problems that arise in computing degrees of belief, we consider the problem of computing asymptotic conditional probabilities for first-order sentences. Given first-order sentences φ and θ, we consider the structures with domain {1, ..., N} that satisfy θ, and compute the fraction of them in which φ is true. We then consider what happens to this fraction as N gets large. This extends the work on 0-1 laws that considers the limiting probability of first-order sentences, by considering asymptotic conditional probabilities. As shown by Liogon'kiĭ [Lio69], if there is a non-unary predicate symbol in the vocabulary, asymptotic conditional probabilities do not always exist. We extend this result to show that asymptotic conditional probabilities do not always exist for any reasonable notion of limit. Liogon'kiĭ also showed that the problem of deciding whether the limit exists is undecidable. We analyze the complexity of three problems with respect to this limit: deciding whether it is well-defined, whether it exists, and whether it lies in some nontrivial interval. Matching upper and lower bounds are given for all three problems, showing them to be highly undecidable."
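In the unary case (which the abstract contrasts with the problematic non-unary one), the limiting fraction can be computed by direct enumeration. A minimal sketch with a hypothetical example, not taken from the paper: one unary predicate P, θ = "at least one element satisfies P", φ = "element 1 satisfies P".

```python
from itertools import product
from fractions import Fraction

def asymptotic_fraction(n):
    # Structures over domain {1, ..., n} with one unary predicate P,
    # encoded as n-tuples of booleans (whether P holds of each element).
    # theta: "at least one element satisfies P"; phi: "element 1 satisfies P".
    worlds = [w for w in product([False, True], repeat=n) if any(w)]
    phi_worlds = [w for w in worlds if w[0]]
    return Fraction(len(phi_worlds), len(worlds))

# asymptotic_fraction(2) = 2/3, asymptotic_fraction(4) = 8/15, ...;
# the conditional probability tends to 1/2 as the domain grows.
```
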
A machine vision based system for guiding lane-change maneuvers
by Jitendra Malik
(Book)
2 editions published in 1995 in English and held by 2 WorldCat member libraries worldwide
On the complexity of two-person zero-sum games in extensive form
by Daphne Koller
(Book)
1 edition published in 1990 in English and held by 2 WorldCat member libraries worldwide
Alignment of cryo-electron tomography images using Markov Random Fields
by Fernando Amat Gil
1 edition published in 2010 in English and held by 2 WorldCat member libraries worldwide
Cryo-electron tomography (CET) is the only imaging technology capable of visualizing the 3D organization of intact bacterial whole cells at nanometer resolution in situ. However, quantitative image analysis of CET datasets is extremely challenging due to very low signal-to-noise ratio (well below 0 dB), missing data, and heterogeneity of biological structures. In this thesis, we present a probabilistic framework to align CET images in order to improve resolution and create structural models of different biological structures. The alignment problem of 2D and 3D CET images is cast as a Markov Random Field (MRF), where each node in the graph represents a landmark in the image. We connect pairs of nodes based on local spatial correlations and we find the "best" correspondence between the two graphs. In this correspondence problem, the "best" solution maximizes the probability score in the MRF. This probability is the product of singleton potentials that measure image similarity between nodes and pairwise potentials that measure deformations between edges. Well-known approximate inference algorithms such as Loopy Belief Propagation (LBP) are used to obtain the "best" solution. We present results in two specific applications: automatic alignment of tilt series using fiducial markers, and subtomogram alignment. In the first case we present RAPTOR, which is being used in several labs to enable real high-throughput tomography. In the second case our approach is able to reach the contrast transfer function limit in low-SNR samples from whole cells, as well as revealing atomic-resolution details invisible to the naked eye through nanogold labeling.
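The correspondence objective described in this abstract can be sketched in miniature. The toy below (not RAPTOR itself) scores an assignment of landmarks in image A to landmarks in image B as a product of singleton potentials (similarity) and pairwise potentials (distance deformation), and brute-forces the maximizer in place of loopy belief propagation; all names and data here are illustrative assumptions.

```python
from itertools import permutations
import math

def best_correspondence(sim, pos_a, pos_b):
    # sim[i][j]: singleton potential (image similarity) for matching landmark
    # i of image A to landmark j of image B; pos_a, pos_b: 2D landmark positions.
    # Pairwise potentials penalize changes in inter-landmark distances.
    n = len(sim)

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def score(assign):
        s = 1.0
        for i, j in enumerate(assign):
            s *= sim[i][j]                       # singleton potentials
        for i in range(n):
            for k in range(i + 1, n):            # pairwise potentials
                d = dist(pos_a[i], pos_a[k]) - dist(pos_b[assign[i]], pos_b[assign[k]])
                s *= math.exp(-d * d)
        return s

    # Exhaustive search over assignments stands in for loopy belief propagation.
    return max(permutations(range(n)), key=score)
```

With uninformative similarities, a shuffled and translated copy of the landmark set is still matched correctly from geometry alone, since the correct assignment has zero deformation.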
Efficient computation of equilibria for extensive two-person games
by Daphne Koller
(Book)
1 edition published in 1994 in English and held by 2 WorldCat member libraries worldwide
Abstract: "The Nash equilibria of a two-person, non-zero-sum game are the solutions of a certain linear complementarity problem (LCP). In order to use this for solving a game in extensive form, it is first necessary to convert the game to a strategic description such as the normal form. The classical normal form, however, is often exponentially large in the size of the game tree. Hence, finding equilibria of extensive games typically implies exponential blowup in terms of both time and space. In this paper we suggest an alternative approach, based on the sequence form of the game. For a game with perfect recall, the sequence form is a linear-sized strategic description, which results in an LCP of linear size. For this LCP, we show that an equilibrium can be found efficiently by Lemke's algorithm, a generalization of the Lemke-Howson method."
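For intuition about the equilibria being computed, here is a minimal sketch that solves a 2x2 bimatrix game in normal form via the indifference conditions; the paper's actual contribution, the sequence form solved with Lemke's algorithm, exists precisely to avoid this normal-form route for large game trees. This is a generic textbook computation, not the paper's method.

```python
from fractions import Fraction

def mixed_equilibrium_2x2(A, B):
    # A[i][j]: row player's payoff, B[i][j]: column player's payoff,
    # when row plays action i and column plays action j.
    # Assumes a fully mixed equilibrium exists (denominators nonzero).
    A = [[Fraction(x) for x in row] for row in A]
    B = [[Fraction(x) for x in row] for row in B]
    # Column mixes q (probability of column 0) so the row player is indifferent.
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    # Row mixes p (probability of row 0) so the column player is indifferent.
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
    return p, q

# Matching pennies: the unique equilibrium mixes uniformly, p = q = 1/2.
p, q = mixed_equilibrium_2x2([[1, -1], [-1, 1]], [[-1, 1], [1, -1]])
```
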
Random worlds and maximum entropy
by Adam Grove
(Book)
1 edition published in 1994 in English and held by 2 WorldCat member libraries worldwide
Abstract: "Given a knowledge base KB containing first-order and statistical facts, we consider a principled method, called the random-worlds method, for computing a degree of belief that some formula φ holds given KB. If the domain has size N, then we can consider all possible worlds, or first-order models, with domain {1, ..., N} that satisfy KB, and compute the fraction of them in which φ is true. We define the degree of belief to be the asymptotic value of this fraction as N grows large. We show that when the vocabulary underlying φ and KB uses constants and unary predicates only, we can naturally associate an entropy with each world. As N grows larger, there are many more worlds with higher entropy. Therefore, we can use a maximum entropy computation to compute the degree of belief. This result is in a similar spirit to previous work in physics (e.g., [Jay78]) and artificial intelligence (e.g., [PV89, Sha89]), but is far more general. Of equal interest to the actual results themselves are the numerous subtle issues we must address when formulating it. For languages with binary predicate symbols, the random-worlds method continues to make sense, but there no longer seems to be any useful connection to maximum entropy. It is difficult to see how maximum entropy can be applied at all. In fact, results from [GHK93a] show that even generalizations of maximum entropy are unlikely to be useful. These observations suggest unexpected limitations to the applicability of maximum entropy methods."
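The "many more worlds with higher entropy" claim can be illustrated by counting. With one unary predicate over a domain of size n, the number of worlds in which exactly k elements satisfy P is the binomial coefficient C(n, k), which concentrates sharply at the maximum-entropy proportion k/n = 1/2 as n grows. A toy illustration only, not the paper's general construction:

```python
from math import comb

def world_counts(n):
    # Worlds over domain {1, ..., n} with one unary predicate P, grouped by
    # the number k of elements satisfying P: there are C(n, k) such worlds.
    return [comb(n, k) for k in range(n + 1)]

counts = world_counts(20)
# C(n, k) peaks at k = n/2, the maximum-entropy proportion: of all 2^20
# worlds, more than 73% have between 8 and 12 elements satisfying P.
```
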
Probabilistic models for region-based scene understanding
by Stephen Gould
1 edition published in 2010 in English and held by 1 WorldCat member library worldwide
One of the long-term goals of computer vision is to be able to understand the world through visual images. This daunting task involves reasoning simultaneously about objects, regions, and 3D geometry. Traditionally, computer vision research has tackled these tasks in isolation: independent detectors for finding objects, image segmentation algorithms for defining regions, and specialized monocular depth perception methods for reconstructing geometry. Unfortunately, this isolated reasoning can lead to inconsistent interpretations of the scene. In this thesis we develop a unified probabilistic model that avoids these inconsistencies. We introduce a region-based representation of the scene in which pixels are grouped together to form consistent regions. Each region is then annotated with a semantic and geometric class label. Next, we extend our representation to include the concept of objects, which can be comprised of multiple regions. Finally, we show how our region-based representation can be used to interpret the 3D structure of the scene. Importantly, we model the scene using a coherent probabilistic model over random variables defined by our region-based representation. This enforces consistency between tasks and allows contextual dependencies to be modeled across tasks, e.g., that sky should be above the horizon, and ground below it. Finally, we present an efficient algorithm for performing inference in our model, and demonstrate state-of-the-art results on a number of standard tasks.
Unsupervised feature learning via sparse hierarchical representations
by Honglak Lee
1 edition published in 2010 in English and held by 1 WorldCat member library worldwide
Machine learning has proved a powerful tool for artificial intelligence and data mining problems. However, its success has usually relied on having a good feature representation of the data, and a poor representation can severely limit the performance of learning algorithms. These feature representations are often hand-designed, require significant amounts of domain knowledge and human labor, and do not generalize well to new domains. To address these issues, I will present machine learning algorithms that can automatically learn good feature representations from unlabeled data in various domains, such as images, audio, text, and robotic sensors. Specifically, I will first describe how efficient sparse coding algorithms, which represent each input example using a small number of basis vectors, can be used to learn good low-level representations from unlabeled data. I also show that this gives feature representations that yield improved performance in many machine learning tasks. In addition, building on the deep learning framework, I will present two new algorithms, sparse deep belief networks and convolutional deep belief networks, for building more complex, hierarchical representations, in which more complex features are automatically learned as a composition of simpler ones. When applied to images, this method automatically learns features that correspond to objects and decompositions of objects into object parts. These features often lead to performance competitive with or better than highly hand-engineered computer vision algorithms in object recognition and segmentation tasks. Further, the same algorithm can be used to learn feature representations from audio data. In particular, the learned features yield improved performance over state-of-the-art methods in several speech recognition tasks.
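The idea of representing an input with a small number of basis vectors can be sketched with greedy matching pursuit, a simplified stand-in for (not an implementation of) the efficient sparse coding algorithms the thesis describes; the dictionary and signal below are illustrative assumptions.

```python
def matching_pursuit(signal, dictionary, n_atoms):
    # Greedily pick up to n_atoms unit-norm atoms, each time choosing the
    # atom most correlated with the current residual, then subtracting its
    # contribution. Returns {atom index: coefficient} and the final residual.
    residual = list(signal)
    chosen = {}
    for _ in range(n_atoms):
        best = max(range(len(dictionary)),
                   key=lambda j: abs(sum(r * a for r, a in zip(residual, dictionary[j]))))
        coef = sum(r * a for r, a in zip(residual, dictionary[best]))
        chosen[best] = chosen.get(best, 0.0) + coef
        residual = [r - coef * a for r, a in zip(residual, dictionary[best])]
    return chosen, residual
```

On an orthonormal dictionary the greedy choice is exact: a signal built from two atoms is recovered with two iterations and zero residual.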
Restricted Bayes Optimal Classifiers
1 edition published in 2000 in Undetermined and held by 1 WorldCat member library worldwide
P-CLASSIC: A Tractable Probabilistic Description Logic
1 edition published in 1997 in Undetermined and held by 1 WorldCat member library worldwide
Osteoporosis pathophysiology
1 edition published in 2011 in English and held by 1 WorldCat member library worldwide
Learning structured probabilistic models for semantic role labeling
by David Terrell Vickrey
1 edition published in 2010 in English and held by 1 WorldCat member library worldwide
Teaching a computer to read is one of the most interesting and important artificial intelligence tasks. In this thesis, we focus on semantic role labeling (SRL), one important processing step on the road from raw text to a full semantic representation. Given an input sentence and a target verb in that sentence, the SRL task is to label the semantic arguments, or roles, of that verb. For example, in the sentence "Tom eats an apple," the verb "eat" has two roles, Eater = "Tom" and Thing Eaten = "apple". Most SRL systems, including the ones presented in this thesis, take as input a syntactic analysis built by an automatic syntactic parser. SRL systems rely heavily on path features constructed from the syntactic parse, which capture the syntactic relationship between the target verb and the phrase being classified. However, there are several issues with these path features. First, the path feature does not always contain all relevant information for the SRL task. Second, the space of possible path features is very large, resulting in very sparse features that are hard to learn. In this thesis, we consider two ways of addressing these issues. First, we experiment with a number of variants of the standard syntactic features for SRL. We include a large number of syntactic features suggested by previous work, many of which are designed to reduce sparsity of the path feature. We also suggest several new features, most of which are designed to capture additional information about the sentence not included in the standard path feature. We build an SRL model using the best of these new and old features, and show that this model achieves performance competitive with the current state of the art. The second method we consider is a new methodology for SRL based on labeling canonical forms.
A canonical form is a representation of a verb and its arguments that is abstracted away from the syntax of the input sentence. For example, "A car hit Bob" and "Bob was hit by a car" have the same canonical form, {Verb = "hit", Deep Subject = "a car", Deep Object = "Bob"}. Labeling canonical forms makes it much easier to generalize between sentences with different syntax. To label canonical forms, we first need to automatically extract them given an input parse. We develop a system based on a combination of hand-coded rules and machine learning. This allows us to include a large amount of linguistic knowledge and also have the robustness of a machine learning system. Our system improves significantly over a strong baseline, demonstrating the viability of this new approach to SRL. This latter method involves learning a large, complex probabilistic model. In the model we present, exact learning is tractable, but there are several natural extensions to the model for which exact learning is not possible. This is quite a general issue; in many different application domains, we would like to use probabilistic models that cannot be learned exactly. We propose a new method for learning these kinds of models based on contrastive objectives. The main idea is to learn by comparing only a few possible values of the model, instead of all possible values. This method generalizes a standard learning method, pseudo-likelihood, and is closely related to another, contrastive divergence. Previous work has mostly focused on comparing nearby sets of values; we focus on non-local contrastive objectives, which compare arbitrary sets of values. We prove several theoretical results about our model, showing that contrastive objectives attempt to enforce probability ratio constraints between the compared values.
Based on this insight, we suggest several methods for constructing contrastive objectives, including contrastive constraint generation (CCG), a cutting-plane-style algorithm that iteratively builds a good contrastive objective based on finding high-scoring values. We evaluate CCG on a machine vision task, showing that it significantly outperforms pseudo-likelihood and contrastive divergence, as well as a state-of-the-art max-margin cutting-plane algorithm.
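The core quantity behind a contrastive objective can be sketched as follows: the unnormalized model is normalized only over the observed value and a small contrast set, rather than over all assignments. This is a toy formulation under stated assumptions, not the thesis's implementation; CCG additionally chooses the contrast set by iteratively adding high-scoring values.

```python
import math

def contrastive_objective(score, observed, contrast_set):
    # score(y): unnormalized log-score of assignment y under the model.
    # Normalize only over {observed} plus the contrast set instead of the
    # full (intractable) space; maximizing this enforces probability-ratio
    # constraints between the observed value and the compared values.
    values = [observed] + list(contrast_set)
    log_norm = math.log(sum(math.exp(score(y)) for y in values))
    return score(observed) - log_norm
```

Raising the observed value's score relative to the contrast set raises the objective, exactly as full maximum likelihood would if the contrast set were the whole space.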
Advances in neural information processing systems 21 : 22nd Annual Conference on Neural Information Processing Systems 2008; December 8-10, 2008, Vancouver, B.C., Canada; [proceedings]
(Book)
1 edition published in 2009 in English and held by 1 WorldCat member library worldwide
Accelerating chemical similarity search using GPUs and metric embeddings
by Imran Saeedul Haque
1 edition published in 2011 in English and held by 1 WorldCat member library worldwide
Fifteen years ago, the advent of modern high-throughput sequencing revolutionized computational genetics with a flood of data. Today, high-throughput biochemical assays promise to make biochemistry the next data-rich domain for machine learning. However, existing computational methods, built for small analyses of about 1,000 molecules, do not scale to emerging multimillion-molecule datasets. For many algorithms, pairwise similarity comparisons between molecules are a critical bottleneck, presenting a 1,000x to 1,000,000x scaling barrier. In this dissertation, I describe the design of SIML and PAPER, our GPU implementations of 2D and 3D chemical similarities, as well as SCISSORS, our metric embedding algorithm. On a model problem of interest, combining these techniques allows up to 274,000x speedup in time and up to 2.8 million-fold reduction in space while retaining excellent accuracy. I further discuss how these high-speed techniques have allowed insight into chemical shape similarity and the behavior of machine learning kernel methods in the presence of noise.
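For concreteness, 2D chemical similarity is commonly defined as the Tanimoto coefficient between molecular fingerprints. The sketch below is plain Python over hypothetical fingerprint bit sets, not the GPU kernels (SIML, PAPER) the dissertation describes.

```python
def tanimoto(fp_a, fp_b):
    # fp_a, fp_b: sets of "on" feature bits in two molecular fingerprints.
    # Tanimoto (Jaccard) similarity: |A intersect B| / |A union B|, with the
    # convention that two empty fingerprints count as identical.
    if not fp_a and not fp_b:
        return 1.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

# tanimoto({1, 2, 3}, {2, 3, 4}) == 0.5
```

Each pairwise comparison is cheap; the scaling barrier in the text comes from the quadratic number of such comparisons across a multimillion-molecule dataset.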
From knowledge to belief
by Stanford University
(Book)
1 edition published in 1994 in English and held by 1 WorldCat member library worldwide
We use techniques from finite model theory to analyze the computational aspects of random worlds. The problem of computing degrees of belief is undecidable in general. However, for unary knowledge bases, a tight connection to the principle of maximum entropy often allows us to compute degrees of belief more efficiently.
Associated Subjects
Algorithms, Artificial intelligence, Automobile driving--Steering--Automation, Automobiles--Automatic control, Computer-assisted instruction, Computer vision, Education, Educational technology, Equality, Game theory, Graphical modeling (Statistics), Inference, Instructional systems, Internet in education, Knowledge representation (Information theory), Maximum entropy method, Schools, Soft computing, Technology, Uncertainty (Information theory)

Alternative Names
Koller, Daphne