WorldCat Identities

Koller, Daphne

Overview
Works: 78 works in 98 publications in 1 language and 616 library holdings
Genres: Conference proceedings 
Roles: Thesis advisor
Classifications: QA279.5, 519.5420285
Publication Timeline
Key: publications about Daphne Koller; publications by Daphne Koller
Most widely held works by Daphne Koller
Probabilistic graphical models : principles and techniques by Daphne Koller ( Book )
11 editions published between 2009 and 2011 in English and held by 392 WorldCat member libraries worldwide
Uncertainty in artificial intelligence : proceedings of the seventeenth conference (2001), August 2-5, 2001, University of Washington, Seattle, Washington by Jack Breese ( Book )
3 editions published in 2001 in English and held by 34 WorldCat member libraries worldwide
Proceedings of the annual Conference on Uncertainty in Artificial Intelligence, available for 1991-present. Since 1985, the Conference on Uncertainty in Artificial Intelligence (UAI) has been the primary international forum for exchanging results on the use of principled uncertain-reasoning methods in intelligent systems. The UAI Proceedings have become a basic reference for researchers and practitioners who want to know about both theoretical advances and the latest applied developments in the field
From knowledge to belief by Daphne Koller ( Book )
5 editions published between 1993 and 1994 in English and held by 7 WorldCat member libraries worldwide
Adaptive probabilistic networks by Stuart J Russell ( Book )
2 editions published in 1994 in English and held by 7 WorldCat member libraries worldwide
Representation dependence in probabilistic inference by Joseph Y Halpern ( Book )
1 edition published in 1995 in English and held by 5 WorldCat member libraries worldwide
Abstract: "Non-deductive reasoning systems are often representation dependent: representing the same situation in two different ways may cause such a system to return two different answers. This is generally viewed as a significant problem. For example, the principle of maximum entropy has been subjected to much criticism due to its representation dependence. There has, however, been almost no work investigating representation dependence. In this paper, we formalize this notion and show that it is not a problem specific to maximum entropy. In fact, we show that any probabilistic inference system that sanctions certain important patterns of reasoning, such as minimal default assumption of independence, must suffer from representation dependence. We then show that invariance under a restricted class of representation changes can form a reasonable compromise between representation dependence and other desiderata."
Asymptotic conditional probabilities : the unary case by Adam Grove ( Book )
2 editions published in 1993 in English and held by 4 WorldCat member libraries worldwide
Abstract: "Motivated by problems that arise in computing degrees of belief, we consider the problem of computing asymptotic conditional probabilities for first-order sentences. Given first-order sentences [symbol] and [theta], we consider the structures with domain [1 ..., N] that satisfy [theta], and compute the fraction of them in which [symbol] is true. We then consider what happens to this fraction as N gets large. This extends the work on 0-1 laws that considers the limiting probability of first-order sentences, by considering asymptotic conditional probabilities. As shown in [Lio69, GHK93], in the general case, asymptotic conditional probabilities do not always exist, and most questions relating to this issue are highly undecidable. These results, however, all depend on the assumption that [theta] can use a nonunary predicate symbol. Liogon'kiì† [Lio69] shows that if we condition on formulas [theta] involving unary predicate symbols only (but no equality or constant symbols), then the asymptotic conditional probability does exist and can be effectively computed. This is the case even if we place no corresponding restrictions on [symbol]. We extend this result here to the case where [theta] involves equality and constants. We show that the complexity of computing the limit depends on various factors, such as the depth of quantifier nesting, or whether the vocabulary is finite or infinite. We completely characterize the complexity of the problem in the different cases, and show related results for the associated approximation problem."
Alignment of cryo-electron tomography images using Markov Random Fields by Fernando Amat Gil ( )
1 edition published in 2010 in English and held by 2 WorldCat member libraries worldwide
Cryo-electron tomography (CET) is the only imaging technology capable of visualizing the 3D organization of intact bacterial whole cells at nanometer resolution in situ. However, quantitative image analysis of CET datasets is extremely challenging due to very low signal-to-noise ratio (well below 0 dB), missing data, and the heterogeneity of biological structures. In this thesis, we present a probabilistic framework to align CET images in order to improve resolution and create structural models of different biological structures. The alignment problem of 2D and 3D CET images is cast as a Markov Random Field (MRF), where each node in the graph represents a landmark in the image. We connect pairs of nodes based on local spatial correlations and we find the "best" correspondence between the two graphs. In this correspondence problem, the "best" solution maximizes the probability score in the MRF. This probability is the product of singleton potentials that measure image similarity between nodes and pairwise potentials that measure deformations along edges. Well-known approximate inference algorithms such as Loopy Belief Propagation (LBP) are used to obtain the "best" solution. We present results in two specific applications: automatic alignment of tilt series using fiducial markers, and subtomogram alignment. In the first case we present RAPTOR, which is being used in several labs to enable true high-throughput tomography. In the second case our approach is able to reach the contrast transfer function limit in low-SNR samples from whole cells, as well as revealing atomic-resolution details invisible to the naked eye through nanogold labeling
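The scoring model the abstract describes (singleton potentials times pairwise potentials, maximized by approximate inference) has the standard pairwise-MRF form; a schematic restatement, with ψ denoting the potentials:

```latex
P(x) \;\propto\; \prod_{i \in V} \psi_i(x_i) \prod_{(i,j) \in E} \psi_{ij}(x_i, x_j),
\qquad
x^{*} = \arg\max_{x} P(x).
% x_i: the correspondence assigned to landmark node i;
% \psi_i: image similarity at node i; \psi_{ij}: deformation cost on edge (i,j).
% Loopy Belief Propagation approximates the maximizing assignment x*.
```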
On the complexity of two-person zero-sum games in extensive form by Daphne Koller ( Book )
1 edition published in 1990 in English and held by 2 WorldCat member libraries worldwide
Random worlds and maximum entropy by Adam Grove ( Book )
1 edition published in 1994 in English and held by 2 WorldCat member libraries worldwide
Abstract: "Given a knowledge base KB containing first-order and statistical facts, we consider a principled method, called the random- worlds method, for computing a degree of belief that some formula [symbol] holds given KB. If the domain has size N, then we can consider all possible worlds, or first-order models, with domain [1 ..., N] that satisfy KB, and compute the fraction of them in which [symbol] is true. We define the degree of belief to be the asymptotic value of this fraction as N grows large. We show that when the vocabulary underlying [symbol] and KB uses constants and unary predicates only, we can naturally associate an entropy with each world. As N grows larger, there are many more worlds with higher entropy. Therefore, we can use a maximum entropy computation to compute the degree of belief. This result is in a similar spirit to previous work in physics (e.g., [Jay78]) and artificial intelligence (e.g., [PV89, Sha89]), but is far more general. Of equal interest to the actual results themselves are the numerous subtle issues we must address when formulating it. For languages with binary predicate symbols, the random-worlds method continues to make sense, but there no longer seems to be any useful connection to maximum entropy. It is difficult to see how maximum entropy can be applied at all. In fact, results from [GHK93a] show that even generalizations of maximum entropy are unlikely to be useful. These observations suggest unexpected limitations to the applicability of maximum entropy methods."
A machine vision based system for guiding lane-change maneuvers by Jitendra Malik ( Book )
2 editions published in 1995 in English and held by 2 WorldCat member libraries worldwide
Efficient computation of equilibria for extensive two-person games by Daphne Koller ( Book )
1 edition published in 1994 in English and held by 2 WorldCat member libraries worldwide
Abstract: "The Nash equilibria of a two-person, non-zero-sum game are the solutions of a certain linear complementarity problem (LCP). In order to use this for solving a game in extensive form, it is first necessary to convert the game to a strategic description such as the normal form. The classical normal form, however, is often exponentially large in the size of the game tree. Hence, finding equilibria of extensive games typically implies exponential blowup in terms of both time and space. In this paper we suggest an alternative approach, based on the sequence form of the game. For a game with perfect recall, the sequence form is a linear sized strategic description, which results in an LCP of linear size. For this LCP, we show that an equilibrium can be found efficiently by Lemke's algorithm, a generalization of the Lemke-Howson method."
Accelerating chemical similarity search using GPUs and metric embeddings by Imran Saeedul Haque ( )
1 edition published in 2011 in English and held by 1 WorldCat member library worldwide
Fifteen years ago, the advent of modern high-throughput sequencing revolutionized computational genetics with a flood of data. Today, high-throughput biochemical assays promise to make biochemistry the next data-rich domain for machine learning. However, existing computational methods, built for small analyses of about 1,000 molecules, do not scale to emerging multi-million molecule datasets. For many algorithms, pairwise similarity comparisons between molecules are a critical bottleneck, presenting a 1,000x-1,000,000x scaling barrier. In this dissertation, I describe the design of SIML and PAPER, our GPU implementations of 2D and 3D chemical similarities, as well as SCISSORS, our metric embedding algorithm. On a model problem of interest, combining these techniques allows up to 274,000x speedup in time and up to 2.8 million-fold reduction in space while retaining excellent accuracy. I further discuss how these high-speed techniques have allowed insight into chemical shape similarity and the behavior of machine learning kernel methods in the presence of noise
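The abstract does not define its similarity measures, but a common 2D chemical similarity is the Tanimoto coefficient over binary fingerprints; below is a minimal sketch under that assumption. The representation here is illustrative, not SIML's actual GPU interface:

```python
# Minimal sketch: Tanimoto similarity over binary fingerprints packed
# into Python ints. Assumption: 2D similarity = Tanimoto coefficient;
# the dissertation's SIML kernel is a GPU implementation whose actual
# data layout may differ.

def tanimoto(a: int, b: int) -> float:
    """Shared set bits divided by total set bits of two fingerprints."""
    inter = bin(a & b).count("1")  # bits set in both fingerprints
    union = bin(a | b).count("1")  # bits set in either fingerprint
    return inter / union if union else 1.0

# Two toy 8-bit fingerprints sharing 3 of 5 distinct set bits:
print(tanimoto(0b10110010, 0b10100110))  # 0.6
```

Comparing all pairs of n molecules costs on the order of n² such evaluations, which is the scaling barrier the dissertation attacks with GPUs and metric embeddings.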
Restricted Bayes Optimal Classifiers ( )
1 edition published in 2000 in an undetermined language and held by 1 WorldCat member library worldwide
Probabilistic models for region-based scene understanding by Stephen Gould ( )
1 edition published in 2010 in English and held by 1 WorldCat member library worldwide
One of the long-term goals of computer vision is to be able to understand the world through visual images. This daunting task involves reasoning simultaneously about objects, regions and 3D geometry. Traditionally, computer vision research has tackled these tasks in isolation: independent detectors for finding objects, image segmentation algorithms for defining regions, and specialized monocular depth perception methods for reconstructing geometry. Unfortunately, this isolated reasoning can lead to inconsistent interpretations of the scene. In this thesis we develop a unified probabilistic model that avoids these inconsistencies. We introduce a region-based representation of the scene in which pixels are grouped together to form consistent regions. Each region is then annotated with a semantic and geometric class label. Next, we extend our representation to include the concept of objects, which can comprise multiple regions. We then show how our region-based representation can be used to interpret the 3D structure of the scene. Importantly, we model the scene using a coherent probabilistic model over random variables defined by our region-based representation. This enforces consistency between tasks and allows contextual dependencies to be modeled across tasks, e.g., that sky should be above the horizon, and ground below it. Finally, we present an efficient algorithm for performing inference in our model, and demonstrate state-of-the-art results on a number of standard tasks
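One plausible shape for the "coherent probabilistic model" the abstract describes is an energy-based model over region variables; this is a hedged sketch, and the variable names are illustrative rather than the thesis's own notation:

```latex
P(S, G \mid I) \;\propto\;
  \exp\Bigl\{ -\sum_{r} E_{\text{region}}(S_r, G_r; I)
              \;-\; \sum_{r, r'} E_{\text{pair}}(S_r, G_r, S_{r'}, G_{r'}) \Bigr\}.
% S_r, G_r: semantic and geometric labels of region r in image I.
% The pairwise terms encode contextual constraints such as
% "sky above the horizon, ground below it".
```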
Real-time human pose tracking from range data by Hariraam Varun Ganapathi ( )
1 edition published in 2014 in English and held by 1 WorldCat member library worldwide
Real-time human pose tracking enables a variety of applications, including intuitive human-machine interaction, smart surveillance, character animation, virtual reality, gaming, and physical therapy. Traditional approaches have required multiple cameras and special suits with markers, rendering these techniques impractical for most consumer applications. Consequently, recent research has focused on marker-less human pose estimation using only a single camera. We approach this problem with a single consumer-grade range camera, which is an active sensor that measures distance at each pixel. While direct distance measurements greatly assist the reconstruction problem, these cameras are low resolution and noisy compared to traditional color cameras, and self-occlusion causes ambiguities when attempting to reconstruct pose from a single view. This thesis presents a new real-time human pose tracking algorithm based on a generative model of the range camera measurement process. Bayes' theorem is applied to find the maximum a posteriori estimate of the pose given the measured range image and priors on human shape. The resulting non-convex optimization problem is difficult to solve, often containing many plateaus and multiple local maxima. We address these difficulties with an algorithm comprising an outer loop that proposes new poses based on a prior on motion and part detections from the observed image, and an inner loop that refines the pose to better match the observations. The latter refinement itself decomposes into two alternating phases; the first establishes correspondences between observations and the surface of the human model, while the second updates the pose given the correspondences via a constrained continuous optimization. For quantitative evaluation, a large dataset was collected using two types of range cameras and a traditional marker-based motion capture system. The presented algorithm is able to accurately track complicated full-body movements involving significant self-occlusion and fast motions on humans of different sizes and shapes without any explicit initialization. The algorithm runs at more than 30 frames per second on a consumer computer using about half of one processor core
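In symbols, the estimate described in the abstract is the maximum a posteriori pose given the range image; a restatement, with z the measured range image and x the pose:

```latex
\hat{x} \;=\; \arg\max_{x}\; p(x \mid z)
        \;=\; \arg\max_{x}\; p(z \mid x)\, p(x).
% p(z | x): generative model of the range camera measurement process;
% p(x): priors on human shape and motion. The objective is non-convex,
% hence the outer proposal loop and inner refinement described above.
```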
Simultaneous Mapping and Localization with Sparse Extended Information Filters: Theory and Initial Results (revised) by Sebastian Thrun ( )
1 edition published in 2002 in English and held by 1 WorldCat member library worldwide
This paper describes a scalable algorithm for the simultaneous localization and mapping (SLAM) problem. SLAM is the problem of determining the location of environmental features with a roving robot. Many of today's popular techniques are based on extended Kalman filters (EKFs), which require update time quadratic in the number of features in the map. This paper develops the notion of sparse extended information filters (SEIFs) as a new method for solving the SLAM problem. SEIFs exploit structure inherent in the SLAM problem, representing maps through local, Web-like networks of features. By doing so, updates can be performed in constant time, irrespective of the number of features in the map. This paper presents several original constant-time results of SEIFs, and provides simulation results that show the high accuracy of the resulting maps in comparison to the computationally more cumbersome EKF solution
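As standard background for the abstract's "information filter": such a filter carries the Gaussian posterior in its canonical parameters rather than its mean and covariance, and SEIF's contribution is keeping the information matrix sparse:

```latex
\Omega = \Sigma^{-1}, \qquad \xi = \Sigma^{-1} \mu.
% A Gaussian N(mu, Sigma) over robot pose and map features, re-expressed
% via information matrix Omega and information vector xi. Off-diagonal
% entries of Omega are links between features; SEIFs keep all but a
% bounded number of them (near) zero, so measurement and motion updates
% touch only a constant-size neighborhood, independent of map size.
```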
From knowledge to belief by Stanford University ( Book )
1 edition published in 1994 in English and held by 1 WorldCat member library worldwide
The random-worlds method induces degrees of belief from very rich knowledge bases, expressed in a language that augments first-order logic with statistical statements and default rules (interpreted as qualitative statistics). The method is based on the principle of indifference, treating all possible worlds as equally likely. It naturally derives important patterns of reasoning such as specificity, inheritance, indifference to irrelevant information, and a default assumption of independence. Its expressive power and intuitive semantics allow it to deal well with examples that are too complex for most other reasoning systems
Effective Bayesian Inference for Stochastic Programs ( )
1 edition published in 1997 in an undetermined language and held by 1 WorldCat member library worldwide
P-CLASSIC: A Tractable Probabilistic Description Logic ( )
1 edition published in 1997 in an undetermined language and held by 1 WorldCat member library worldwide
 
Audience level: 0.71 (from 0.47 for Representation dependence in probabilistic inference to 0.98 for A machine vision based system for guiding lane-change maneuvers)
Alternative Names
Koller, Daphne
Languages
English (38)