WorldCat Identities

Ng, Andrew Y. 1976-

Overview
Works: 29 works in 33 publications in 1 language and 44 library holdings
Genres: Academic theses; Conference papers and proceedings
Roles: Thesis advisor, Author
Classifications: HD9490.5, 629.892
Most widely held works by Andrew Y Ng
Shaping and policy search in reinforcement learning by Andrew Y Ng

3 editions published in 2003 in English and held by 9 WorldCat member libraries worldwide

To make reinforcement learning algorithms run in a reasonable amount of time, it is frequently necessary to use a well-chosen reward function that gives appropriate "hints" to the learning algorithm. But the selection of these hints--called shaping rewards--often entails significant trial and error, and poorly chosen shaping rewards often change the problem in unanticipated ways that cause poor solutions to be learned. In this dissertation, we give a theory of reward shaping that shows how these problems can be eliminated. This theory further gives guidelines for selecting good shaping rewards that in practice give significant speedups of the learning process. We also show that shaping can allow us to use "myopic" learning algorithms and still do well
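The shaping theory summarized above is best known through potential-based shaping, where the shaping term F(s, s') = γΦ(s') − Φ(s) provably leaves the optimal policy unchanged. A minimal sketch of that construction; the chain environment, potential function, and hyperparameters are illustrative assumptions, not the dissertation's experiments:

```python
import random

random.seed(0)
GAMMA = 0.9
N = 10  # chain of states 0..9; reaching state 9 ends the episode with reward 1

def phi(s):
    # Heuristic potential: states nearer the goal get higher potential.
    return s / (N - 1)

def shaping(s, s_next):
    # Potential-based shaping term F(s, s') = gamma * phi(s') - phi(s);
    # adding F to the reward preserves the optimal policy.
    return GAMMA * phi(s_next) - phi(s)

def q_learning(episodes=500, alpha=0.5, eps=0.1):
    q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}
    for _ in range(episodes):
        s = 0
        while s != N - 1:
            if random.random() < eps:
                a = random.choice((-1, 1))
            else:
                a = max((-1, 1), key=lambda a: q[(s, a)])
            s_next = min(max(s + a, 0), N - 1)
            r = (1.0 if s_next == N - 1 else 0.0) + shaping(s, s_next)
            target = r + GAMMA * max(q[(s_next, b)] for b in (-1, 1))
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s_next
    return q

q = q_learning()
policy = [max((-1, 1), key=lambda a: q[(s, a)]) for s in range(N - 1)]
# The shaped rewards provide a dense learning signal; the greedy policy
# moves toward the goal from every state.
```

The shaped reward is nonzero on every step, which is what lets even "myopic" updates make progress long before the terminal reward is ever seen.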
Uncertainty in artificial intelligence : proceedings of the Twenty-fifth Conference (2009), June 18-21, 2009, Montreal, Quebec by Conference on Uncertainty in Artificial Intelligence( Book )

2 editions published in 2009 in English and held by 4 WorldCat member libraries worldwide

Low-dimensional neural features reflect central features of muscle activation by Zuley Rivera Alvidrez

1 edition published in 2011 in English and held by 2 WorldCat member libraries worldwide

Any time we move, our brains solve the difficult problem of translating our motor intentions to muscle commands. Understanding how this computation takes place, and in particular, what role the motor cortex plays in movement generation, has been a central issue in systems neuroscience that remains unresolved. In this thesis, we took an unconventional approach to the analysis of cortical neural activity and its relationship to executed movements. We used dimensionality reduction to extract the salient patterns of neural population activity, and related those to the muscle activity patterns generated during arm reaches to a grid of targets. We found that salient neural activity patterns appeared to tightly reflect muscle activity patterns with a biologically-plausible lag. We also applied our analyses to movements that were planned before being executed, and found that a muscle-framework view of the cortical activity was consistent with previously-described predictions of movement kinematics based on the state of the cortical population activity. Overall, our results elucidate the remarkable simplicity of the motor-cortical activity at the population level, despite the complexity and heterogeneity of individual cells' activities
Simultaneous mapping and localization with sparse extended information filters : theory and initial results by Sebastian Thrun( Book )

2 editions published in 2002 in English and held by 2 WorldCat member libraries worldwide

This paper describes a scalable algorithm for the simultaneous localization and mapping (SLAM) problem. SLAM is the problem of determining the location of environmental features with a roving robot. Many of today's popular techniques are based on extended Kalman filters (EKFs), which require update time quadratic in the number of features in the map. This paper develops the notion of sparse extended information filters (SEIFs) as a new method for solving the SLAM problem. SEIFs exploit structure inherent in the SLAM problem, representing maps through local, Web-like networks of features. By doing so, updates can be performed in constant time, irrespective of the number of features in the map. This paper presents several original constant-time results of SEIFs, and provides simulation results that show the high accuracy of the resulting maps in comparison to the computationally more cumbersome EKF solution
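The constant-time claim comes from working in the information (canonical) form, where incorporating a measurement adds to only the entries linking the robot and the observed feature. A minimal linear-Gaussian sketch of that property; the 1-D relative-measurement model and sparse storage are illustrative assumptions, not the paper's full nonlinear filter:

```python
from collections import defaultdict

class InfoFilter1D:
    """Toy 1-D information filter over state [robot, landmark_1..landmark_n]."""

    def __init__(self, n_landmarks):
        self.n = n_landmarks + 1
        self.omega = defaultdict(float)  # sparse information matrix
        self.xi = defaultdict(float)     # information vector
        self.omega[(0, 0)] = 1.0         # weak prior on the robot pose

    def measure(self, j, z, noise_var=0.25):
        # Relative observation z = x_landmark_j - x_robot + noise, so the
        # Jacobian H has only two nonzero entries. The information-form
        # update Omega += H^T R^-1 H, xi += H^T R^-1 z therefore touches a
        # constant number of entries, regardless of map size.
        w = 1.0 / noise_var
        r, l = 0, j
        self.omega[(r, r)] += w
        self.omega[(l, l)] += w
        self.omega[(r, l)] -= w
        self.omega[(l, r)] -= w
        self.xi[r] -= w * z
        self.xi[l] += w * z

f = InfoFilter1D(n_landmarks=1000)
f.measure(j=5, z=2.0)
# Only four matrix entries and two vector entries are touched, even
# though the map holds 1000 landmarks.
```

The same locality is what EKF SLAM loses: inverting back to covariance form densifies the matrix, which is why the EKF update is quadratic while the information-form update is not.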
Towards clinically viable neural prosthetic systems by Vikash Gilja

1 edition published in 2010 in English and held by 2 WorldCat member libraries worldwide

By restoring the ability to move and communicate with the world, brain machine interfaces (BMIs) offer the potential to improve quality of life for people suffering from spinal cord injury, stroke, or neurodegenerative diseases, such as amyotrophic lateral sclerosis (ALS). BMIs attempt to translate measured neural signals into the user's intentions and, subsequently, control a computer or actuator. Recently, compelling examples of intra-cortical BMIs have been demonstrated in tetraplegic patients. Although these studies provide a powerful proof-of-concept, clinical viability is impeded by limited performance and robustness over short (hours) and long (days) timescales. We address performance and robustness over short time periods by approaching BMIs as a systems level design problem. We identify key components of the system and design a novel BMI from a feedback control perspective. In this perspective, the brain is the controller of a new plant, defined by the BMI, and the actions of this BMI are witnessed by the user. This simple perspective leads to design advances that result in significant qualitative and quantitative performance improvements. Through online closed loop experiments, we show that this BMI is capable of producing continuous endpoint movements that approach native limb performance and can operate continuously for hours. We also demonstrate how this system can be operated across days by a bootstrap procedure with the potential to eliminate an explicit recalibration step. To examine the use of BMIs over longer timescales, we develop new electrophysiology tools that allow for continuous multi-day neural recording. Through application of this technology, we measure the signal acquisition stability (and instability) of the electrode array technology used in current BMI clinical trials. We also demonstrate how these systems can be used to study BMI decoding over longer time periods. 
In this demonstration, we present a simple methodology for switching BMI systems on and off at appropriate times. The algorithms and methods demonstrated can be run with existing low power application specific integrated circuits (ASICs), with a defined path towards the development of a fully implantable neural interface system. We believe that these advances are a step towards clinical viability and, with careful user interface design, neural prosthetic systems can be translated into real world solutions
Computational recognition of protein-coding genes using multiple genomic alignments by Samuel Solomon Gross

1 edition published in 2010 in English and held by 2 WorldCat member libraries worldwide

In this thesis, I describe three main contributions I have made toward creating more accurate systems for the computational recognition of protein-coding genes. First, I present N-SCAN, a gene predictor based on a hidden Markov model that uses Bayesian networks to model multiple alignments. I also describe CONTRAST, a discriminative gene predictor based on a conditional random field and a set of support vector machines for recognizing coding region boundaries. Both N-SCAN and CONTRAST represented substantial improvements over the state-of-the-art at the time they were introduced. Additionally, I give an algorithm for training conditional random fields that maximizes an approximation to labelwise accuracy, as opposed to the usual maximum likelihood approach. This algorithm proved key to CONTRAST's success
Inducing event schemas and their participants from unlabeled text by Nathanael William Chambers

1 edition published in 2011 in English and held by 1 WorldCat member library worldwide

The majority of information on the Internet is expressed in written text. Understanding and extracting this information is crucial to building intelligent systems that can organize this knowledge, but most algorithms focus on learning atomic facts and relations. For instance, we can reliably extract facts like "Stanford is a University" and "Professors teach Science" by observing redundant word patterns across a corpus. However, these facts do not capture richer knowledge like the way detonating a bomb is related to destroying a building, or that the perpetrator who was convicted must have been arrested. A structured model of these events and entities is needed to understand language across many genres, including news, blogs, and even social media. This dissertation describes a new approach to knowledge acquisition and extraction that learns rich structures of events (e.g., plant, detonate, destroy) and participants (e.g., suspect, target, victim) over a large corpus of news articles, beginning from scratch and without human involvement. As opposed to early event models in Natural Language Processing (NLP) such as scripts and frames, modern statistical approaches and advances in NLP now enable new representations and large-scale learning over many domains. This dissertation begins by describing a new model of events and entities called Narrative Event Schemas. A Narrative Event Schema is a collection of events that occur together in the real world, linked by the typical entities involved. I describe the representation itself, followed by a statistical learning algorithm that observes chains of entities repeatedly connecting the same sets of events within documents. The learning process extracts thousands of verbs within schemas from 14 years of newspaper data. I present novel contributions in the field of temporal ordering to build classifiers that order the events and infer likely schema orderings. 
I then present several new evaluations for the extracted knowledge. Finally, I apply Narrative Event Schemas to the field of Information Extraction, learning templates of events with sets of semantic roles. Most Information Extraction approaches assume foreknowledge of the domain's templates, but I instead start from scratch and learn schemas as templates, and then extract the entities from text as in a standard extraction task. My algorithm is the first to learn templates without human guidance, and its results approach those of supervised algorithms
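The linking step at the heart of schema learning can be sketched in a few lines: verbs are connected when they share a participating entity within a document, and pairs that co-occur this way across many documents seed a schema. The tiny hand-made "documents" of (entity, verb) tuples below are illustrative, not the 14-year newspaper corpus used in the thesis:

```python
from collections import Counter
from itertools import combinations

# Each document is a list of (entity, verb) participations.
docs = [
    [("suspect", "arrest"), ("suspect", "charge"), ("suspect", "convict")],
    [("man", "arrest"), ("man", "charge"), ("jury", "acquit")],
    [("bomber", "detonate"), ("building", "collapse"), ("bomber", "arrest")],
]

pair_counts = Counter()
for doc in docs:
    # Group verbs by the entity that participates in them.
    by_entity = {}
    for entity, verb in doc:
        by_entity.setdefault(entity, set()).add(verb)
    # Verbs sharing an entity within a document form candidate schema links.
    for verbs in by_entity.values():
        for v1, v2 in combinations(sorted(verbs), 2):
            pair_counts[(v1, v2)] += 1

# ("arrest", "charge") is linked through a shared protagonist in two
# documents, making it a strong candidate for the same schema.
```

A real system would score these counts (e.g., with pointwise mutual information) rather than use them raw, but the protagonist-based linking shown here is the core observation.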
Probabilistic models for region-based scene understanding by Stephen Gould

1 edition published in 2010 in English and held by 1 WorldCat member library worldwide

One of the long-term goals of computer vision is to be able to understand the world through visual images. This daunting task involves reasoning simultaneously about objects, regions and 3D geometry. Traditionally, computer vision research has tackled these tasks in isolation: independent detectors for finding objects, image segmentation algorithms for defining regions, and specialized monocular depth perception methods for reconstructing geometry. Unfortunately, this isolated reasoning can lead to inconsistent interpretations of the scene. In this thesis we develop a unified probabilistic model that avoids these inconsistencies. We introduce a region-based representation of the scene in which pixels are grouped together to form consistent regions. Each region is then annotated with a semantic and geometric class label. Next, we extend our representation to include the concept of objects, which can be composed of multiple regions. We then show how our region-based representation can be used to interpret the 3D structure of the scene. Importantly, we model the scene using a coherent probabilistic model over random variables defined by our region-based representation. This enforces consistency between tasks and allows contextual dependencies to be modeled across tasks, e.g., that sky should be above the horizon, and ground below it. Finally, we present an efficient algorithm for performing inference in our model, and demonstrate state-of-the-art results on a number of standard tasks
Link Analysis, Eigenvectors and Stability

1 edition published in 2001 in Undetermined and held by 1 WorldCat member library worldwide

Demystifying unsupervised feature learning by Adam Paul Coates

1 edition published in 2012 in English and held by 1 WorldCat member library worldwide

Machine learning is a key component of state-of-the-art systems in many application domains. Applied to many kinds of raw data, however, most learning algorithms are unable to make good predictions. In order to succeed, most learning algorithms are applied instead to "features" that represent higher-level concepts extracted from the raw data. These features, developed by expert practitioners in each field, encode important prior knowledge about the task that the learning algorithm would be unable to discover on its own from (often limited) labeled training examples. Unfortunately, engineering good feature representations for new applications is extremely difficult. For the most challenging applications in AI, like computer vision, the search for good features and higher-level image representations is vast and ongoing. In this work we study a class of algorithms that attempt to learn feature representations automatically from unlabeled data that is often easy to obtain in large quantities. Though many such algorithms have been proposed and have achieved high marks on benchmark tasks, it has not been fully understood what causes some algorithms to perform well and others to perform poorly. It has thus been difficult to identify any key directions in which the algorithms might be improved in order to significantly advance the state of the art. To address this issue, we will present results from an in-depth scientific study of a variety of factors that can affect the performance of feature-learning algorithms. Through a detailed analysis, a surprising picture emerges: we find that many schemes succeed or fail as a result of a few (easily overlooked) factors that are often orthogonal to the particular learning methods involved. In fact, by focusing solely on these factors it is possible to achieve state-of-the-art performance on common benchmarks using quite simple algorithms. 
More importantly, however, a main contribution of this line of research has been to identify very simple yet highly scalable feature learning methods that, by virtue of focusing on the most critical properties identified in our study, are highly successful in many settings: the proposed algorithms consistently achieve top performance on benchmarks, have been successfully deployed in realistic computer vision applications, and are even capable of discovering high-level concepts like object classes without any supervision
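One concrete instance of the "quite simple algorithms" finding in this line of work is a pipeline of k-means dictionary learning followed by a soft "triangle" encoding. A toy sketch; the 2-D data, centroid count, and parameters are illustrative assumptions, not the benchmark setup:

```python
import random

random.seed(0)

def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def mean(pts):
    return [sum(xs) / len(pts) for xs in zip(*pts)]

def kmeans(points, k, iters=25):
    # Plain Lloyd's algorithm acts as the unsupervised dictionary learner.
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [mean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

def triangle_features(x, centroids):
    # "Triangle" encoding: f_k(x) = max(0, mu(z) - z_k), where z_k is the
    # distance to centroid k and mu(z) is the mean distance. Centroids
    # farther than average get a hard zero, yielding a sparse code.
    z = [dist(x, c) for c in centroids]
    mu = sum(z) / len(z)
    return [max(0.0, mu - zk) for zk in z]

# Two illustrative blobs of unlabeled 2-D "patches".
data = [[random.gauss(0, 0.3), random.gauss(0, 0.3)] for _ in range(50)] + \
       [[random.gauss(3, 0.3), random.gauss(3, 0.3)] for _ in range(50)]
centroids = kmeans(data, k=4)
code = triangle_features([0.1, -0.2], centroids)
```

The encoding, not the dictionary learner, carries much of the weight here: sparsity and the soft threshold are exactly the kind of easily overlooked factors the study examines.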
On feature selection : learning with exponentially many irrelevant features as training examples by Andrew Y Ng( Book )

1 edition published in 1998 in English and held by 1 WorldCat member library worldwide

We consider feature selection for supervised machine learning in the "wrapper" model of feature selection. This typically involves an NP-hard optimization problem that is approximated by heuristic search for a "good" feature subset. First considering the idealization where this optimization is performed exactly, we give a rigorous bound for generalization error under feature selection. The search heuristics typically used are then immediately seen as trying to achieve the error given in our bounds, and succeeding to the extent that they solve the optimization. The bound suggests that, in the presence of many "irrelevant" features, the main source of error in wrapper model feature selection is from "overfitting" hold-out or cross-validation data. This motivates a new algorithm that, again under the idealization of performing search exactly, has sample complexity (and error) that grows logarithmically in the number of "irrelevant" features - which means it can tolerate having a number of "irrelevant" features exponential in the number of training examples - and search heuristics are again seen to be directly trying to reach this bound. Experimental results on a problem using simulated data show the new algorithm having much higher tolerance to irrelevant features than the standard wrapper model. Lastly, we also discuss ramifications that sample complexity logarithmic in the number of irrelevant features might have for feature design in actual applications of learning
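The "overfitting hold-out data" failure mode is easy to reproduce: with enough irrelevant features, some noise feature will score well on a small hold-out set by pure chance. A synthetic sketch; the data generator, the 80% signal strength, and the one-feature classifier are illustrative assumptions:

```python
import random

random.seed(1)
N_HOLDOUT, N_IRRELEVANT = 30, 2000

def sample(n):
    # Feature 0 agrees with the label 80% of the time; the remaining
    # 2000 features are pure coin flips.
    ys = [random.choice((0, 1)) for _ in range(n)]
    xs = [[y if random.random() < 0.8 else 1 - y] +
          [random.choice((0, 1)) for _ in range(N_IRRELEVANT)]
          for y in ys]
    return xs, ys

def holdout_acc(feat, xs, ys):
    # Trivial one-feature classifier: predict x[feat] or its negation,
    # whichever fits the hold-out set better.
    agree = sum(x[feat] == y for x, y in zip(xs, ys)) / len(ys)
    return max(agree, 1 - agree)

xs, ys = sample(N_HOLDOUT)
true_acc = holdout_acc(0, xs, ys)
noise_best = max(holdout_acc(f, xs, ys) for f in range(1, N_IRRELEVANT + 1))
# With only 30 hold-out points, the best of 2000 noise features scores
# far above the 50% chance level, so a wrapper that trusts hold-out
# accuracy can be fooled into selecting an irrelevant feature.
```

This is the quantity the bound controls: the gap between hold-out accuracy and true accuracy grows with the log of the number of candidate features being compared.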
Meeting environment related trade challenges : a WWF perspective by Ginny Ng

1 edition published in 2003 in English and held by 1 WorldCat member library worldwide

Latent variable models for visual activity understanding by Benjamin Packer

1 edition published in 2015 in English and held by 1 WorldCat member library worldwide

One of the important goals of computer vision is to categorize and understand human actions in images and video. The ability to automatically solve this problem opens the door to a host of impactful applications such as search and retrieval, surveillance, medical research, automatic annotation, and human-computer interfaces. Approaches for action recognition have involved increasingly rich modeling with hidden variables, which are often required to capture the properties of and interactions between components in a scene that distinguish one action from another. Learning models with hidden variables accurately and efficiently can be a difficult problem that poses great computational challenges. In this work, we address two related problems: developing computational methods to accurately and efficiently learn models with latent variables from data, and constructing models of sufficient richness to solve high-level human action recognition tasks. To address the first problem, we turn to Self-Paced Learning, which is designed to avoid bad local minima while learning latent variable models. We show that we can use Self-Paced Learning in combination with data with varying levels of annotation to achieve superior levels of performance. To address the second problem, we propose a latent variable model that explicitly represents human pose, object trajectories, and the interactions between them in video sequences. Since labeling all of these components in training data is onerous and such labels are not available in test data, the model uses latent variables and takes advantage of data with varying levels of annotation. It also takes advantage of recent progress in both the quality of combined video and depth sensors and the accuracy of pose trackers that are based on these measurements. With these technologies in hand, we are able to leverage accurate pose trajectories in our model without the need for any additional annotation or human intervention. 
By combining a pose-aware action model with successful discriminative techniques in a single joint model, we are able to recognize complex, fine-grained human action involving the manipulation of objects in realistic action sequences. For our adaptation of Self-Paced Learning to diversely and noisily labeled datasets, we demonstrate that we can improve on the results of a state-of-the-art action recognition technique with still images by augmenting a labeled dataset with images gathered from the internet without any annotation. Furthermore, to showcase both the ability of our human action model to capture complex human actions and the efficacy of our learning approach, we introduce a novel Cooking Action Dataset and show that our model outperforms existing state-of-the-art techniques
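Self-Paced Learning, used above to avoid bad local minima, alternates between fitting the model on the examples it currently finds "easy" (loss below a threshold) and relaxing the threshold so harder examples enter. A toy sketch with mean estimation standing in for the latent-variable model; the data and threshold schedule are illustrative assumptions, not the thesis's structured action models:

```python
# Toy data: 20 inliers near zero plus 3 gross outliers.
inliers = [i * 0.1 for i in range(-10, 10)]
data = inliers + [100.0, 101.0, 102.0]

# Start from the contaminated estimate (plain mean over everything).
mean = sum(data) / len(data)

for threshold in (250.0, 1000.0, 5000.0):
    # Self-paced step 1: keep only the examples the current model
    # finds "easy" (squared loss below the pace threshold).
    easy = [x for x in data if (x - mean) ** 2 < threshold]
    if not easy:
        continue
    # Self-paced step 2: refit on the easy set, then the loop relaxes
    # the threshold so harder examples may enter later.
    mean = sum(easy) / len(easy)
```

The outliers never become "easy" under any threshold in the schedule, so the final estimate ignores them, which is the intuition behind using the same curriculum to absorb noisily labeled web data.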
Applying Online Search Techniques to Continuous-State Reinforcement Learning

1 edition published in 1998 in Undetermined and held by 1 WorldCat member library worldwide

Solving uncertain Markov decision processes by J. Andrew Bagnell( Book )

1 edition published in 2001 in English and held by 1 WorldCat member library worldwide

Abstract: "The authors consider the fundamental problem of finding good policies in uncertain models. It is demonstrated that although the general problem of finding the best policy with respect to the worst model is NP-hard, in the special case of a convex uncertainty set the problem is tractable. A stochastic dynamic game is proposed, and the security equilibrium solution of the game is shown to correspond to the value function under the worst model and the optimal controller. The authors demonstrate that the uncertain model approach can be used to solve a class of nearly Markovian Decision Problems, providing lower bounds on performance in stochastic models with higher-order interactions. The framework considered establishes connections between and generalizes paradigms of stochastic optimal, mini-max, and H[subscript infinity]/robust control. Applications are considered, including robustness in reinforcement learning, planning in nearly Markovian decision processes, and bounding error due to sensor discretization in noisy, continuous state-spaces."
Learning and control with inaccurate models by Jeremy Kolter

1 edition published in 2010 in English and held by 1 WorldCat member library worldwide

A key challenge in applying model-based Reinforcement Learning and optimal control methods to complex dynamical systems, such as those arising in many robotics tasks, is the difficulty of obtaining an accurate model of the system. These algorithms perform very well when they are given or can learn an accurate dynamics model, but it is often very challenging to build an accurate model by any means: effects such as hidden or incomplete state and dynamic or unknown system elements can render the modeling task very difficult. This work presents methods for dealing with such situations, by proposing algorithms that can achieve good performance on control tasks even using only inaccurate models of the system. In particular, we present three algorithmic contributions in this work that exploit inaccurate system models in different ways: we present an approximate policy gradient method, based on an approximation we call the Signed Derivative, that can perform well provided only that the sign of certain model derivative terms is known; we present a method for using a distribution over possible inaccurate models to identify a linear subspace of control policies that perform well in all models, then learn a member of this subspace on the real system; finally, we propose an algorithm for integrating previously observed trajectories with inaccurate models in a probabilistic manner, achieving better performance than is possible with either element alone. In addition to these algorithmic contributions, a central focus of this thesis is the application of these methods to challenging robotic domains, extending the state of the art. The methods have enabled a quadruped robot to cross a wide variety of challenging terrain, using a combination of slow static walking, dynamic trotting gaits, and dynamic jumping maneuvers.
We also apply these methods to a full-sized autonomous car, where they enable it to execute a "powerslide" into a narrow parking spot, one of the most challenging maneuvers demonstrated on an autonomous car. Both these domains represent highly challenging robotics tasks where the dynamical system is difficult to model, and our methods demonstrate that we can attain excellent performance on these tasks even without an accurate model of the system
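The Signed Derivative idea can be shown on a one-line plant: if only the sign of dx/du is known, gradient-style updates with a unit-magnitude "Jacobian" still drive the error down, at the cost of a tuned step size. The scalar plant and gains are illustrative assumptions:

```python
def plant(u):
    # True dynamics, unknown to the controller: x = c * u with c > 0.
    return 0.3 * u

def signed_derivative_control(target, steps=60, lr=0.5):
    # The controller assumes only sign(dx/du) = +1, never the magnitude,
    # and takes gradient steps on the squared error using that sign.
    u = 0.0
    for _ in range(steps):
        x = plant(u)
        u -= lr * (+1) * (x - target)  # unit-magnitude "derivative"
    return plant(u)

x_final = signed_derivative_control(target=1.0)
# The tracking error contracts by (1 - lr * 0.3) per step, so x_final
# converges to the target.
```

The wrong derivative magnitude only rescales the effective step size; the sign alone decides whether the update helps or hurts, which is the robustness the thesis exploits.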
Recursive deep learning for natural language processing and computer vision by Richard Socher

1 edition published in 2014 in English and held by 1 WorldCat member library worldwide

As the amount of unstructured text data that humanity produces overall and on the Internet grows, so does the need to intelligently process it and extract different types of knowledge from it. My research goal in this thesis is to develop learning models that can automatically induce representations of human language, in particular its structure and meaning, in order to solve multiple higher level language tasks. There has been great progress in delivering natural language processing technologies such as information extraction, sentiment analysis and grammatical analysis. However, solutions are often based on different machine learning models. My goal is the development of general and scalable algorithms that can jointly solve such tasks and learn the necessary intermediate representations of the linguistic units involved. Furthermore, most standard approaches make strong simplifying language assumptions and require well designed feature representations. The models in this thesis address these two shortcomings. They provide effective and general representations for sentences without assuming word order independence. Furthermore, they provide state-of-the-art performance with few or no manually designed features. The new model family introduced in this thesis is summarized under the term Recursive Deep Learning. The models in this family are variations and extensions of unsupervised and supervised recursive neural networks which generalize deep and feature learning ideas to hierarchical structures. The RNN models of this thesis obtain state-of-the-art performance on paraphrase detection, sentiment analysis, relation classification, parsing, image-sentence mapping and knowledge base completion, among other tasks
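The core recursion shared by this model family is a composition function applied bottom-up over a binary parse tree: each parent vector is p = tanh(W[c1; c2] + b) for children c1, c2. A minimal sketch; the dimension, random weights, and word vectors are illustrative stand-ins for trained parameters:

```python
import math
import random

random.seed(0)
D = 4  # toy embedding dimension

# Random composition weights stand in for trained parameters.
W = [[random.uniform(-0.5, 0.5) for _ in range(2 * D)] for _ in range(D)]
b = [0.0] * D

def compose(c1, c2):
    # Parent vector p = tanh(W [c1; c2] + b): the same function is
    # reused at every internal node of the parse tree.
    x = c1 + c2  # concatenation [c1; c2]
    return [math.tanh(sum(W[i][k] * x[k] for k in range(2 * D)) + b[i])
            for i in range(D)]

very, good, movie = ([random.uniform(-1, 1) for _ in range(D)]
                     for _ in range(3))
# Compose according to the parse ((very good) movie).
phrase = compose(compose(very, good), movie)
```

Because the parent lives in the same space as the children, the recursion handles sentences of any length without assuming word order independence, which is the property the abstract highlights.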
Structure and Dynamics of Diffusion Networks by Manuel Gomez Rodriguez

1 edition published in 2013 in English and held by 1 WorldCat member library worldwide

Diffusion of information, ideas, behaviors and diseases is ubiquitous in nature and modern society. One of the main goals of this dissertation is to shed light on the hidden underlying structure of diffusion. To this end, we developed flexible probabilistic models and inference algorithms that make minimal assumptions about the physical, biological or cognitive mechanisms responsible for diffusion. We avoid modeling the mechanisms underlying individual activations, and instead develop a data-driven approach which uses only the visible temporal traces diffusion generates. We first developed two algorithms, NetInf and MultiTree, that infer the network structure or skeleton over which diffusion takes place. However, both algorithms assume networks to be static and diffusion to occur at equal rates across different edges. We then developed NetRate, an algorithm that allows for static and dynamic networks with different rates across different edges. NetRate infers not only the network structure but also the rate of every edge. Finally, we develop a general theoretical framework of diffusion based on survival theory. Our models and algorithms provide computational lenses for understanding the structure and temporal dynamics that govern diffusion and may help towards forecasting, influencing and retarding diffusion, broadly construed. As an application, we study information propagation in the online media space. We find that the information network of media sites and blogs tends to have a core-periphery structure with a small set of core media sites that diffuse information to the rest of the Web. These sites tend to have stable circles of influence with more general news media sites acting as connectors between them. Information pathways for general recurrent topics are more stable across time than for on-going news events. Clusters of news media sites and blogs often emerge and vanish in a matter of days for on-going news events.
Major social movements and events involving the civilian population, such as Libya's civil war or Syria's uprising, lead to an increased number of information pathways among blogs as well as an overall increase in the network centrality of blogs and social media sites. Additionally, we apply our probabilistic framework of diffusion to the influence maximization problem and develop the algorithm MaxInf. Experiments on synthetic and real diffusion networks show that our algorithm outperforms other state-of-the-art algorithms by considering the temporal dynamics of diffusion
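The rate inference described above has a closed form in the simplest setting: under an exponential transmission likelihood a·exp(−a·dt), the maximum-likelihood rate for an edge observed across many cascades is the reciprocal of the mean delay. A synthetic one-edge sketch (the exponential model matches one of NetRate's parametric choices; the delays are simulated):

```python
import random

random.seed(0)
true_rate = 2.0

# Observed infection delays t_i - t_j on one edge j -> i across cascades.
delays = [random.expovariate(true_rate) for _ in range(10000)]

# For transmission density f(dt) = a * exp(-a * dt), the log-likelihood
# is n * log(a) - a * sum(dt), maximized at a = n / sum(dt).
est_rate = len(delays) / sum(delays)
```

The full algorithm solves a coupled convex problem over all candidate edges, since each observed infection could have come from any previously infected node; the one-edge case above isolates the survival-likelihood building block.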
Hardware and software systems for personal robots by Morgan Lewis Quigley

1 edition published in 2012 in English and held by 1 WorldCat member library worldwide

Robots play a major role in precision manufacturing, continually performing economically justifiable tasks with superhuman speed and reliability. In contrast, deployments of advanced personal robots in home or office environments have been stymied by difficult hardware and software challenges. Among many others, these challenges have included cost, reliability, perceptual capability, and software interoperability. This thesis will describe a series of hardware and software systems designed in response to these challenges and towards the long-range goal of creating general-purpose robots that will be useful and practical in everyday environments. First, several low-cost robot subsystems will be described, including systems for indoor localization, short-range object recognition, and inertial joint encoding, as demonstrated on prototype low-cost manipulators. Next, the design of a low-cost, highly capable robotic hand will be described in detail, which incorporates all of the aforementioned hardware and software subsystems. Finally, the thesis will describe a robot software system developed for the STanford AI Robot (STAIR) project, and its evolution into the Robot Operating System (ROS), a widely used robot software framework designed to ease collaboration between disparate research communities to create integrative, embodied AI systems
Semantic taxonomy induction from heterogenous evidence

1 edition published in 2006 in Undetermined and held by 1 WorldCat member library worldwide

 
Audience level: 0.80 (from 0.53 for Solving un ... to 0.93 for Uncertaint ...)

Alternative Names
Andrew Ng

Andrew Ng American artificial intelligence researcher

Andrew Ng Amerikaans kunstmatige intelligentie-onderzoeker

Andrew Ng Amerikanischer KI-Forscher

Andrew Ng informatico statunitense

Andrew Y. Ng

Andrew Y. Ng American artificial intelligence researcher

Andrew Y. Ng Amerikanischer KI-Forscher

Endru Ng

Ендрю Ин Американський дослідник в галузі штучного інтелекту

Ендрју Нг

Нг, Эндрю

اندرو ان‌جی

أندرو نج

吴恩达 美籍华裔人工智能科学家

Languages
English (21)