
Hinton, Geoffrey E.

Works: 48 works in 149 publications in 2 languages and 3,206 library holdings
Genres: Conference papers and proceedings  Academic theses 
Roles: Editor, Author
Classifications: BF371, 006.3
Most widely held works by Geoffrey E Hinton
Parallel models of associative memory ( Book )
32 editions published between 1981 and 2014 in English and held by 886 libraries worldwide
This update of the 1981 classic on neural networks includes new commentaries by the authors that show how the original ideas are related to subsequent developments. As researchers continue to uncover ways of applying the complex information processing abilities of neural networks, they give these models an exciting future which may well involve revolutionary developments in understanding the brain and the mind -- developments that may allow researchers to build adaptive intelligent machines. The original chapters show where the ideas came from and the new commentaries show where they are going
Unsupervised learning : foundations of neural computation by Geoffrey E Hinton( Book )
15 editions published between 1998 and 2001 in English and held by 255 libraries worldwide
This volume, on unsupervised learning algorithms, focuses on neural network learning algorithms that do not require an explicit teacher. The goal of unsupervised learning is to extract an efficient internal representation of the statistical structure implicit in the inputs. These algorithms provide insights into the development of the cerebral cortex and implicit learning in humans. They are also of interest to engineers working in areas such as computer vision and speech recognition who seek efficient representations of raw input data
Proceedings of the 1988 Connectionist Models Summer School by Connectionist Models Summer School (2nd : 1988 : Pittsburgh, Pa.)( Book )
10 editions published between 1988 and 1989 in English and held by 173 libraries worldwide
Connectionist symbol processing by Geoffrey E Hinton( Book )
19 editions published between 1989 and 1991 in English and Undetermined and held by 147 libraries worldwide
Boltzmann machines : constraint satisfaction networks that learn by Geoffrey E Hinton( Book )
2 editions published in 1984 in English and held by 10 libraries worldwide
Connectionist learning procedures by Geoffrey E Hinton( Book )
6 editions published in 1987 in English and Undetermined and held by 10 libraries worldwide
A major goal of research on networks of neuron-like processing units is to discover efficient learning procedures that allow these networks to construct complex internal representations of their environment. The learning procedures must be capable of modifying the connection strengths in such a way that internal units which are not part of the input or output come to represent important features of the task domain. Several interesting gradient-descent procedures have recently been discovered. Each connection computes the derivative, with respect to the connection strength, of a global measure of the error in the performance of the network. The strength is then adjusted in the direction that decreases the error. These relatively simple gradient-descent learning procedures work well for small tasks, and the new challenge is to find ways of improving the speed of learning so that they can be applied to larger, more realistic tasks
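The gradient-descent procedure the abstract describes can be sketched in a few lines. In this toy illustration the "network", data, and learning rate are all invented (a single layer of linear units, not anything from the report); each connection strength is adjusted against the derivative of a global error measure:

```python
import numpy as np

# Invented toy task: a linear net must recover a hidden weight vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))           # 20 input patterns, 3 input units
true_w = np.array([1.0, -2.0, 0.5])    # the "task" the net must learn
y = X @ true_w                         # desired outputs

w = np.zeros(3)                        # connection strengths
lr = 0.1
for _ in range(2000):
    err = X @ w - y                    # error on each pattern
    grad = X.T @ err / len(X)          # dE/dw for E = mean squared error / 2
    w -= lr * grad                     # step in the direction that decreases E
```

After enough steps the connection strengths settle near the hidden weights, which is the behavior the abstract attributes to these procedures on small tasks.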
A time-delay neural network architecture for speech recognition by Kevin Lang( Book )
3 editions published in 1988 in English and held by 10 libraries worldwide
The time-delay architecture was developed on a subset of the alphabetic E-set, a task which is difficult because the distinguishing sounds are low in energy and short in duration. A system can only achieve good performance on the task by learning to ignore meaningless variations in the vowel and the background noise which are major constituents of the input patterns. The time-delay network learned to isolate and analyze the short consonant releases in the patterns without being told that these events were useful, or even where they were located
Unsupervised learning of feature hierarchies by Marc'Aurelio Ranzato( file )
1 edition published in 2009 in English and held by 8 libraries worldwide
In particular, this work focuses on "deep learning" methods, a set of techniques and principles to train hierarchical models. Hierarchical models produce feature hierarchies that can capture complex non-linear dependencies among the observed data variables in a concise and efficient manner. After training, these models can be employed in real-time systems because they compute the representation by a very fast forward propagation of the input through a sequence of non-linear transformations. When the paucity of labeled data does not allow the use of traditional supervised algorithms, each layer of the hierarchy can be trained in sequence starting at the bottom by using unsupervised or semi-supervised algorithms. Once each layer has been trained, the whole system can be fine-tuned in an end-to-end fashion. We propose several unsupervised algorithms that can be used as building blocks to train such feature hierarchies. We investigate algorithms that produce sparse overcomplete representations and features that are invariant to known and learned transformations. These algorithms are designed using the Energy-Based Model framework and gradient-based optimization techniques that scale well on large datasets. The principle underlying these algorithms is to learn representations that are at the same time sparse, able to reconstruct the observation, and directly predictable by some learned mapping that can be used for fast inference at test time
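The greedy, layer-by-layer scheme the abstract describes can be sketched roughly as follows. This is only an illustration: tied-weight linear autoencoders trained by gradient descent stand in for the thesis's sparse energy-based modules, and all sizes, rates, and data are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_autoencoder_layer(H, n_hidden, lr=0.02, steps=2000):
    """Fit one tied-weight linear autoencoder layer: H @ W encodes,
    (H @ W) @ W.T reconstructs. A stand-in for the unsupervised
    module trained at each level of the hierarchy."""
    W = rng.normal(scale=0.1, size=(H.shape[1], n_hidden))
    for _ in range(steps):
        E = (H @ W) @ W.T - H                     # reconstruction error
        G = (H.T @ E @ W + E.T @ H @ W) / len(H)  # grad of 0.5*||E||^2
        W -= lr * G
    return W

X = rng.normal(size=(100, 8))     # invented unlabeled data
layers, H = [], X
for n_hidden in (6, 4):           # train each layer in sequence, bottom-up
    W = train_autoencoder_layer(H, n_hidden)
    layers.append(W)
    H = H @ W                     # its codes become the next layer's input

codes = X @ layers[0] @ layers[1]  # fast forward propagation at test time
```

Each layer learns to reconstruct its input before the next layer is trained on its codes; in the thesis, the stacked system would then be fine-tuned end-to-end.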
A distributed connectionist production system by David S Touretzky( Book )
5 editions published between 1986 and 1987 in English and Undetermined and held by 8 libraries worldwide
DCPS is a connectionist production system interpreter that uses distributed representations. As a connectionist model it consists of many simple, richly interconnected neuron-like computing units that cooperate to solve problems in parallel. One motivation for constructing DCPS was to demonstrate that connectionist models are capable of representing and using explicit rules. A second motivation was to show how coarse coding or distributed representations can be used to construct a working memory that requires far fewer units than the number of different facts that can potentially be stored. The simulation we present is intended as a detailed demonstration of the feasibility of certain ideas and should not be viewed as a full implementation of production systems. Our current model only has a few of the many interesting emergent properties that we eventually hope to demonstrate: it is damage resistant, it performs matching and variable binding by massively parallel constraint satisfaction, and the capacity of its working memory is dependent on the similarity of the items being stored
Relaxation and its role in vision by Geoffrey E Hinton( Archival Material )
7 editions published between 1977 and 1987 in English and held by 7 libraries worldwide
Special issue on connectionist symbol processing ( Book )
1 edition published in 1990 in English and held by 7 libraries worldwide
Distributed representations by Geoffrey E Hinton( Book )
4 editions published in 1984 in English and held by 6 libraries worldwide
Experiments on learning by back propagation by David C Plaut( Book )
2 editions published between 1986 and 1987 in English and held by 5 libraries worldwide
Rumelhart, Hinton and Williams (Rumelhart 86) describe a learning procedure for layered networks of deterministic, neuron-like units. This paper describes further research on the learning procedure. We start by describing the units, the way they are connected, the learning procedure, and the extension to iterative nets. We then give an example in which a network learns a set of filters that enable it to discriminate formant-like patterns in the presence of noise. The speed of learning is strongly dependent on the shape of the surface formed by the error measure in weight space. We give examples of the shape of the error surface for a typical task and illustrate how an acceleration method speeds up descent in weight space. The main drawback of the learning procedure is the way it scales as the size of the task and the network increases. We give some preliminary results on scaling and show how the magnitude of the optimal weight changes depends on the fan-in of the units. Additional results illustrate the effects on learning speed of the amount of interaction between the weights. A variation of the learning procedure that back-propagates desired state information rather than error gradients is developed and compared with the standard procedure. Finally, we discuss the relationship between our iterative networks and the analog networks described by Hopfield and Tank (Hopfield 85). The learning procedure can discover appropriate weights in their kind of network, as well as determine an optimal schedule for varying the nonlinearity of the units during a search
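The "acceleration method" referred to is momentum: a velocity is accumulated in weight space so that descent speeds up along consistently downhill directions. A minimal sketch on an invented elongated quadratic error surface (not a surface from the paper):

```python
import numpy as np

def descend(momentum, lr=0.02, steps=300):
    # Invented elongated bowl: E(w) = 0.5 * (10*w0^2 + 1*w1^2),
    # steep in one direction of weight space, shallow in the other.
    curv = np.array([10.0, 1.0])
    w = np.array([1.0, 1.0])
    v = np.zeros(2)
    for _ in range(steps):
        grad = curv * w
        v = momentum * v - lr * grad   # velocity accumulates downhill
        w = w + v
    return 0.5 * np.sum(curv * w ** 2)  # final error

plain = descend(momentum=0.0)   # ordinary gradient descent
accel = descend(momentum=0.9)   # accelerated descent
```

With the same learning rate and step budget, the accelerated run ends at a lower error because momentum damps oscillation along the steep axis while building up speed along the shallow one.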
Neural network architectures for artificial intelligence by Geoffrey E Hinton( Book )
2 editions published in 1988 in English and held by 4 libraries worldwide
[Review of:] Spies, Werner: Max Ernst : Collagen. - Verlag M. Dumont by Geoffrey E Hinton( Article )
1 edition published in 1976 in Undetermined and held by 3 libraries worldwide
Learning internal representations by error propagation by David E Rumelhart( Book )
2 editions published in 1985 in English and held by 3 libraries worldwide
This paper presents a generalization of the perceptron learning procedure for learning the correct sets of connections for arbitrary networks. The rule, called the generalized delta rule, is a simple scheme for implementing a gradient descent method for finding weights that minimize the sum squared error of the system's performance. The major theoretical contribution of the work is the procedure called error propagation, whereby the gradient can be determined by individual units of the network based only on locally available information. The major empirical contribution of the work is to show that the problem of local minima is not serious in this application of gradient descent. Keywords: learning; networks; perceptrons; adaptive systems; learning machines; back propagation
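The generalized delta rule can be sketched on a tiny two-layer sigmoid network (sizes, data, and target are invented for illustration). Each layer's deltas are computed from locally available quantities, and the resulting gradient can be checked against a numerical derivative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
x = rng.normal(size=3)            # one invented input pattern
t = np.array([1.0])               # invented target
W1 = rng.normal(size=(3, 4))      # input-to-hidden connections
W2 = rng.normal(size=(4, 1))      # hidden-to-output connections

def forward(W1, W2):
    h = sigmoid(x @ W1)
    y = sigmoid(h @ W2)
    return h, y

def error(W1, W2):
    _, y = forward(W1, W2)
    return 0.5 * np.sum((y - t) ** 2)   # sum squared error / 2

# Backward pass: the generalized delta rule.
h, y = forward(W1, W2)
delta2 = (y - t) * y * (1 - y)            # output-layer delta
dW2 = np.outer(h, delta2)
delta1 = (delta2 @ W2.T) * h * (1 - h)    # error propagated to hidden layer
dW1 = np.outer(x, delta1)

# Check one weight's gradient against a finite difference.
eps = 1e-6
Wp = W1.copy()
Wp[0, 0] += eps
num = (error(Wp, W2) - error(W1, W2)) / eps
```

The agreement between `num` and `dW1[0, 0]` illustrates the paper's point: each unit can compute its share of the gradient from its own activity and the deltas propagated back to it.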
Le rappresentazioni distribuite by Geoffrey E Hinton( Article )
in Italian and held by 3 libraries worldwide
Max Ernst: "Les hommes n'en sauront rien" by Geoffrey E Hinton( Article )
1 edition published in 1975 in English and held by 3 libraries worldwide
Connectionist models 1990 : proceedings of the 1990 Summer School held at the University of California, San Diego ( Book )
1 edition published in 1991 in English and held by 2 libraries worldwide
Neural networks for real-world problems : Sunday, July 14, 1991, 2:00 pm-6 pm by Geoffrey E Hinton( Book )
2 editions published in 1991 in English and held by 1 library worldwide
Alternative Names
Geoffrey E. Hinton
Geoffrey Hinton britischer Wissenschaftler
Geoffrey Hinton Brits informaticus
Geoffrey Hinton informaticien britannique
Geoffrey Hinton informático teórico del Reino Unido
Hinton, G. E.
Hinton, G. E. (Geoffrey E.)
Hinton, Geoffrey
Хинтон, Джеффри
جفری اورست هینتون دانشمند علوم کامپیوتر بریتانیایی
เจฟฟรีย์ ฮินตัน
제프리 힌튼
English (113)
Italian (1)