# Hughes Aircraft Corporation Artificial Intelligence Center

## Overview

| Works: | 4 works in 4 publications in 1 language and 0 library holdings |
|---|---|
| Classifications: | Q335.M41 |

## Publication Timeline


## Most widely held works by Hughes Aircraft Corporation

Using recurrent networks for dimensionality reduction by M. J. Jones (Book)

1 edition published in 1992 in English and held by 0 WorldCat member libraries worldwide

This report explores how recurrent neural networks can be exploited for learning high-dimensional mappings. Since recurrent networks are as powerful as Turing machines, an interesting question is how recurrent networks can be used to simplify the problem of learning from examples. The main problem with learning high-dimensional functions is the curse of dimensionality, which roughly states that the number of examples needed to learn a function increases exponentially with the input dimension. This report proposes a way of avoiding this problem by using a recurrent network to decompose a high-dimensional function into many lower-dimensional functions connected in a feedback loop.

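
The decomposition idea in the abstract can be sketched as follows. This is a minimal illustration, not the report's actual architecture: the update function `g`, its weights, and the one-coordinate-at-a-time feeding scheme are all assumptions made for the example.

```python
import numpy as np

def g(state, x_slice):
    """Hypothetical low-dimensional update: combines a small hidden
    state with one slice of the high-dimensional input. The 0.5
    weights are illustrative, not learned values from the report."""
    return np.tanh(0.5 * state + 0.5 * x_slice)

def recurrent_map(x):
    """Process a d-dimensional input one coordinate at a time through
    a feedback loop, so each application of g sees only a
    low-dimensional input rather than all d coordinates at once."""
    state = np.zeros(1)
    for xi in x:  # feed coordinates sequentially into the loop
        state = g(state, np.array([xi]))
    return float(state[0])

y = recurrent_map(np.array([0.2, -0.5, 0.9, 0.1]))
```

The point of the sketch is that the learned function `g` maps a 2-dimensional input to a 1-dimensional state, yet by iterating it in a feedback loop the network processes an input of arbitrary dimension.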

Extensions of a theory of networks for approximation and learning: outliers and negative examples by Federico Girosi (Book)

1 edition published in 1990 in English and held by 0 WorldCat member libraries worldwide

Extensions of a Theory of Networks for Approximation and Learning: Dimensionality Reduction and Clustering by Tomaso Poggio (Book)

1 edition published in 1990 in English and held by 0 WorldCat member libraries worldwide

The theory developed in Poggio and Girosi (1989) shows the equivalence between regularization and a class of three-layer networks that we call regularization networks or Hyper Basis Functions. These networks are also closely related to the classical Radial Basis Functions used for interpolation tasks and to several pattern recognition and neural network algorithms. In this note, we extend the theory by defining a general form of these networks with two sets of modifiable parameters in addition to the coefficients $c_\alpha$: *moving centers* and *adjustable norm-weights*.

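
The two extra parameter sets the note describes can be made concrete with a small evaluation sketch. The specific centers, coefficients, and weight matrix below are illustrative values, not parameters from the paper, and a Gaussian is assumed for the radial function.

```python
import numpy as np

def hbf(x, centers, coeffs, W):
    """Evaluate a Hyper Basis Function expansion
    f(x) = sum_a c_a * exp(-||x - t_a||_W^2), where the weighted norm
    is ||z||_W^2 = z^T W^T W z. Both the centers t_a and the matrix W
    are modifiable parameters alongside the coefficients c_a."""
    out = 0.0
    for c_a, t_a in zip(coeffs, centers):
        z = W @ (x - t_a)             # norm-weighted difference
        out += c_a * np.exp(-z @ z)   # Gaussian radial function
    return out

centers = np.array([[0.0, 0.0], [1.0, 1.0]])  # moving centers t_a
coeffs = np.array([1.0, -0.5])                # coefficients c_alpha
W = np.diag([2.0, 0.5])                       # adjustable norm-weights

value = hbf(np.array([0.5, 0.5]), centers, coeffs, W)
```

With `W` diagonal, the norm-weights simply rescale each input dimension before the distance to a center is measured; in the general form both the centers and `W` would be adjusted during learning.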

A theory of networks for approximation and learning by Tomaso Poggio (Book)

1 edition published in 1989 in English and held by 0 WorldCat member libraries worldwide

Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multi-dimensional function, that is, as solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques. This paper considers the problems of an exact representation and of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines but are also closely related to pattern recognition methods such as Parzen windows and potential functions, and to several neural network algorithms, such as Kanerva's associative memory, backpropagation, and Kohonen's topology-preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces intriguing analogies with neurobiological data.

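
The strict Radial Basis Function interpolation that the paper takes as its starting point can be sketched in a few lines: place one Gaussian unit on every example and solve a linear system for the coefficients $c_\alpha$. The data, kernel width, and function names below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(8, 2))      # example inputs (illustrative)
y = np.sin(X[:, 0]) + np.cos(X[:, 1])    # target mapping to reconstruct

def gram(A, B, sigma=0.5):
    """Gaussian kernel matrix G[i, j] = exp(-||a_i - b_j||^2 / sigma^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma**2)

# Strict interpolation: one basis function per example, coefficients
# obtained by solving G c = y.
G = gram(X, X)
c = np.linalg.solve(G, y)

def f(x):
    """Interpolant f(x) = sum_alpha c_alpha * exp(-||x - x_alpha||^2 / sigma^2)."""
    return gram(x[None, :], X)[0] @ c

# Strict interpolation reproduces the training targets (up to numerics).
err = max(abs(f(xi) - yi) for xi, yi in zip(X, y))
```

The GRBF networks of the paper generalize this scheme, using regularization and typically fewer centers than examples, so that the solution smooths the data rather than interpolating it exactly.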

## Related Identities

- Jones, Michael J. (Michael Jeffrey) 1968- Author
- National Science Foundation (U.S.)
- Girosi, Federico Author
- Massachusetts Institute of Technology Artificial Intelligence Laboratory
- Alfred P. Sloan Foundation
- Whitaker College of Health Sciences, Technology, and Management Center for Biological Information Processing
- Caprile, Bruno
- United States Office of Naval Research
- United States Advanced Research Projects Agency
- United States Office of Naval Research Cognitive and Neural Sciences Division
