Girosi, F.
Overview
Works:  6 works in 8 publications in 1 language and 8 library holdings 

Classifications: Q335.M41
Most widely held works by F. Girosi
A theory of networks for approximation and learning by Tomaso Poggio (Book)
3 editions published in 1989 in English and Undetermined and held by 3 WorldCat member libraries worldwide
Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multidimensional function, that is, as solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques. This paper considers the problems of an exact representation and of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Function (GRBF) networks, since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions and to several neural network algorithms, such as Kanerva's associative memory, backpropagation and Kohonen's topology preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces intriguing analogies with neurobiological data.
On the Relationship between Generalization Error, Hypothesis Complexity, and Sample Complexity for Radial Basis Functions by Partha Niyogi (Book)
1 edition published in 1994 in English and held by 1 WorldCat member library worldwide
In this paper, we bound the generalization error of a class of Radial Basis Function networks, for certain well-defined function learning tasks, in terms of the number of parameters and the number of examples. We show that the total generalization error is partly due to the insufficient representational capacity of the network (because of its finite size) and partly due to insufficient information about the target function (because of the finite number of samples). We make several observations about generalization error which are valid irrespective of the approximation scheme. Our result also sheds light on ways to choose an appropriate network architecture for a particular problem.
Regularization algorithms for learning that are equivalent to multilayer networks by Tomaso Poggio (Book)
1 edition published in 1989 in English and held by 1 WorldCat member library worldwide
Networks for approximation and learning by Tomaso Poggio
1 edition published in 1990 in English and held by 1 WorldCat member library worldwide
Networks and the best approximation property by Federico Girosi (Book)
1 edition published in 1989 in English and held by 1 WorldCat member library worldwide
Networks can be considered as approximation schemes. Multilayer networks of the backpropagation type can approximate arbitrarily well continuous functions (Cybenko, 1989; Funahashi, 1989; Stinchcombe and White, 1989). We prove that networks derived from regularization theory, including Radial Basis Functions (Poggio and Girosi, 1989), have a similar property. From the point of view of approximation theory, however, the property of approximating continuous functions arbitrarily well is not sufficient for characterizing good approximation schemes. More critical is the property of best approximation. The main result of this paper is that multilayer networks, of the type used in backpropagation, do not have the best approximation property. For regularization networks (in particular Radial Basis Function networks) we prove existence and uniqueness of the best approximation.
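The existence and uniqueness claim is easy to illustrate in the linear case: once the centers of a radial basis expansion are fixed, the expansion spans a finite-dimensional subspace, and the best L2 approximation of a sampled target is the unique orthogonal projection onto that subspace. The sketch below is an illustration under that assumption, not the paper's proof; the target function, grid, and basis parameters are my own choices. It checks the defining property of the projection, that the residual is orthogonal to every basis function.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)
centers = np.linspace(0.0, 1.0, 8)
# Gaussian basis functions with fixed centers span a linear subspace.
G = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * 0.1 ** 2))

target = np.abs(np.sin(3 * np.pi * x))          # arbitrary target function

# Unique least-squares coefficients = orthogonal projection onto span(G).
c, *_ = np.linalg.lstsq(G, target, rcond=None)
best = G @ c
residual = target - best

# Orthogonality of the residual to each basis function certifies that
# `best` is the (unique) closest element of the subspace to `target`.
ortho = np.max(np.abs(G.T @ residual))
print(ortho)
```

Backpropagation networks, by contrast, sweep out a nonlinear manifold of functions as their weights vary, and on such sets a closest point need not exist or be unique, which is the obstruction the paper analyzes.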
Extensions of a Theory of Networks for Approximation and Learning: Dimensionality Reduction and Clustering by Tomaso Poggio (Book)
1 edition published in 1990 in English and held by 1 WorldCat member library worldwide
The theory developed in Poggio and Girosi (1989) shows the equivalence between regularization and a class of three-layer networks that we call regularization networks or Hyper Basis Functions. These networks are also closely related to the classical Radial Basis Functions used for interpolation tasks and to several pattern recognition and neural network algorithms. In this note, we extend the theory by defining a general form of these networks with two sets of modifiable parameters in addition to the coefficients $c_\alpha$: moving centers and adjustable norm weights.
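A minimal sketch of the "moving centers" idea, assuming plain gradient descent on the squared error over both the coefficients and the center positions (the target function, basis width, learning rates, and iteration count are arbitrary illustrative choices, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 60)
y = np.sin(2 * np.pi * x)

t = rng.uniform(0.0, 1.0, 5)      # movable centers
c = np.zeros(5)                   # coefficients c_alpha
width = 0.15
lr_c, lr_t = 0.1, 0.01

def model(x, c, t):
    G = np.exp(-((x[:, None] - t[None, :]) ** 2) / (2 * width ** 2))
    return G, G @ c

G, y_hat = model(x, c, t)
err0 = np.mean((y - y_hat) ** 2)

for _ in range(2000):
    G, y_hat = model(x, c, t)
    r = y_hat - y                                  # residuals
    grad_c = 2 * G.T @ r / x.size
    # d/dt_k exp(-(x - t_k)^2 / 2w^2) = G[:, k] * (x - t_k) / w^2
    grad_t = 2 * np.sum(r[:, None] * c[None, :] * G * (x[:, None] - t[None, :]),
                        axis=0) / (width ** 2 * x.size)
    c -= lr_c * grad_c                             # update coefficients
    t -= lr_t * grad_t                             # move the centers

G, y_hat = model(x, c, t)
err1 = np.mean((y - y_hat) ** 2)
print(err0, err1)
```

Letting the centers move turns the fit from a linear least-squares problem in $c_\alpha$ into a joint nonlinear optimization, which is what makes the extended networks adapt their "prototypes" to the data.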