PLEASE NOTE: This Identity has been retired and is no longer being kept current.
Poggio, Tomaso
Overview
Works:  109 works in 185 publications in 1 language and 234 library holdings 

Roles:  Editor 
Classifications:  Q335.M41, 152.1402854 
Most widely held works by Tomaso Poggio
An optimal scale for edge detection by Massachusetts Institute of Technology (Book)
3 editions published in 1988 in English and held by 6 WorldCat member libraries worldwide
Many problems in early vision are ill-posed; edge detection is a typical example. This paper applies regularization techniques to the problem of edge detection. We derive an optimal filter for edge detection with a size controlled by the regularization parameter $\lambda$ and compare it to the Gaussian filter. A formula relating the signal-to-noise ratio to the parameter $\lambda$ is derived from regularization analysis for the case of small values of $\lambda$. We also discuss the method of Generalized Cross Validation for obtaining the optimal filter scale. Finally, we use our framework to explain two perceptual phenomena: coarsely quantized images become recognizable after either blurring or adding noise.
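As a toy illustration of this regularization view (not the paper's actual derivation; every name below is hypothetical), one can smooth a 1-D signal by penalizing second differences with weight $\lambda$ and then mark edges at local maxima of the smoothed derivative; increasing $\lambda$ widens the effective filter, much as increasing the scale of a Gaussian would:

```python
import numpy as np

def regularized_edges(signal, lam=4.0):
    """Tikhonov-smooth a 1-D signal, then mark edges at local maxima
    of |f'|.  Solves  min_f  sum (f - d)^2 + lam * sum (f'')^2,
    i.e.  (I + lam * D2.T @ D2) f = d  with D2 the second-difference
    operator; lam plays the role of the regularization parameter."""
    n = len(signal)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    f = np.linalg.solve(np.eye(n) + lam * D2.T @ D2, signal)
    mag = np.abs(np.gradient(f))
    edges = [i for i in range(1, n - 1)
             if mag[i] >= mag[i - 1] and mag[i] >= mag[i + 1]
             and mag[i] > 0.1 * mag.max()]
    return f, edges

# Noisy step edge at index 32: the detected edge lands near the step.
rng = np.random.default_rng(0)
d = np.concatenate([np.zeros(32), np.ones(32)]) + 0.05 * rng.normal(size=64)
f, edges = regularized_edges(d)
```

Larger `lam` gives a smoother `f` and hence a broader effective edge filter, mirroring the scale/signal-to-noise trade-off the abstract describes.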
Visual integration and detection of discontinuities : the key role of intensity edges by Ed Gamble (Book)
2 editions published in 1987 in English and held by 6 WorldCat member libraries worldwide
Integration of several vision modules is likely to be one of the keys to the power and robustness of the human visual system. The problem of integrating early vision cues is also emerging as a central problem in current computer vision research. This paper suggests that integration is best performed at the location of discontinuities in early processes, such as discontinuities in image brightness, depth, motion, texture and color. Coupled Markov Random Field models, based on Bayes estimation techniques, can be used to combine vision modalities with their discontinuities. These models generate algorithms that map naturally onto parallel fine-grained architectures such as the Connection Machine. We derive a scheme to integrate intensity edges with stereo depth and motion field information and show results on synthetic and natural images. The use of intensity edges to integrate other visual cues and to help discover discontinuities emerges as a general and powerful principle.
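The coupled-MRF formulation itself is beyond a snippet, but its central idea, namely regularizing within regions while suspending smoothing across detected intensity edges, can be caricatured in 1-D. The function below is a hypothetical sketch, not the paper's algorithm:

```python
import numpy as np

def smooth_with_breaks(data, edge_mask, lam=2.0):
    """Membrane-style smoothing that drops the smoothness link between
    samples i and i+1 wherever edge_mask[i] is True, so discontinuities
    flagged by intensity edges are preserved rather than blurred."""
    n = len(data)
    rows = []
    for i in range(n - 1):
        if edge_mask[i]:          # intensity edge here: keep the jump
            continue
        r = np.zeros(n)
        r[i], r[i + 1] = -1.0, 1.0
        rows.append(r)
    D = np.array(rows)
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, data)

# Noisy two-plateau "depth" signal with an intensity edge between 31 and 32
rng = np.random.default_rng(0)
depth = np.concatenate([np.zeros(32), np.ones(32)]) + 0.05 * rng.normal(size=64)
mask = np.zeros(63, dtype=bool)
mask[31] = True
f = smooth_with_breaks(depth, mask)
```

The smoothed output keeps a sharp jump exactly where the edge was flagged, which is the behavior the line process of a coupled MRF is meant to produce.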
Priors, stabilizers and basis functions: from regularization to radial, tensor and additive splines by Federico Girosi (Book)
4 editions published in 1993 in English and held by 6 WorldCat member libraries worldwide
We had previously shown that regularization principles lead to approximation schemes, such as Radial Basis Functions, which are equivalent to networks with one layer of hidden units, called Regularization Networks. In this paper we show that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models, Breiman's hinge functions and some forms of Projection Pursuit Regression. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In the final part of the paper, we also show a relation between activation functions of the Gaussian and sigmoidal type.
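A minimal regularization network of the kind described above can be sketched as a single layer of Gaussian units with a center at every example; the function names and constants below are my own choices, not the paper's:

```python
import numpy as np

def fit_rbf(x, y, sigma=0.3, reg=1e-8):
    """Regularization-network sketch: one Gaussian unit per example.
    The output weights c solve (G + reg*I) c = y, where G is the
    Gram matrix of the Gaussian basis at the training points."""
    def basis(t):
        return np.exp(-(t[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))
    c = np.linalg.solve(basis(x) + reg * np.eye(len(x)), y)
    return lambda t: basis(t) @ c

x = np.linspace(0, 2 * np.pi, 15)
f = fit_rbf(x, np.sin(x))   # nearly interpolates sin at the 15 centers
```

With a tiny `reg` the network behaves as an interpolant; increasing `reg` trades data fit for smoothness, which is the regularization trade-off the abstract refers to.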
A theory of networks for approximation and learning by Massachusetts Institute of Technology (Book)
7 editions published between 1989 and 1994 in English and held by 6 WorldCat member libraries worldwide
Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multidimensional function, that is, solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques. This paper considers the problems of an exact representation of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Function (GRBF) networks, since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions and to several neural network algorithms, such as Kanerva's associative memory, backpropagation and Kohonen's topology preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces intriguing analogies with neurobiological data.
Synthesis of visual modules from examples : learning hyperacuity by Massachusetts Institute of Technology (Book)
3 editions published in 1991 in English and held by 5 WorldCat member libraries worldwide
For any given visual competence, it is tempting to conjecture a specific algorithm and a corresponding neural circuitry. It has often been implicitly assumed that this machinery may be hardwired in the brain. This extreme point of view, if taken seriously, may quickly lead to absurd consequences. The underlying reason for the spectacular performance of human subjects in these tasks is that the information sampled by the photoreceptors and relayed to the brain does contain the information necessary for precise localization of image features, since the spacing between photoreceptors and the eye's optics satisfy (in the fovea) the constraints of the sampling theorem. More specifically, it has been shown that, in principle, spatial mechanisms that account for grating resolution are sensitive enough to support hyperacuity-level performance. Furthermore, some of the hyperacuity tasks can be solved by detecting 'secondary' cues such as luminance difference (as in the bisection task) or orientation (as in the detection of vertical vernier stimuli). The detailed structure of the neural circuitry that subserves the detection of these cues, or hyperacuity performance in other tasks, is, however, unknown.
Parallel Algorithms for Computer Vision by Tomaso Poggio (Book)
5 editions published between 1987 and 1990 in English and held by 5 WorldCat member libraries worldwide
The main effort in this project has been directed towards the development of an integrated vision system, the Vision Machine, based on a parallel supercomputer. The core of the Vision Machine is in fact a set of parallel algorithms for visual recognition and navigation in an unstructured environment. The present version of the Vision Machine has been demonstrated to process images in close to real time by (1) computing first several low-level cues, such as edges, stereo disparity, optical flow, color and texture, (2) integrating them to extract a cartoon-like description of the scene in terms of the physical discontinuities of surfaces, and (3) using this cartoon in a recognition stage, based on parallel model matching. In addition to the development of the parallel algorithms, their implementation and testing, we have also done substantial work in several areas that are very closely related. These include (1) design and fabrication of VLSI circuits to transfer some of the software algorithms to potentially cheap and fast hardware, (2) initial development of techniques to synthesize vision algorithms by learning, and (3) several projects involving autonomous navigation of small robots.
Bringing the grandmother back into the picture: a memory-based view of object recognition by Shimon Edelman (Book)
7 editions published in 1990 in English and Undetermined and held by 5 WorldCat member libraries worldwide
Experiments are described with a versatile pictorial prototype-based learning scheme for 3D object recognition. The GRBF scheme seems amenable to realization in biophysical hardware because the only kind of computation it involves can be effectively carried out by combining receptive fields. Furthermore, the scheme is computationally attractive because it brings together the old notion of a grandmother cell and the rigorous approximation methods of regularization and splines. Keywords: Object recognition, Representation, Nonlinear interpolation. (JHD)
Continuous stochastic cellular automata that have a stationary distribution and no detailed balance by Tomaso Poggio (Book)
2 editions published in 1990 in English and held by 5 WorldCat member libraries worldwide
Abstract: "Marroquin and Ramirez (1990) have recently discovered a class of discrete stochastic cellular automata with Gibbsian invariant measure that have a non-reversible dynamic behavior. Practical applications include more powerful algorithms than the Metropolis algorithm to compute MRF models. In this paper we describe a large class of stochastic dynamical systems that has a Gibbs asymptotic distribution but does not satisfy reversibility. We characterize sufficient properties of a subclass of stochastic differential equations in terms of the associated Fokker-Planck equation for the existence of an asymptotic probability distribution in the system of coordinates which is given. Practical implications include VLSI analog circuits to compute coupled MRF models."
A trainable pedestrian detection system by Constantine Papageorgiou
2 editions published in 1998 in English and held by 5 WorldCat member libraries worldwide
Extensions of a theory of networks for approximation and learning : outliers and negative examples by Federico Girosi (Book)
4 editions published in 1990 in English and held by 5 WorldCat member libraries worldwide
Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multidimensional function. From this point of view, this form of learning is closely related to regularization theory. The theory developed in Poggio and Girosi (1989) shows the equivalence between regularization and a class of three-layer networks that we call regularization networks or Hyper Basis Functions. These networks are not only equivalent to generalized splines, but are closely related to the classical Radial Basis Functions used for interpolation tasks and to several pattern recognition and neural network algorithms. In this note, we extend the theory by introducing ways of dealing with two aspects of learning: learning in the presence of unreliable examples and learning from positive and negative examples. These two extensions are interesting also from the point of view of the approximation of multivariate functions. The first extension corresponds to dealing with outliers among the sparse data. The second one corresponds to exploiting information about points or regions in the range of the function that are forbidden.
Some comments on a recent theory of stereopsis by David Marr (Book)
2 editions published in 1980 in English and held by 5 WorldCat member libraries worldwide
Marr's approach to vision by Tomaso Poggio (Book)
4 editions published in 1981 in English and held by 5 WorldCat member libraries worldwide
A regularized solution to edge detection by Tomaso Poggio (Book)
1 edition published in 1985 in English and held by 5 WorldCat member libraries worldwide
The authors consider edge detection as the problem of measuring and localizing changes of light intensity in the image. Edge detection does not have a precisely defined goal. The word edge itself, which refers to physical properties of objects, is somewhat of a misnomer. Several years of experience have shown that the ideal goal of detecting and locating physical edges in the surfaces being imaged is very difficult and still out of reach. Edge detection has come to be defined as the first step in this goal of detecting physical changes such as object boundaries: the operation of detecting and locating changes in intensity in the image. Other processes which operate on these measurements of intensity changes will then group boundaries and label and characterize them in terms of the properties of the 3D surfaces. Intended in this narrow sense, edge detection, this first step in processing the image, is mainly the process that measures, detects and localizes changes of intensity. Derivatives must be estimated correctly to label the critical points in the image intensity array, characterize their local properties (are they minima, maxima or saddle points?) and thus relate them to the underlying physical process (are they shadow edges or depth discontinuities?).
A theory of how the brain might work by Tomaso Poggio (Book)
2 editions published in 1990 in English and held by 5 WorldCat member libraries worldwide
The main points of the theory are: the brain uses modules for multivariate function approximation as basic components of several of its information processing subsystems. These modules are realized as HyperBF networks (Poggio and Girosi, 1990a, b). HyperBF networks can be implemented in terms of biologically plausible mechanisms and circuitry. The theory predicts a specific type of population coding that represents an extension of schemes such as lookup tables. I will conclude with some speculations about the tradeoff between memory and computation and the evolution of intelligence.
Retinal ganglion cells : a functional interpretation of dendritic morphology by C. Koch (Book)
1 edition published in 1982 in English and held by 4 WorldCat member libraries worldwide
Recognition and structure from one 2D model view : observations on prototypes, object classes and symmetries by Tomaso Poggio (Book)
2 editions published in 1992 in English and held by 4 WorldCat member libraries worldwide
The paper is organized in two distinct parts. In the first part, we discuss how to exploit prior knowledge of an object's symmetry. We prove that for any bilaterally symmetric 3D object one non-accidental 2D model view is sufficient for recognition. We also prove that for bilaterally symmetric objects the correspondence of four points between two views determines the correspondence of all other points. Symmetries of higher order allow the recovery of structure from one 2D view. In the second part of the paper, we study a very simple type of object classes that we call linear object classes. Linear transformations can be learned exactly from a small set of examples in the case of linear object classes and used to produce new views of an object from a single view.
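The linear-object-class idea admits a very small numerical illustration (purely synthetic data and hypothetical names): if view-B feature vectors are a fixed linear transform of view-A feature vectors across a class, that transform can be recovered from a few example pairs by least squares and then applied to a single view of a novel class member:

```python
import numpy as np

# Synthetic "linear object class": view-B features = L_true @ view-A features.
rng = np.random.default_rng(1)
L_true = rng.normal(size=(6, 6))   # unknown view transform (synthetic)
A = rng.normal(size=(6, 10))       # view-A features of 10 example objects
B = L_true @ A                     # their corresponding view-B features

# Recover the transform by least squares: solve A.T @ X = B.T for X = L.T
L_est, *_ = np.linalg.lstsq(A.T, B.T, rcond=None)
L_est = L_est.T

# Produce the novel view of a new class member from its single view A
novel = rng.normal(size=6)
predicted = L_est @ novel
```

With enough linearly independent examples the recovery is exact, which is the sense in which a single view suffices for new-view synthesis within a linear class.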
3D object recognition : symmetry and virtual views by Thomas Vetter (Book)
2 editions published in 1992 in English and held by 4 WorldCat member libraries worldwide
Under special conditions, a single non-accidental 'model' view is theoretically sufficient for recognition of novel views, if the object is bilaterally symmetric, whereas the theoretical minimum (under the same conditions) for a non-symmetric object is two views. In practice, we expect that the 'virtual' views provided by the symmetry property will facilitate human recognition of novel views. Psychophysical experiments confirm that humans are better at recognizing symmetric objects. The hypothesis of symmetry-induced virtual views, together with a network model that successfully accounts for human recognition of generic 3D objects, leads to predictions that we have verified with psychophysical experiments.
A connection between GRBF and MLP by Minoru Maruyama (Book)
2 editions published between 1991 and 1992 in English and held by 4 WorldCat member libraries worldwide
In the remainder of the paper, we discuss the relation between the radial functions that correspond to the sigmoid for normalized inputs and well-behaved radial basis functions, such as the Gaussian. In particular, we observe that the radial function associated with the sigmoid is an activation function that is a good approximation to Gaussian basis functions for a range of values of the bias parameter. The implication is that an MLP network can always simulate a Gaussian GRBF network (with the same number of units but fewer parameters); the converse is true only for certain values of the bias parameter. Numerical experiments indicate that this constraint is not always satisfied in practice by MLP networks trained with backpropagation.
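The observation about sigmoids on normalized inputs is easy to check numerically: for unit-norm x and w, w.x = 1 - |x - w|^2 / 2, so a sigmoid unit becomes a radial function of r = |x - w|, and for suitable gain and bias it tracks a Gaussian. The gain, bias and comparison width below are hand-tuned assumptions of mine, not values from the paper:

```python
import numpy as np

def sigmoid_radial(r, alpha=4.0, b=-4.0):
    """Sigmoid unit on unit-norm inputs, rewritten as a function of
    r = |x - w| via the identity w.x = 1 - r**2 / 2 for unit vectors."""
    return 1.0 / (1.0 + np.exp(-(alpha * (1.0 - r ** 2 / 2.0) + b)))

r = np.linspace(0.0, 2.0, 200)
g = sigmoid_radial(r)
g = g / g.max()                            # normalize the peak to 1
gauss = np.exp(-r ** 2 / (2 * 0.6 ** 2))   # hand-tuned comparison Gaussian
err = np.max(np.abs(g - gauss))            # small for this bias setting
```

For other bias values the radial profile deviates from a Gaussian bell, which is the asymmetry behind the "MLP can simulate GRBF but not conversely" claim above.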
Optical flow from 1D correlation : application to a simple time-to-crash detector by Massachusetts Institute of Technology (Book)
3 editions published in 1993 in English and held by 4 WorldCat member libraries worldwide
A new technique exploiting 1D correlation of 2D or even 1D patches between successive frames may be sufficient to compute an estimate of the optical flow field. Sparse measurements are used to compute qualitative properties of the flow for different visual tasks. We can combine our technique with a scheme for detecting expansion or rotation in an algorithm which suggests interesting biological implications. The algorithm provides a rough estimate of time-to-crash and is well-suited to VLSI implementation. It was tested on real image sequences; we show its performance and compare the results to previous approaches.
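A bare-bones version of the 1-D correlation step (illustrative only; the detector's actual pipeline is not reproduced here) estimates a patch's frame-to-frame displacement by maximizing correlation over candidate shifts. Time-to-crash would then follow from how such displacements grow away from the focus of expansion:

```python
import numpy as np

def shift_1d(prev, curr, max_shift=8):
    """Displacement of a 1-D patch between two frames, found by
    brute-force search for the shift that maximizes correlation."""
    scores = [np.dot(prev, np.roll(curr, -s))
              for s in range(-max_shift, max_shift + 1)]
    return int(np.argmax(scores)) - max_shift

# A 1-D pattern translating right by 3 pixels between frames
t = np.linspace(0.0, 4.0 * np.pi, 128)
prev = np.sin(t)
curr = np.roll(prev, 3)
```

Because each measurement is a single 1-D correlation, the scheme is cheap enough to justify the abstract's remark about VLSI implementation.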
Microelectronics in nerve cells : dendritic morphology and information processing by Tomaso Poggio (Book)
2 editions published in 1981 in English and held by 4 WorldCat member libraries worldwide
Related Identities
 Massachusetts Institute of Technology Artificial Intelligence Laboratory
 MASSACHUSETTS INST OF TECH CAMBRIDGE ARTIFICIAL INTELLIGENCE LAB
 Girosi, Federico
 Whitaker College of Health Sciences, Technology, and Management Center for Biological Information Processing
 Edelman, Shimon
 MASSACHUSETTS INST OF TECH CAMBRIDGE MA CENTER FOR BIOLOGICAL AND COMPUTATIONAL LEARNING
 Marr, David 19451980
 Koch, C.
 Whitaker College Center for Biological Information Processing
 Hurlbert, Anya
Associated Subjects
Algae Approximation theory Artificial intelligence Automotive sensors Computer vision Digital mapping Image processing Machine learning Machine theory Mathematical optimization Neural circuitry Neural networks (Computer science) Optical pattern recognition Retina Robots Splines Stochastic systems Vision Vision--Data processing Visual cortex Visual perception
Languages