Kulkarni, Sanjeev R.
Overview
Works:  19 works in 48 publications in 1 language and 63 library holdings 

Roles:  Author 
Classifications:  Q370, 003.54 
Most widely held works by Sanjeev R. Kulkarni
On metric entropy, Vapnik-Chervonenkis dimension, and learnability for a class of distributions by Sanjeev R. Kulkarni (Book)
5 editions published in 1989 in English and held by 6 WorldCat member libraries worldwide
A formal framework for distribution-free concept learning, known as Valiant's learning framework, has generated a great deal of interest. A fundamental result regarding this framework characterizes the learnable concept classes in terms of their Vapnik-Chervonenkis (VC) dimension. More recently, learnability with respect to a fixed distribution was considered, and a conjecture regarding learnability for a class of distributions was stated. In this report, we first point out that the condition for learnability for a fixed distribution is equivalent to the notion of finite metric entropy (which has been studied in other contexts). Some relationships between the VC dimension of a concept class and its metric entropy with respect to various distributions are then discussed. Finally, we give some indication of when the set of learnable concept classes is enlarged by requiring learnability for only a class of distributions
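To make the covering-number notion concrete, here is a small sketch (the function, the toy concept class, and the greedy strategy are illustrative choices of mine, not the report's construction): it greedily builds an eps-cover of a finite concept class under the distribution-weighted symmetric-difference metric d(c1, c2) = P(c1 Δ c2); the log of the cover size is the metric entropy at scale eps.

```python
import itertools
import numpy as np

def greedy_cover_size(concepts, p, eps):
    """Greedily build an eps-cover of `concepts` under the metric
    d(c1, c2) = P(c1 symmetric-difference c2), where P has weights p,
    and return its size (an upper bound on the true covering number)."""
    concepts = [np.asarray(c, dtype=bool) for c in concepts]
    uncovered = set(range(len(concepts)))
    size = 0
    while uncovered:
        # a center may be any concept; pick the one covering the most
        best_hits = set()
        for c in concepts:
            hits = {j for j in uncovered if np.dot(p, c ^ concepts[j]) <= eps}
            if len(hits) > len(best_hits):
                best_hits = hits
        uncovered -= best_hits
        size += 1
    return size

# every concept on a 4-point domain, uniform distribution
concepts = list(itertools.product([0, 1], repeat=4))
p = np.full(4, 0.25)
print(greedy_cover_size(concepts, p, eps=0.0))   # 16: all concepts distinct
print(greedy_cover_size(concepts, p, eps=0.25))  # coarser scale, smaller cover
```

At eps = 0 every concept needs its own center, while at eps = 0.25 (one disagreement point) a handful of centers suffice, which is the sense in which metric entropy measures the "effective size" of a class under a distribution.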
A general classification rule for probability measures by Ofer Zeitouni (Book)
5 editions published between 1991 and 1993 in English and held by 6 WorldCat member libraries worldwide
We consider the problem of classifying an unknown probability distribution based on a sequence of random samples drawn according to this distribution. Specifically, if A is a subset of the space of all probability measures M1(Σ) over some compact Polish space E, we want to decide whether the unknown distribution belongs to A or to its complement. We propose an algorithm which leads a.s. to a correct decision for any A satisfying certain structural assumptions. A refined decision procedure is also presented which, given a countable collection Ai ⊂ M1(Σ), i = 1, 2, ..., each satisfying the structural assumption, will eventually determine a.s. the membership of the distribution in any finite number of the Ai. Applications to density estimation and to the problem of order determination of Markov processes are discussed
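A toy illustration of this kind of sequential decision rule (my own sketch; the set A, the metric, and the shrinking radius below are illustrative and not the paper's construction): after each sample, declare membership whenever the empirical measure falls within a slowly shrinking neighborhood of A.

```python
import numpy as np

def sequential_decisions(samples, dist_to_A, radius):
    """After n samples, declare 'in A' iff the empirical measure lies
    within radius(n) of A. If radius(n) -> 0 slowly enough, such
    decisions stabilize on the correct answer a.s. for suitably
    structured A."""
    out = []
    for n in range(1, len(samples) + 1):
        emp = np.bincount(samples[:n], minlength=2) / n  # empirical measure on {0, 1}
        out.append(dist_to_A(emp) <= radius(n))
    return out

rng = np.random.default_rng(0)
# A = {P on {0,1}: P(1) >= 0.7}; the true distribution has P(1) = 0.9, so it is in A
dist_to_A = lambda emp: max(0.0, 0.7 - emp[1])
radius = lambda n: np.sqrt(np.log(n + 1) / n)
samples = rng.binomial(1, 0.9, size=2000)
print(sequential_decisions(samples, dist_to_A, radius)[-1])  # True
```

The shrinking radius trades off the two error types: a wide early radius avoids rejecting A on noisy small-sample evidence, while the decay to zero eventually rejects distributions outside A.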
Problems of computational and information complexity in machine vision and learning by Sanjeev R. Kulkarni (Book)
4 editions published in 1991 in English and held by 6 WorldCat member libraries worldwide
Some discrete approximations to a variational method for image segmentation by Sanjeev R. Kulkarni (Book)
3 editions published in 1991 in English and held by 5 WorldCat member libraries worldwide
Variational formulations have been proposed for a number of tasks in early vision. Discrete versions of these problems are closely related to Markov random field models and are typically used in implementing such methods. In particular, discrete and continuous versions for the problem of image segmentation have received considerable attention from both theoretical and algorithmic perspectives. It has been previously pointed out that the usual discrete version of the segmentation problem does not properly approximate the continuous formulation in the sense that the discrete solutions may not converge to a solution of the continuous problem as the lattice spacing tends to zero. One method for modifying the discrete formulations to ensure such convergence has been previously discussed. Here we consider two other partially discrete formulations which also satisfy desirable convergence properties in the continuum limit, and we discuss some general ideas about digitized versions of the variational formulation of the segmentation problem
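As a concrete instance of such a discrete formulation, here is a generic 1-D Mumford-Shah-style discretization written for illustration (the energy weights, the brute-force search, and the toy signal are mine, not the report's): the energy charges data misfit, smoothness away from the break set, and the number of breaks.

```python
import itertools
import numpy as np

def segmentation_energy(f, g, breaks, lam=1.0, mu=0.5):
    """Discrete 1-D segmentation energy: data fidelity
    + lam * smoothness away from the break set + mu * (number of breaks)."""
    fidelity = np.sum((f - g) ** 2)
    smooth = sum((f[i + 1] - f[i]) ** 2
                 for i in range(len(f) - 1) if i not in breaks)
    return fidelity + lam * smooth + mu * len(breaks)

# noiseless piecewise-constant data with a jump between indices 3 and 4
g = np.array([0., 0., 0., 0., 1., 1., 1., 1.])
best = min((segmentation_energy(g, g, set(b)), b)
           for r in range(3)
           for b in itertools.combinations(range(len(g) - 1), r))
print(best[0], best[1])  # the minimizer places a single break exactly at the jump
```

Paying the break cost mu = 0.5 is cheaper than the smoothness penalty of 1.0 incurred by the jump, so the optimal break set is exactly the discontinuity; the convergence questions in the abstract concern how such lattice minimizers behave as the spacing tends to zero.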
Computational limitations of model-based recognition by H. Shvaytser (Book)
4 editions published in 1991 in English and held by 5 WorldCat member libraries worldwide
Reliable object recognition is an essential part of most visual systems. Model-based approaches to object recognition use a database (a library) of modeled objects; for a given set of sensed data, the problem of model-based recognition is to identify and locate the objects from the library that are present in the data. We show that the complexity of model-based recognition depends very heavily on the number of object models in the library, even if each object is modeled by a small number of discrete features. Specifically, deciding whether a discrete set of sensed data can be interpreted as transformed object models from a given library is NP-complete if the transformation is any combination of translation, rotation, scaling, and perspective projection. This suggests that efficient algorithms for model-based recognition must use additional structure in order to avoid the inherent computational difficulties
PAC learning with generalized samples and an application to stochastic geometry (Book)
3 editions published in 1991 in English and held by 4 WorldCat member libraries worldwide
In this paper, we introduce an extension of the standard PAC model which allows the use of generalized samples. We view a generalized sample as a pair consisting of a functional on the concept class together with the value obtained by the functional operating on the unknown concept. It appears that this model can be applied to a number of problems in signal processing and geometric reconstruction to provide sample size bounds under a PAC criterion. We consider a specific application of the generalized model to a problem of curve reconstruction, and discuss some connections with a result from stochastic geometry
An existence theorem and lattice approximations for a variational problem arising in computer vision by Sanjeev R. Kulkarni (Book)
3 editions published in 1989 in English and held by 4 WorldCat member libraries worldwide
Active learning using arbitrary binary valued queries by Sanjeev R. Kulkarni (Book)
3 editions published in 1990 in English and held by 4 WorldCat member libraries worldwide
The original and most widely studied PAC model for learning assumes a passive learner in the sense that the learner plays no role in obtaining information about the unknown concept. That is, the samples are simply drawn independently from some probability distribution. Some work has been done on studying more powerful oracles and how they affect learnability. To find bounds on the improvement that can be expected from using oracles, we consider active learning in the sense that the learner has complete choice in the information received. Specifically, we allow the learner to ask arbitrary yes/no questions. We consider both active learning under a fixed distribution and distribution-free active learning. In the case of active learning, the underlying probability distribution is used only to measure distance between concepts. For learnability with respect to a fixed distribution, active learning does not enlarge the set of learnable concept classes, but can improve the sample complexity. For distribution-free learning, it is shown that a concept class is actively learnable iff it is finite, so that active learning is in fact less powerful than the usual passive learning model. We also consider a form of distribution-free learning in which the learner knows the distribution being used, so that 'distribution-free' refers only to the requirement that a bound on the number of queries can be obtained uniformly over all distributions. Even with the side information of the distribution being used, a concept class is actively learnable iff it has finite VC dimension, so that active learning with the side information still does not enlarge the set of learnable concept classes
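The fixed-distribution upper bound rests on a twenty-questions argument: with arbitrary yes/no queries, a learner can binary-search a finite eps-cover of the concept class, so ceil(log2 N) queries suffice for N candidates. A minimal sketch (the candidate list and oracle are stand-ins of my own, not the paper's formalism):

```python
import math

def binary_search_queries(candidates, oracle):
    """Identify the target among `candidates` with yes/no questions of
    the form 'is the target in this subset?'; at most
    ceil(log2(len(candidates))) queries are needed."""
    queries = 0
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        queries += 1
        candidates = half if oracle(half) else candidates[len(half):]
    return candidates[0], queries

candidates = list(range(37))  # stand-in for an eps-cover of size 37
target = 23
found, used = binary_search_queries(candidates, lambda s: target in s)
print(found, used, math.ceil(math.log2(len(candidates))))  # 23 5 6
```

The contrast in the abstract comes from the cover size: a fixed distribution gives a finite eps-cover whenever the class is learnable at all, while a cover that works uniformly over all distributions exists only for finite classes (or, with the distribution as side information, for classes of finite VC dimension).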
Can one decide the type of the mean from the empirical measure? by Sanjeev R. Kulkarni (Book)
3 editions published in 1990 in English and held by 4 WorldCat member libraries worldwide
Convex set estimation from support line measurements and applications to target reconstruction from laser radar data by Avinash Shreedhar Lele (Book)
3 editions published in 1990 in English and held by 3 WorldCat member libraries worldwide
Minkowski content and lattice approximations for a variational problem by Sanjeev R. Kulkarni (Book)
2 editions published in 1988 in English and held by 3 WorldCat member libraries worldwide
Universal estimation of information measures for analog sources by Qing Wang
2 editions published in 2009 in English and held by 3 WorldCat member libraries worldwide
This monograph presents an overview of universal estimation of information measures for continuous-alphabet sources. Special attention is given to the estimation of mutual information and divergence based on independent and identically distributed (i.i.d.) data. Plug-in methods, partitioning-based algorithms, and nearest-neighbor algorithms, as well as other approaches, are reviewed, with particular focus on consistency, speed of convergence, and experimental performance
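For instance, a basic first-nearest-neighbor (Kozachenko-Leonenko) differential-entropy estimator, one member of the nearest-neighbor family surveyed here, can be written for the 1-D case as a short sketch (this simplified version is mine, not code from the monograph):

```python
import numpy as np

def kl_entropy_1d(x):
    """Kozachenko-Leonenko 1-NN differential entropy estimate (in nats)
    for 1-D i.i.d. samples:
        H_hat = psi(n) + gamma + log(2) + (1/n) * sum_i log(eps_i),
    where eps_i is the distance from x_i to its nearest neighbor, psi is
    the digamma function (here via harmonic numbers), and gamma is the
    Euler-Mascheroni constant."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    gaps = np.diff(x)
    # nearest-neighbor distance of each point: the smaller adjacent gap
    eps = np.minimum(np.concatenate(([np.inf], gaps)),
                     np.concatenate((gaps, [np.inf])))
    euler_gamma = 0.5772156649015329
    psi_n = -euler_gamma + sum(1.0 / k for k in range(1, n))  # psi(n)
    return psi_n + euler_gamma + np.log(2.0) + np.mean(np.log(eps))

rng = np.random.default_rng(1)
samples = rng.uniform(0.0, 1.0, size=4000)
print(kl_entropy_1d(samples))  # close to 0, the entropy of Uniform(0, 1)
```

The appeal of such estimators, as the monograph discusses, is that they are universal: nothing about the underlying density is assumed beyond the i.i.d. sampling, and consistency holds under mild conditions.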
Minimax lower bounds for the two-armed bandit problem by Sanjeev R. Kulkarni (Book)
2 editions published in 1997 in English and held by 3 WorldCat member libraries worldwide
Models for Individual Decision-Making with Social Feedback by Andrea Nedic (Book)
1 edition published in 2011 in English and held by 2 WorldCat member libraries worldwide
To investigate the influence of input from fellow group members in a constrained decision-making context, we develop four two-armed bandit tasks in which subjects freely select one of two options (A or B) and are informed of the resulting reward following each choice. Rewards are determined by the fraction x of past A choices via two functions fA(x), fB(x) (unknown to the subject), which intersect at a matching point x̃ that does not generally represent globally optimal behavior. Each task is designed to probe a different type of behavior, and subjects work in groups of five with feedback of other group members' choices, of their rewards, of both, or with no knowledge of others' behavior. We employ a softmax choice model that emerges from a drift-diffusion process, commonly used to model perceptual decision making with noisy stimuli. Here the stimuli are replaced by estimates of expected rewards produced by a temporal-difference reinforcement-learning algorithm, augmented to include appropriate feedback terms. Models are fitted for each task and feedback condition, and we use them to compare choice allocations averaged across subjects and individual choice sequences to highlight differences between tasks and inter-subject differences. The most complex model, involving both choice and reward feedback, contains only four parameters, but nonetheless reveals significant differences in individual strategies. Strikingly, we find that reward feedback can be either detrimental or advantageous to performance, depending upon the task
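A minimal sketch of this model class without the social-feedback terms (the learning rate, softmax temperature, and reward curves below are illustrative choices, not the paper's fitted values): softmax choice between two options, with expected rewards tracked by a temporal-difference update and rewards determined by the fraction of past A choices.

```python
import numpy as np

def simulate(f_a, f_b, trials=2000, eta=0.1, beta=5.0, seed=0):
    """Softmax choice driven by temporal-difference estimates of
    expected reward; returns the final fraction of A choices."""
    rng = np.random.default_rng(seed)
    q = np.zeros(2)                  # reward estimates for A (0) and B (1)
    n_a = 0
    for t in range(1, trials + 1):
        p_a = 1.0 / (1.0 + np.exp(-beta * (q[0] - q[1])))  # softmax, 2 options
        choice = 0 if rng.random() < p_a else 1
        n_a += (choice == 0)
        x = n_a / t                  # fraction of past A choices
        reward = f_a(x) if choice == 0 else f_b(x)
        q[choice] += eta * (reward - q[choice])  # TD update
    return n_a / trials

# reward curves crossing at the matching point x~ = 0.5
frac = simulate(f_a=lambda x: 1.0 - x, f_b=lambda x: x)
print(round(frac, 2))  # hovers near the matching point 0.5
```

Because each option's reward falls as that option is chosen more often, the dynamics provide negative feedback that pulls the allocation toward the matching point, illustrating why matching rather than global optimality emerges.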
Proceedings of the 1996 Conference on Information Science and Systems: [CISS '96, held March 20, 21, and 22, 1996 at Princeton University, Princeton, NJ] by Conference on Information Science and Systems (Book)
1 edition published in 1996 in Undetermined and held by 1 WorldCat member library worldwide
Extending and Unifying Formal Models for Machine Learning
1 edition published in 1997 in English and held by 1 WorldCat member library worldwide
There has been a great deal of work on statistical pattern recognition, nonparametric estimation, and formal models of machine learning. Recent and classical work in these areas has provided fundamental results on the amount of data needed for classification, estimation, and prediction in a variety of nonparametric settings. The applicability of these paradigms is often limited by the assumptions on the data-gathering mechanisms and the performance criteria. Our work has had two primary goals. The first is to investigate extensions and new models which give results useful in broader applications. The second is to apply these learning results to other areas such as signal/image processing, geometric reconstruction, and system identification. We have studied a variety of problems and have been able to relax assumptions required on the observed data as well as on the success criteria while still obtaining positive results. Our results have provided new insights into classical work and have also suggested a number of directions for further work
Can One Decide the Type of the Mean from the Empirical Measure?
1 edition published in 1990 in English and held by 1 WorldCat member library worldwide
The problem of deciding whether the mean of an unknown distribution is in a set A or in its complement, based on a sequence of independent random variables drawn according to this distribution, is considered. Using large deviations techniques, an algorithm is proposed which is shown to lead to an a.s. correct decision for a class of sets A which are not necessarily countable. A refined decision procedure is also presented which, given a countable decomposition of A, can determine a.s. to which set of the decomposition the mean belongs. This extends and simplifies a construction by Cover
Workshop on Computing and Intelligent Systems
1 edition published in 1994 in English and held by 1 WorldCat member library worldwide
This grant provided block international travel funds to help enable researchers to attend an international workshop on "Computing and Intelligent Systems" held in Bangalore, India, December 20-23, 1993. The conference venue was the Indian Institute of Science, Bangalore, a premier research institute in India. The conference was organized by the Indian Institute of Science in cooperation with the Jawaharlal Nehru Centre for Advanced Scientific Research, Bangalore. The occasion also commemorated the silver jubilee of the Department of Computer Science & Automation. The workshop comprised a series of invited presentations by frontline researchers in the areas of computer science and intelligent systems, and included talks by several Indian faculty and graduate students. One of the primary objectives of the workshop was to bring together leading researchers and distinguished alumni in these areas, and to discuss and exchange ideas on frontier topics of research. The conference attracted scientific workers in the area from all over the world and thus provided a forum for global technical interaction. The requested funds were instrumental in guaranteeing the technical success of the meeting by enabling participation of a sufficient number of key U.S. researchers
Related Identities
 Center for Intelligent Control Systems (U.S.)
 Massachusetts Institute of Technology Laboratory for Information and Decision Systems
 Mitter, S. K. (Sanjoy K.) 1933
 Zeitouni, Ofer Author
 Shvaytser, Haim Author
 Tsitsiklis, John N.
 Princeton University Department of Electrical Engineering
 Willsky, Alan S.
 Richardson, Thomas J. (Thomas Joseph) 1961
 Lele, Avinash Shreedhar Author
Associated Subjects
Approximation theory Artificial intelligence Coding theory Computational complexity Computer vision Decomposition (Mathematics) Entropy (Information theory) Existence theorems Image processing Information theory in mathematics Lattice theory Learning Learning models (Stochastic processes) Machine learning Markov processes Model theory Probabilities Statistics Variational principles