Catoni, Olivier
Overview
Works:  23 works in 48 publications in 2 languages and 528 library holdings 

Genres:  Conference proceedings 
Roles:  Author, Opponent, Editor, Thesis advisor 
Classifications:  QA273, 519.542 
Publication Timeline
Most widely held works by
Olivier Catoni
Statistical learning theory and stochastic optimization : École d'Été de Probabilités de Saint-Flour XXXI - 2001 by
Olivier Catoni(
Book
)
18 editions published in 2004 in English and held by 279 WorldCat member libraries worldwide
Statistical learning theory is aimed at analyzing complex data with necessarily approximate models. This book is intended for an audience with a graduate background in probability theory and statistics. It will be useful to any reader wondering why it may be a good idea, as is often done in practice, to use a notoriously "wrong" (i.e. oversimplified) model to predict, estimate or classify. This point of view takes its roots in three fields: information theory, statistical mechanics, and PAC-Bayesian theorems. Results on the large deviations of trajectories of Markov chains with rare transitions are also included. They are meant to provide a better understanding of stochastic optimization algorithms of common use in computing estimators. The author focuses on non-asymptotic bounds of the statistical risk, allowing one to choose adaptively between rich and structured families of models and corresponding estimators. Two mathematical objects pervade the book: entropy and Gibbs measures. The goal is to show how to turn them into versatile and efficient technical tools that will stimulate further studies and results
PAC-Bayesian supervised classification : the thermodynamics of statistical learning by
Olivier Catoni(
Book
)
8 editions published between 2007 and 2008 in English and held by 64 WorldCat member libraries worldwide
This ebook is the product of Project Euclid and its mission to advance scholarly communication in the field of theoretical and applied mathematics and statistics. Project Euclid was developed and deployed by the Cornell University Library and is jointly managed by Cornell and the Duke University Press
Étude asymptotique des algorithmes de recuit simulé by
Olivier Catoni(
Book
)
2 editions published in 1990 in French and held by 4 WorldCat member libraries worldwide
Simulated annealing algorithms are an approximate optimization method in which the state space is explored by an inhomogeneous Markov chain. We study the Metropolis dynamics on a finite space using large deviation methods. A decomposition of the space into cycles, following Wentzell and Freidlin, leads to estimating the law of the time and point of entry into a set for trajectories that have remained in another given set. The proof, by induction, establishes that these estimates are preserved under composition of the laws by inner tensor product. Applications follow: a complement to Hajek's theorem on the necessary and sufficient conditions for convergence, an upper bound on the convergence speed, and estimates of the law of the system for various temperature sequences. We show that the optimal temperature sequence for a distant finite horizon decreases as the inverse of the logarithm in its first part, but in general not in the part close to the horizon. By contrast, a convergence rate that is asymptotically optimal in the sense of logarithmic equivalents is obtained with geometric temperature sequences whose rate is suitably adapted to the horizon
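The dynamics summarized in the abstract above, an inhomogeneous Metropolis chain whose temperature is driven down over time, can be sketched as follows. The toy energy landscape, neighborhood structure, and schedule parameters are invented for illustration; they are not taken from the thesis.

```python
import math
import random

def energy(x):
    # Invented toy landscape on {0, ..., 20}: local minimum at x = 5 (value 1),
    # barrier at x = 10 (value 3), global minimum at x = 15 (value 0).
    if x <= 5:
        return 1.0 + (5 - x)
    if x <= 10:
        return 1.0 + 0.4 * (x - 5)
    if x <= 15:
        return 3.0 - 0.6 * (x - 10)
    return float(x - 15)

def neighbors(x):
    # Nearest-neighbor moves on the finite state space {0, ..., 20}.
    return [max(0, x - 1), min(20, x + 1)]

def simulated_annealing(x0, T0=2.0, rho=0.999, steps=20000, seed=0):
    # Metropolis dynamics with a geometric cooling schedule T_k = T0 * rho**k.
    rng = random.Random(seed)
    x, T = x0, T0
    best, best_e = x, energy(x)
    for _ in range(steps):
        y = rng.choice(neighbors(x))
        dE = energy(y) - energy(x)
        # Accept downhill moves always; uphill moves with probability exp(-dE/T).
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            x = y
            if energy(x) < best_e:
                best, best_e = x, energy(x)
        T *= rho  # geometric cooling
    return best

print(simulated_annealing(x0=5))
```

Starting in the local minimum, the high initial temperature lets the chain climb the barrier early on, after which cooling freezes it near the global minimum; the thesis makes this trade-off precise for schedules adapted to the horizon.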
Méthodes statistiques pour la modélisation du langage naturel by
Jean-Philippe Vert(
Book
)
1 edition published in 2001 in English and held by 3 WorldCat member libraries worldwide
Méthodes d'accélération pour les chaînes de Markov à transitions exponentielles by Cécile Cot(
Book
)
1 edition published in 1998 in French and held by 3 WorldCat member libraries worldwide
This thesis presents four types of acceleration for classical stochastic optimization algorithms such as the Metropolis or Glauber dynamics. All are based on the idea of reducing the size of the active state space in each phase of the algorithm. In the first part, we study simulated annealing with decreasing triangular geometric temperature schedules that are constant on plateaus. We show that such schedules achieve the optimal exponent for the convergence speed. In the second part, we build a potentially parallel Markovian algorithm, called the multiresolution algorithm. It uses local, multiple trials of the Metropolis algorithm. By adjusting the number of trials (resp. the size of the localization windows), one can lower the local (resp. global) potential barrier. In the third part, we work on a space that can be organized hierarchically into several resolution levels. We define a Markovian algorithm, called the hierarchical algorithm, in which each low-resolution search is validated by a high-resolution optimization. This algorithm partially samples the Gibbs law. It is a preprocessing step for the standard Metropolis algorithm. We compute an effective time gain on the metastability problem of the Ising model. The fourth, experimental part accompanies the theoretical study of the preceding algorithms. We illustrate their performance on the restoration of video sequences corrupted by Gaussian noise. We propose a model of dichotomic reconstruction of level lines, introduced by O. Catoni, taking into account Gaussian noise and temporal correlations
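The schedules studied in the first part above, geometric in temperature and constant on plateaus (and re-tuned to each finite horizon, hence "triangular"), can be illustrated with a small generator. The function and parameter names here are hypothetical.

```python
def plateau_geometric_schedule(T0, rho, plateau, n_steps):
    """Piecewise-constant cooling schedule: hold each temperature for
    `plateau` steps, then multiply it by rho (0 < rho < 1), producing a
    decreasing geometric staircase of total length n_steps.

    Illustrative sketch only; the thesis analyzes when schedules of this
    shape achieve the optimal convergence-rate exponent.
    """
    T, out = T0, []
    while len(out) < n_steps:
        out.extend([T] * min(plateau, n_steps - len(out)))
        T *= rho
    return out

print(plateau_geometric_schedule(T0=1.0, rho=0.5, plateau=3, n_steps=8))
```

For a given horizon, one would choose `T0`, `rho`, and `plateau` as functions of `n_steps`; that dependence on the horizon is exactly what makes the family of schedules triangular.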
Transductive and inductive adaptative inference for regression and density estimation by
Pierre Alquier(
Book
)
1 edition published in 2006 in English and held by 2 WorldCat member libraries worldwide
Adaptive, Inductive and Transductive Inference for Regression and Density Estimation (Pierre Alquier). This thesis studies the statistical properties of certain learning algorithms in the setting of regression and density estimation. It is divided into three parts. The first part generalizes Olivier Catoni's PAC-Bayesian theorems on classification to the case of regression with a general loss function. The second part focuses on least-squares regression and proposes a new variable selection algorithm. Applied to a basis of orthonormal functions, this method yields optimal convergence rates; applied to kernel-type functions, it leads to a variant of the so-called support vector machines (SVM). The third part extends the results of the second to density estimation with quadratic loss
Théorie statistique de l'apprentissage : une approche PAC-bayésienne by
Jean-Yves Audibert(
Book
)
1 edition published in 2004 in English and held by 2 WorldCat member libraries worldwide
Statistical Learning Theory and Stochastic Optimization : École d'Été de Probabilités de Saint-Flour XXXI - 2001(
)
1 edition published in 2004 in English and held by 2 WorldCat member libraries worldwide
Exposé de synthèse en vue de la soutenance d'une habilitation à diriger des recherches : spécialité : Mathématiques by
Olivier Catoni(
Book
)
1 edition published in 1997 in French and held by 2 WorldCat member libraries worldwide
Numéro spécial en l'honneur de J. Bretagnolle, D. Dacunha-Castelle, I. Ibragimov(
Book
)
1 edition published in 2002 in English and held by 2 WorldCat member libraries worldwide
Image restoration by stochastic dichotomic reconstruction of contour lines by
Olivier Catoni(
)
1 edition published in 1992 in English and held by 1 WorldCat member library worldwide
Statistical learning theory and stochastic optimization : École d'été de probabilités de Saint-Flour by
Olivier Catoni(
Book
)
1 edition published in 2004 in English and held by 1 WorldCat member library worldwide
Metropolis, simulated annealing and I.E.T. algorithms : theory and experiments by
Olivier Catoni(
Book
)
1 edition published in 1996 in English and held by 1 WorldCat member library worldwide
Piecewise constant triangular cooling schedules for generalized simulated annealing algorithms by Cécile Cot(
Book
)
1 edition published in 1996 in English and held by 1 WorldCat member library worldwide
Processus de substitution markoviens : un modèle statistique pour la linguistique by
Thomas Mainguy(
)
1 edition published in 2014 in English and held by 1 WorldCat member library worldwide
This thesis proposes a new approach to natural language processing. Rather than trying to estimate directly the probability distribution of a random sentence, we will detect syntactic structures in the language, which can be used to modify and create new sentences from an initial sample. The study of syntactic structures will be done using Markov substitute sets: sets of strings that can be freely substituted in any sentence without affecting the whole distribution. These sets define the notion of Markov substitute processes, modelling conditional independence of certain substrings (given by the sets) with respect to their context. This point of view splits the issue of language analysis into two parts: a model selection stage, where Markov substitute sets are selected, and a parameter estimation stage, where the actual frequencies for each set are estimated. We show that these substitute processes form exponential families of distributions when the language structure (the Markov substitute sets) is fixed. On the other hand, when the language structure is unknown, we propose methods to identify Markov substitute sets from a statistical sample, and to estimate the parameters of the distribution. Markov substitute sets show some connections with context-free grammars, which can be used to help the analysis. We then proceed to build invariant dynamics for Markov substitute processes. They can, among other things, be used to effectively compute the maximum likelihood estimate. Indeed, Markov substitute models can be seen as the thermodynamic limit of the invariant measure of crossing-over dynamics
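The substitution mechanism described above can be illustrated with a toy sketch: given assumed Markov substitute sets, new sentences are produced from an initial one by swapping members of the same set. The sets and the sentence below are invented examples, not data or code from the thesis.

```python
# Toy Markov substitute sets: each set lists strings that are assumed freely
# interchangeable inside any sentence without changing the sentence distribution.
SUBSTITUTE_SETS = [
    {"the cat", "the dog", "a bird"},
    {"sleeps", "runs"},
]

def substitutions(sentence, sets):
    """Return all sentences reachable by substituting one occurrence of a
    substitute-set member with another member of the same set."""
    out = set()
    for s in sets:
        for a in s:
            if a in sentence:
                for b in s:
                    if b != a:
                        out.add(sentence.replace(a, b, 1))
    return out

print(sorted(substitutions("the cat sleeps quietly", SUBSTITUTE_SETS)))
```

Iterating such substitutions from a sample generates new sentences consistent with the assumed structure; estimating which sets are genuinely Markov substitute sets, and with what frequencies, is the statistical problem the thesis addresses.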
Estimation par tests by
Mathieu Sart(
)
1 edition published in 2013 in French and held by 1 WorldCat member library worldwide
This thesis deals with the estimation of functions from tests in three statistical settings. We begin by studying the problem of estimating the intensities of Poisson processes with covariates. We prove a general model selection theorem from which we derive non-asymptotic risk bounds under various assumptions on the target function. We then propose two procedures to estimate the transition density of a homogeneous Markov chain. The first one selects an estimator among a collection of piecewise constant estimators. The selected estimator is shown to satisfy an oracle-type inequality under minimal assumptions on the Markov chain, which allows us to deduce uniform rates of convergence over balls of inhomogeneous Besov spaces. Besides, the estimator is adaptive with respect to the smoothness of the transition density. We also evaluate the performance of the estimator in practice by carrying out numerical simulations. The second procedure is only of theoretical interest but yields a general model selection theorem from which we derive rates of convergence under more general assumptions on the transition density. Finally, we propose a new parametric estimator of a density. We upper-bound its risk under assumptions for which the maximum likelihood method may not work. The simulations show that these two estimators are very close when the model is true and regular enough. However, contrary to the maximum likelihood estimator, this estimator is robust
Gibbs estimators by
Olivier Catoni(
Book
)
1 edition published in 1998 in English and held by 1 WorldCat member library worldwide
A mixture approach to universal model selection by
Olivier Catoni(
Book
)
1 edition published in 1997 in English and held by 1 WorldCat member library worldwide
Solving scheduling problems by simulated annealing by
Olivier Catoni(
Book
)
1 edition published in 1996 in English and held by 1 WorldCat member library worldwide
Statistical Learning Theory and Stochastic Optimization : École d'Été de Probabilités de Saint-Flour XXXI - 2001(
)
1 edition published in 2004 in English and held by 0 WorldCat member libraries worldwide
Statistical learning theory is aimed at analyzing complex data with necessarily approximate models. This book is intended for an audience with a graduate background in probability theory and statistics. It will be useful to any reader wondering why it may be a good idea, as is often done in practice, to use a notoriously "wrong" (i.e. oversimplified) model to predict, estimate or classify. This point of view takes its roots in three fields: information theory, statistical mechanics, and PAC-Bayesian theorems. Results on the large deviations of trajectories of Markov chains with rare transitions are also included. They are meant to provide a better understanding of stochastic optimization algorithms of common use in computing estimators. The author focuses on non-asymptotic bounds of the statistical risk, allowing one to choose adaptively between rich and structured families of models and corresponding estimators. Two mathematical objects pervade the book: entropy and Gibbs measures. The goal is to show how to turn them into versatile and efficient technical tools that will stimulate further studies and results