
Statistical learning theory and stochastic optimization : Ecole d'Eté de Probabilités de Saint-Flour XXXI-2001

Author: Olivier Catoni; Jean Picard; LINK (Online service)
Publisher: Berlin : Springer-Verlag, ©2004.
Series: Lecture notes in mathematics (Springer-Verlag), 1851.
Edition/Format: eBook : Document : Conference publication : English
Database: WorldCat
Summary:
Statistical learning theory is aimed at analyzing complex data with necessarily approximate models. This book is intended for an audience with a graduate background in probability theory and statistics. It will be useful to any reader wondering why it may be a good idea to use, as is often done in practice, a notoriously "wrong" (i.e. over-simplified) model to predict, estimate or classify. This point of view takes its roots in three fields: information theory, statistical mechanics, and PAC-Bayesian theorems. Results on the large deviations of trajectories of Markov chains with rare transitions are also included. They are meant to provide a better understanding of stochastic optimization algorithms of common use in computing estimators. The author focuses on non-asymptotic bounds of the statistical risk, allowing one to choose adaptively between rich and structured families of models and corresponding estimators. Two mathematical objects pervade the book: entropy and Gibbs measures. The goal is to show how to turn them into versatile and efficient technical tools that will stimulate further studies and results.
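The Gibbs measures and PAC-Bayesian bounds the summary alludes to can be sketched as follows, purely for orientation and not as the book's exact statements. Given a prior $\pi$ on a parameter set $\Theta$, an empirical risk $r(\theta)$ computed on an $n$-point sample, and an inverse temperature $\beta > 0$, the Gibbs posterior is

```latex
\rho_\beta(d\theta) \;=\;
  \frac{e^{-\beta r(\theta)}\,\pi(d\theta)}
       {\int_\Theta e^{-\beta r(\theta')}\,\pi(d\theta')},
```

and a typical PAC-Bayesian deviation inequality (here in the standard McAllester form, which the book refines) states that with probability at least $1-\varepsilon$, simultaneously for every posterior $\rho$,

```latex
\mathbb{E}_{\theta\sim\rho}\,R(\theta) \;\le\;
  \mathbb{E}_{\theta\sim\rho}\,r(\theta)
  + \sqrt{\frac{\operatorname{KL}(\rho\,\|\,\pi) + \ln\frac{2\sqrt{n}}{\varepsilon}}{2n}},
```

where $R$ is the true risk and $\operatorname{KL}(\rho\,\|\,\pi)$ is the relative entropy, the "entropy" the summary refers to; the Gibbs posterior $\rho_\beta$ is the measure minimizing the right-hand side's trade-off between empirical risk and divergence from the prior.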

Subjects: Mathematical statistics
Details

Genre/Form: Electronic books
Conference proceedings
Congresses
Additional Physical Format: Print version:
Catoni, Olivier.
Statistical learning theory and stochastic optimization.
Berlin : Springer-Verlag, ©2004
(DLC) 2004109143
(OCoLC)56714791
Material Type: Conference publication, Document, Internet resource
Document Type: Internet Resource, Computer File
All Authors / Contributors: Olivier Catoni; Jean Picard; LINK (Online service)
ISBN: 9783540445074 3540445072
OCLC Number: 56508135
Notes: " ... 31st Probability Summer School in Saint-Flour (July 8-25, 2001) ..."--Preface.
Description: 1 online resource (viii, 272 pages) : illustrations.
Contents: Universal Lossless Data Compression --
Links Between Data Compression and Statistical Estimation --
Non Cumulated Mean Risk --
Gibbs Estimators --
Randomized Estimators and Empirical Complexity --
Deviation Inequalities --
Markov Chains with Exponential Transitions --
References --
Index.
Series Title: Lecture notes in mathematics (Springer-Verlag), 1851.
Other Titles: Ecole d'Eté de Probabilités de Saint-Flour XXXI-2001
Responsibility: Olivier Catoni ; editor, Jean Picard.

Reviews

Editorial reviews

Publisher synopsis

From the reviews: "This book is based on a course of lectures given by the author on a circle of ideas lying at the interface of information theory, statistical learning theory and statistical …"



Linked Data


<http://www.worldcat.org/oclc/56508135>
    library:oclcnum "56508135"
    library:placeOfPublication
    library:placeOfPublication
    owl:sameAs <info:oclcnum/56508135>
    rdf:type schema:Book
    schema:about
    schema:about
    schema:about
    schema:about
    schema:about <http://id.worldcat.org/fast/1012127>
        rdf:type schema:Intangible
        schema:name "Mathematical statistics"@en
        schema:name "Mathematical statistics."@en
    schema:about
    schema:about
    schema:bookFormat schema:EBook
    schema:contributor
    schema:contributor
    schema:contributor
    schema:copyrightYear "2004"
    schema:creator
    schema:datePublished "2004"
    schema:description "Universal Lossless Data Compression -- Links Between Data Compression and Statistical Estimation -- Non Cumulated Mean Risk -- Gibbs Estimators -- Randomized Estimators and Empirical Complexity -- Deviation Inequalities -- Markov Chains with Exponential Transitions -- References -- Index."@en
    schema:description "Statistical learning theory is aimed at analyzing complex data with necessarily approximate models. This book is intended for an audience with a graduate background in probability theory and statistics. It will be useful to any reader wondering why it may be a good idea, to use as is often done in practice a notoriously "wrong'' (i.e. over-simplified) model to predict, estimate or classify. This point of view takes its roots in three fields: information theory, statistical mechanics, and PAC-Bayesian theorems. Results on the large deviations of trajectories of Markov chains with rare transitions are also included. They are meant to provide a better understanding of stochastic optimization algorithms of common use in computing estimators. The author focuses on non-asymptotic bounds of the statistical risk, allowing one to choose adaptively between rich and structured families of models and corresponding estimators. Two mathematical objects pervade the book: entropy and Gibbs measures. The goal is to show how to turn them into versatile and efficient technical tools, that will stimulate further studies and results."@en
    schema:exampleOfWork <http://worldcat.org/entity/work/id/14473764>
    schema:genre "Conference proceedings"@en
    schema:genre "Conference proceedings."@en
    schema:genre "Electronic books."@en
    schema:inLanguage "en"
    schema:name "Ecole d'Eté de Probabilités de Saint-Flour XXXI-2001"@en
    schema:name "Statistical learning theory and stochastic optimization Ecole d'Eté de Probabilités de Saint-Flour XXXI-2001"@en
    schema:publisher
    schema:url <http://dx.doi.org/10.1007/b99352>
    schema:url <http://rave.ohiolink.edu/ebooks/ebc/11305972>
    schema:url <http://www.springerlink.com/openurl.asp?genre=issue&issn=0075-8434&volume=1851>
    schema:url
    schema:url <http://springerlink.metapress.com/link.asp?id=MT30WFT73522>
    schema:workExample

Content-negotiable representations
