Härdle, Wolfgang
Overview
Works: 501 works in 1,413 publications in 3 languages and 16,758 library holdings

Genres: Conference papers and proceedings; Handbooks and manuals; Examinations; Charts, diagrams, etc.
Roles: Author, Editor, Other, Contributor, Creator, Degree supervisor (dgs), Redactor
Most widely held works by Wolfgang Härdle
Applied nonparametric regression by Wolfgang Härdle (Book)
51 editions published between 1990 and 2002 in English and Undetermined and held by 752 WorldCat member libraries worldwide
Applied Nonparametric Regression is the first book to bring together in one place the techniques for regression curve smoothing involving more than one variable. The computer and the development of interactive graphics programs have made curve estimation possible. This volume focuses on the applications and practical problems of two central aspects of curve smoothing: the choice of smoothing parameters and the construction of confidence bounds. Härdle argues that all smoothing methods are based on a local averaging mechanism and can be seen as essentially equivalent to kernel smoothing. To simplify the exposition, kernel smoothers are introduced and discussed in great detail. Building on this exposition, various other smoothing methods (among them splines and orthogonal polynomials) are presented and their merits discussed. All the methods presented can be understood on an intuitive level; however, exercises and supplemental materials are provided for those readers desiring a deeper understanding of the techniques. The methods covered in this text have numerous applications in many areas using statistical analysis. Examples are drawn from economics as well as from other disciplines including medicine and engineering
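The local-averaging mechanism described above, of which kernel smoothing is the canonical case, can be sketched in a few lines. This is a hypothetical illustration, not code from the book; the Gaussian kernel, the bandwidth h and the toy data are illustrative choices:

```python
import numpy as np

def nadaraya_watson(x_query, x, y, h):
    """Kernel regression estimate at the points x_query.

    Each estimate is a local weighted average of the responses y,
    with Gaussian kernel weights that decay with distance from the
    query point; h is the smoothing bandwidth.
    """
    u = (x_query[:, None] - x[None, :]) / h   # scaled pairwise distances
    w = np.exp(-0.5 * u**2)                   # Gaussian kernel weights
    return (w @ y) / w.sum(axis=1)            # local weighted average

# Toy example: recover a sine curve from noisy observations
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 2 * np.pi, 200))
y = np.sin(x) + rng.normal(0.0, 0.3, 200)
grid = np.linspace(0.5, 5.5, 50)
fit = nadaraya_watson(grid, x, y, h=0.3)
```

The choice of h controls the bias-variance trade-off that the book's discussion of smoothing-parameter selection addresses: a small h tracks the noise, a large h flattens the curve.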
Applied multivariate statistical analysis by Wolfgang Härdle (Book)
67 editions published between 2003 and 2015 in 4 languages and held by 598 WorldCat member libraries worldwide
"A state of the art presentation of the tools and concepts of multivariate data analysis with a strong focus on applications. The first part is devoted to graphical techniques describing the distributions of the involved variables. The second part deals with multivariate random variables and presents distributions, estimators and tests for various practical situations. The last part covers multivariate techniques and introduces the reader into the wide variety of tools for multivariate data analysis."Jacket
Statistics of financial markets : an introduction by Jürgen Franke (Book)
55 editions published between 2004 and 2015 in English and held by 513 WorldCat member libraries worldwide
"Statistics of Financial Markets presents in a vivid yet concise style the necessary statistical and mathematical background for Financial Engineers and introduces to the main ideas in mathematical finance and financial statistics. Topics covered are, among others, option valuation, financial time series analysis, valueatrisk, copulas, and statistics of the extremes."BOOK JACKET
Applied quantitative finance : theory and computational tools by Wolfgang Härdle (Book)
39 editions published between 2002 and 2017 in English and German and held by 383 WorldCat member libraries worldwide
This volume provides practical solutions and introduces recent theoretical developments in risk management, pricing of credit derivatives, quantification of volatility and copula modeling. The third edition is devoted to modern risk analysis based on quantitative methods and textual analytics to meet the current challenges in banking and finance. It includes 14 new contributions and presents a comprehensive, state-of-the-art treatment of cutting-edge methods and topics, such as collateralized debt obligations, the high-frequency analysis of market liquidity, and realized volatility. The book is divided into three parts: Part 1 revisits important market risk issues, while Part 2 introduces novel concepts in credit risk and its management along with updated quantitative methods. The third part discusses the dynamics of risk management and includes risk analysis of energy markets and of cryptocurrencies. Digital assets, such as blockchain-based currencies, have become popular but are theoretically challenging when approached with conventional methods. Among other methods, the book introduces a modern text-mining method called dynamic topic modeling in detail and applies it to a Bitcoin message board. The unique synthesis of theory and practice supported by computational tools is reflected not only in the selection of topics, but also in the fine balance of scientific contributions on practical implementation and theoretical concepts. This link between theory and practice offers theoreticians insights into considerations of applicability and, vice versa, provides practitioners convenient access to new techniques in quantitative finance. Hence the book will appeal both to researchers, including master's and PhD students, and to practitioners, such as financial engineers. The results presented in the book are fully reproducible, and all quantlets needed for the calculations are provided on an accompanying website.
The Quantlet platform (quantlet.de, quantlet.com, quantlet.org) is an integrated QuantNet environment consisting of different types of statistics-related documents and program code. Its goal is to promote reproducibility and to offer a platform for sharing validated knowledge native to the social web. QuantNet and the corresponding Data-Driven Documents (D3)-based visualization allow readers to reproduce the tables, pictures and calculations in this Springer book.
Smoothing techniques : with implementation in S by Wolfgang Härdle (Book)
14 editions published between 1990 and 1991 in English and held by 341 WorldCat member libraries worldwide
The author presents a non-technical introduction to the area of nonparametric density and regression function estimation. The application of these methods is discussed in terms of the S computing environment. Smoothing in high dimensions faces the problem of data sparseness: a principal feature of smoothing, the averaging of data points in a prescribed neighborhood, is not really practicable in dimensions greater than three if we have just one hundred data points. Additive models provide a way out of this dilemma, but because they are fitted iteratively and recursively, they require highly efficient algorithms. For this purpose, the method of WARPing (Weighted Averaging using Rounded Points) is described in great detail.
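The WARPing idea, binning the data into fine sub-bins ("rounding") and then averaging the bin counts with discretized kernel weights, can be sketched roughly as follows. This is a schematic reconstruction of the technique, not code from the book; the triangular kernel, the grid and the bandwidth are illustrative choices:

```python
import numpy as np

def warping_density(x, grid, h, m=10):
    """WARPing density estimate: round observations to small bins of
    width h/m, then average the bin counts with triangular kernel
    weights spread over the 2m-1 neighbouring bins."""
    n = len(x)
    delta = h / m                       # width of the small bins
    origin = x.min() - h                # left edge of the bin lattice
    idx = np.floor((x - origin) / delta).astype(int)   # "rounding" step
    counts = np.bincount(idx, minlength=idx.max() + 2 * m)
    # Discretized triangular (Bartlett) kernel on the bin lattice,
    # normalised so the estimate integrates to one
    k = np.arange(-(m - 1), m)
    w = 1.0 - np.abs(k) / m
    w = w / (w.sum() * delta * n)
    est = np.convolve(counts, w, mode="same")
    # Read off the estimate at the requested grid points
    gi = np.clip(np.floor((grid - origin) / delta).astype(int), 0, len(est) - 1)
    return est[gi]

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, 2000)
grid = np.linspace(-4.0, 4.0, 161)
dens = warping_density(data, grid, h=0.5)
```

The computational point of WARPing is that the kernel weights are applied to a short vector of bin counts rather than to all n observations, which is what makes smoothing feasible in interactive environments like S.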
Wavelets, approximation, and statistical applications by Wolfgang Härdle (Book)
16 editions published in 1998 in English and held by 337 WorldCat member libraries worldwide
The mathematical theory of wavelets was developed by Yves Meyer and many collaborators about ten years ago. It was designed for the approximation of possibly irregular functions and surfaces and was successfully applied in data compression, turbulence analysis, and image and signal processing. Five years ago, wavelet theory progressively emerged as a powerful framework for nonparametric statistical problems, and efficient computational implementations began to appear in the nineties. This book brings together these three streams of wavelet theory and introduces the novice to these aspects. Readers interested in the theory and construction of wavelets will find, in condensed form, results that are scattered across the research literature. Practitioners will be able to use wavelets via the available software code.
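The simplest concrete instance of the wavelet machinery the blurb alludes to is the Haar transform: repeated pairwise averaging and differencing. A minimal sketch (illustrative only; the example signal is made up, and the input length is assumed to be a power of two):

```python
import numpy as np

def haar_dwt(signal):
    """Full Haar discrete wavelet transform (orthonormal version).

    Repeatedly splits the signal into pairwise averages (smooth part)
    and pairwise differences (detail part), both scaled by 1/sqrt(2)
    so that the transform preserves the signal's energy. Returns the
    detail coefficients per level, coarsest last, plus the final
    smooth coefficient.
    """
    coeffs = []
    s = np.asarray(signal, dtype=float)
    while len(s) > 1:
        avg = (s[0::2] + s[1::2]) / np.sqrt(2.0)   # smooth part
        det = (s[0::2] - s[1::2]) / np.sqrt(2.0)   # detail part
        coeffs.append(det)
        s = avg
    coeffs.append(s)   # final smooth coefficient
    return coeffs

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
coeffs = haar_dwt(signal)
```

Compression and denoising, two of the applications mentioned above, amount to shrinking or discarding the small detail coefficients before inverting the transform.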
Nonparametric and semiparametric models by Wolfgang Härdle (Book)
12 editions published between 2004 and 2013 in English and held by 296 WorldCat member libraries worldwide
Nonparametric smoothing is a central idea in statistics that aims to simultaneously estimate and model the underlying structure. The book considers high-dimensional objects such as density functions and regression functions. The semiparametric modeling technique balances the two aims, flexibility and simplicity of statistical procedures, by introducing partial parametric components. These components allow one to match structural conditions, such as linearity in some variables, and may be used to model the influence of discrete variables. The aim of this monograph is to present the statistical and mathematical principles of smoothing with a focus on applicable techniques. The necessary mathematical treatment is easily understandable, and a wide variety of interactive smoothing examples are given. The book naturally splits into two parts: nonparametric models (histogram, kernel density estimation, nonparametric regression) and semiparametric models (generalized regression, single index models, generalized partial linear models, additive and generalized additive models). The first part is intended for undergraduate students majoring in mathematics, statistics, econometrics or biometrics, whereas the second part is intended for master's and PhD students and researchers. The material is easy to work through, since the e-book character of the text gives a maximum of flexibility in learning (and teaching) intensity.
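Kernel density estimation, the second of the nonparametric building blocks listed above, places a small kernel bump at each observation and averages them. A minimal sketch (the Gaussian kernel, bandwidth and simulated data are illustrative, not taken from the book):

```python
import numpy as np

def kde_gaussian(grid, data, h):
    """Gaussian kernel density estimate on a grid.

    Each data point contributes a normal bump of bandwidth h;
    the estimate is the average of the n bumps, rescaled by 1/h
    so that it integrates to one.
    """
    u = (grid[:, None] - data[None, :]) / h
    bumps = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)   # standard normal kernel
    return bumps.mean(axis=1) / h

rng = np.random.default_rng(7)
sample = rng.normal(0.0, 1.0, 5000)
xs = np.linspace(-4.0, 4.0, 161)
dens = kde_gaussian(xs, sample, h=0.3)
```

The histogram is the cruder cousin of this estimator (a box kernel anchored to fixed bins); the semiparametric models in the book's second part reuse the same smoother inside partially parametric specifications.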
Robust and nonlinear time series analysis : proceedings of a workshop organized by the Sonderforschungsbereich 123 "Stochastische Mathematische Modelle", Heidelberg 1983 by Jürgen Franke (Book)
18 editions published in 1984 in English and Undetermined and held by 285 WorldCat member libraries worldwide
Classical time series methods are based on the assumption that a particular stochastic process model generates the observed data. The most commonly used assumption is that the data are a realization of a stationary Gaussian process. However, since the Gaussian assumption is a fairly stringent one, it is frequently replaced by the weaker assumption that the process is wide-sense stationary and that only the mean and covariance sequence are specified. This approach of specifying the probabilistic behavior only up to "second order" has of course been extremely popular from a theoretical point of view, because it has allowed one to treat a large variety of problems, such as prediction, filtering and smoothing, using the geometry of Hilbert spaces. While the literature abounds with a variety of optimal estimation results based on either the Gaussian assumption or the specification of second-order properties, time series workers have not always believed in the literal truth of either the Gaussian or the second-order specification. They have nonetheless stressed the importance of such optimality results, probably for two main reasons. First, the results come from a rich and very workable theory. Second, researchers often relied on a vague belief in a kind of continuity principle, according to which the results of time series inference would change only a small amount if the actual model deviated only a small amount from the assumed model.
Statistical tools for finance and insurance by Pavel Čižek (Book)
39 editions published between 2004 and 2011 in English and held by 275 WorldCat member libraries worldwide
Statistical Tools for Finance and Insurance presents ready-to-use solutions, theoretical developments and method construction for many practical problems in quantitative finance and insurance. Written by practitioners and leading academics in the field, this book offers a unique combination of topics from which every market analyst and risk manager will benefit.
Handbook of computational statistics : concepts and methods by James E. Gentle (Book)
22 editions published between 2004 and 2012 in English and held by 256 WorldCat member libraries worldwide
The Handbook of Computational Statistics: Concepts and Methods (second edition) is a revision of the first edition published in 2004, and contains additional comments and updated information on the existing chapters, as well as three new chapters addressing recent work in the field of computational statistics. This new edition is divided into four parts in the same way as the first edition. It begins with "How Computational Statistics Became the Backbone of Modern Data Science" (Ch. 1): an overview of the field of computational statistics, how it emerged as a separate discipline, and how its own development mirrored that of hardware and software, including a discussion of current active research. The second part (Chs. 2-15) presents several topics in the supporting field of statistical computing. Emphasis is placed on the need for fast and accurate numerical algorithms, and some of the basic methodologies for transformation, database handling, high-dimensional data and graphics treatment are discussed. The third part (Chs. 16-33) focuses on statistical methodology. Special attention is given to smoothing, iterative procedures, simulation and visualization of multivariate data. Lastly, a set of selected applications (Chs. 34-38), such as bioinformatics, medical imaging, finance, econometrics and network intrusion detection, highlights the usefulness of computational statistics in real-world applications.
Measuring risk in complex stochastic systems (Book)
14 editions published in 2000 in English and held by 249 WorldCat member libraries worldwide
This collection of articles by leading researchers will be of interest to people working in the area of mathematical finance
Multivariate statistics : exercises and solutions by Wolfgang Härdle (Book)
34 editions published between 2007 and 2015 in English and German and held by 231 WorldCat member libraries worldwide
The authors present tools and concepts of multivariate data analysis by means of exercises and their solutions. The first part is devoted to graphical techniques. The second part deals with multivariate random variables and presents the derivation of estimators and tests for various practical situations. The last part introduces a wide variety of exercises in applied multivariate data analysis. The book demonstrates the application of simple calculus and basic multivariate methods in real-life situations. It contains altogether 234 solved exercises, which can assist a university teacher in setting up a modern multivariate analysis course. All computer-based exercises are available in the R or XploRe languages. The corresponding libraries are downloadable from the SpringerLink web pages and from the authors' home pages. Wolfgang Härdle is Professor of Statistics at Humboldt-Universität zu Berlin. He studied mathematics, computer science and physics at the University of Karlsruhe and received his Dr. rer. nat. at the University of Heidelberg. He later held positions at Frankfurt and Bonn before becoming professeur ordinaire at the Université Catholique de Louvain. His current research topics are the modelling of implied volatilities and the quantitative analysis of financial markets. Zdeněk Hlávka studied mathematics at Charles University in Prague and biostatistics at the Limburgs Universitair Centrum in Diepenbeek. He later held a position at Humboldt-Universität zu Berlin before becoming a member of the Department of Probability and Mathematical Statistics at Charles University in Prague.
Statistical methods for biostatistics and related fields by Wolfgang Härdle (Book)
16 editions published between 2006 and 2007 in English and held by 220 WorldCat member libraries worldwide
Biostatistics is one of the scientific fields in which recent developments have been extremely important. It is also strongly related to other scientific disciplines involving statistical methodology. The aim of this book is to cover a wide scope of recent statistical methods used by scientists in biostatistics as well as in related fields such as chemometrics, environmetrics, and geophysics. The contributed papers present various statistical methodologies together with a selection of their main mathematical properties and their applications in real case studies, making this book of interest to a wide audience of researchers and students in statistics. Each method is accompanied by interactive and automatic XploRe routines, available online, allowing readers to reproduce the proposed examples or to apply the methods to their own real datasets.
Partially linear models by Wolfgang Härdle (Book)
14 editions published in 2000 in English and held by 198 WorldCat member libraries worldwide
In the last ten years there has been increasing interest and activity in the general area of partially linear regression smoothing in statistics, and many methods and techniques have been proposed and studied. This monograph aims to provide an up-to-date presentation of the state of the art of partially linear regression techniques. The emphasis is on methodologies rather than on theory, with a particular focus on applications of partially linear regression techniques to various statistical problems. These problems include least squares regression, asymptotically efficient estimation, bootstrap resampling, censored data analysis, linear measurement error models, nonlinear measurement models, and nonlinear and nonparametric time series models.
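The partially linear model combines a linear part x'beta with an unknown smooth function g(t). One standard estimation idea (a Robinson-type two-step approach; the sketch below is an illustration on synthetic data, not code from the monograph, and the bandwidth and sample size are arbitrary choices) is to kernel-smooth both y and x on t and then run ordinary least squares on the residuals:

```python
import numpy as np

def nw_smooth(t, z, h):
    """Nadaraya-Watson kernel regression of z on t with a Gaussian kernel."""
    w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)
    return (w @ z) / w.sum(axis=1)

rng = np.random.default_rng(0)
n = 2000
t = rng.uniform(0, 1, n)                      # nonparametric covariate
x = rng.normal(size=n)                        # linear covariate
g = np.sin(2 * np.pi * t)                     # unknown smooth function g(t)
y = 2.0 * x + g + 0.3 * rng.normal(size=n)    # true beta = 2

# Step 1: remove the t-dependence from both y and x by kernel smoothing
y_res = y - nw_smooth(t, y, h=0.05)
x_res = x - nw_smooth(t, x, h=0.05)

# Step 2: OLS of the residuals recovers beta without ever estimating g directly
beta_hat = (x_res @ y_res) / (x_res @ x_res)
print(beta_hat)
```

With this simulated design the residual regression gives an estimate close to the true coefficient 2, illustrating why partialling out the nonparametric component first is attractive in practice.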
Handbook of data visualization by Chun-houh Chen (Book)
20 editions published between 2006 and 2008 in English and held by 155 WorldCat member libraries worldwide
"This new volume in the series Springer Handbooks of Computational Statistics gives an overview of modern data visualization methods, both in theory and practice. There are definitive chapters on modern graphical tools such as mosaic plots, parallel coordinate plots and linked views. There are chapters dedicated to graphical methodology for particular areas of statistics, for example Bayesian analysis, genomic data and cluster analysis, as well as chapters on software for graphics." (Book jacket)
Statistics of financial markets : exercises and solutions by Szymon Borak (Book)
31 editions published between 2004 and 2013 in English and held by 147 WorldCat member libraries worldwide
"Practice makes perfect. Therefore the best method of mastering models is working with them. This book contains a large collection of exercises and solutions which will help explain the statistics of financial markets. These practical examples are carefully presented and provide computational solutions to specific problems, all of which are calculated using R and Matlab. The book also introduces the corresponding Quantlets, the name given to these program codes, which follow the naming scheme SFSxyz123. The book is divided into three main parts, in which option pricing, time series analysis and advanced quantitative statistical techniques in finance are thoroughly discussed. Overall, the authors have successfully created the ideal balance between theoretical presentation and practical challenges." (Publisher's website)
The art of semiparametrics by Stefan Sperlich (Book)
15 editions published in 2006 in English and held by 112 WorldCat member libraries worldwide
This selection of articles emerged from work presented at the conference "The Art of Semiparametrics", held in Berlin in 2003. The idea was to bring together junior and senior researchers, as well as practitioners, working on semiparametric statistics in rather different fields. The meeting succeeded in welcoming a group representing a broad range of areas where research on, and with, semiparametric methods is going on, spanning mathematical statistics, econometrics, finance, business statistics, and more, and thus combining theoretical contributions with more applied and partly even empirical studies. Although each article represents an original contribution to its own field, all are written in a self-contained way that can be read by non-experts in the particular topic. This volume therefore offers a collection of individual works that together show the broad spectrum of present-day semiparametric statistics.
Handbook of computational finance by Jin-Chuan Duan (Book)
17 editions published between 2011 and 2012 in English and held by 63 WorldCat member libraries worldwide
Anything that is openly traded has a market price that may be more or less than its "fair" price. For shares of corporate stock, the fair price is likely to be some complicated function of the intrinsic current value (or "book" value) of identifiable assets owned by the company, the expected rate of growth, future dividends, and other factors. Some of the factors that affect the price can be measured at the time of a stock transaction, or at least within a relatively narrow time window that includes the time of the transaction. Most factors, however, relate to expectations about the future
Basics of modern mathematical statistics : exercises and solutions by Wolfgang Härdle (Book)
15 editions published between 2013 and 2016 in English and held by 62 WorldCat member libraries worldwide
The complexity of today's statistical data calls for modern mathematical tools, and many fields of science make use of mathematical statistics and require continuous updating on statistical technologies. Practice makes perfect, and mastering the tools makes them applicable. This book of exercises and solutions offers a wide range of applications and numerical solutions based on R. Its purpose is to provide statistics students with a number of basic exercises in modern mathematical statistics and an understanding of how the theory can be applied to real-world problems. The applied aspect matters because most previous exercise books concentrate on theoretical derivations; we also add some problems from topics often encountered in recent research papers. The book was written for statistics students with one or two years of coursework in mathematical statistics and probability, professors who teach courses in mathematical statistics, and researchers in other fields who would like to work through some exercises in mathematical statistics.
Copulae in mathematical and quantitative finance : proceedings of the workshop held in Cracow, 10-11 July 2012 by Piotr Jaworski (Book)
14 editions published in 2013 in English and held by 59 WorldCat member libraries worldwide
"Copulas are mathematical objects that fully capture the dependence structure among random variables and hence offer great flexibility in building multivariate stochastic models. Since their introduction in the early 1950s, copulas have gained considerable popularity in several fields of applied mathematics, especially finance and insurance. Today, copulas represent a well-recognized tool for market and credit models, aggregation of risks, and portfolio selection. Historically, the Gaussian copula model has been one of the most common models in credit risk. However, the recent financial crisis has underlined its limitations and drawbacks. In fact, despite their simplicity, Gaussian copula models severely underestimate the risk of the occurrence of joint extreme events. Recent theoretical investigations have put new tools for detecting and estimating dependence and risk (such as tail dependence and time-varying models) in the spotlight. All such investigations need to be further developed and promoted, a goal this book pursues. The book includes surveys that provide an up-to-date account of essential aspects of copula models in quantitative finance, as well as extended versions of talks selected from papers presented at the workshop in Cracow." (Page 4 of cover)
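The blurb's central claim, that Gaussian copulas understate joint extremes, can be checked by simulation: at the same correlation, a t copula puts visibly more mass in the joint upper tail. A minimal sketch (the parameters rho = 0.9, nu = 3 and the 99% threshold are illustrative assumptions, not values from the volume):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, rho, nu, u = 200_000, 0.9, 3, 0.99
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))

# Gaussian copula sample: correlated normals pushed through the normal CDF
z = rng.standard_normal((n, 2)) @ L.T
u_gauss = stats.norm.cdf(z)

# t copula sample: scale the same normals by sqrt(chi2/nu), then apply the t CDF
w = rng.chisquare(nu, size=(n, 1)) / nu
u_t = stats.t.cdf(z / np.sqrt(w), df=nu)

# Fraction of samples where BOTH margins exceed their 99% quantile
p_gauss = np.mean((u_gauss > u).all(axis=1))
p_t = np.mean((u_t > u).all(axis=1))
print(p_gauss, p_t)  # the t copula's joint tail probability is larger
```

The Gaussian copula has zero asymptotic tail dependence, so its joint-exceedance fraction shrinks toward independence as the threshold rises, while the t copula retains positive tail dependence; this is the mechanism behind the underestimation of joint extreme events mentioned above.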
Related Identities
 Franke, Jürgen 1952 Other Author Editor
 Simar, Léopold Other Author Editor
 Hafner, Christian Editor
 Čižek, Pavel 1973 Author Editor
 Weron, Rafał Editor
 Mori, Yuichi Editor
 Gentle, James E. 1943 Author Editor
 Hlávka, Z. (Zdeněk)
 López Cabrera, Brenda
 Borak, Szymon Author
Associated Subjects
Approximation theory; Asset-liability management; Banks and banking; Bioinformatics; Biometry; Business mathematics; Capital market--Mathematical models; Chemometrics; Commercial statistics; Computer graphics; Computer science--Mathematics; Computer vision; Copulas (Mathematical statistics); Derivative securities; Distribution (Probability theory); Ecology--Statistical methods; Econometrics; Economics; Economics, Mathematical; Electronic data processing; Environmental sciences--Statistical methods; Finance; Finance--Mathematical models; Finance--Statistical methods; Financial engineering; Geophysics; Geophysics--Statistical methods; Image analysis; Information visualization; Insurance--Mathematics; Insurance--Statistical methods; Investments--Mathematical models; Linear models (Statistics); Mathematical models; Mathematical statistics; Mathematical statistics--Data processing; Mathematics--Graphic methods--Data processing; Money market; Multivariate analysis; Nonparametric statistics; Pricing; Regression analysis; Risk management--Mathematical models; Risk--Mathematical models; Robust statistics; Smoothing (Statistics); Statistics; Statistics--Graphic methods; Time-series analysis; Wavelets (Mathematics)
Alternative Names
Haerdle, Wolfgang
Härdle, W.
Härdle, W. 1953
Härdle, W. (Wolfgang)
Hardle, Wolfgang
Härdle, Wolfgang K.
Härdle, Wolfgang K. 1953
Härdle, Wolfgang K. (Wolfgang Karl)
Härdle, Wolfgang Karl.
Härdle, Wolfgang Karl 1953
Wolfgang Härdle, German statistician and university professor