Henryk Gzyl, Loss Data Analysis: The Maximum Entropy Approach (De Gruyter, 2023)

This is an introductory course on loss data analysis, by which we mean the determination of the probability density of cumulative losses. Even though the main motivations come from applications in the insurance, banking and financial industries, the mathematical problem we shall deal with appears in many applications in engineering and the natural sciences, where a more appropriate name would be accumulated damage data analysis or systems reliability data analysis. Introductory does not mean simple: because the nature of the problems to be treated is complicated, some sophisticated tools may be required to deal with them. To conclude, we mention that all numerical examples were produced using R.

This volume deals with two complementary topics. On the one hand, the book deals with the problem of determining the probability distribution of a positive compound random variable, a problem which appears in the banking and insurance industries, in many areas of operational research, and in reliability problems in the engineering sciences. On the other hand, the methodology proposed to solve such problems, which is based on an application of the maximum entropy method to invert the Laplace transform of the distributions, can be applied to many other problems. The book contains applications to a large variety of problems, including the problem of dependence of the sample data used to estimate empirically the Laplace transform of the random variable.

• Presents tools for mathematical finance, engineering and the sciences.
• Includes real-world mathematical examples and examples developed using R.
• This new edition includes a chapter on a numerical approach to solving the capital allocation problem.

Some traditional approaches to the aggregation problem
5.1 General remarks

In the preceding chapters we considered some standard models for the two basic building blocks of the compound random variables used to model risk. First, we considered several instances of integer-valued random variables used to model the frequency of events; then, some of the positive, continuous random variables used to model the individual loss or damage each time a risk event occurs.

From these we obtain the first level of risk aggregation, which consists of defining the compound random variables that model loss. Higher levels of loss aggregation are then obtained by summing the simple compound losses.

As we mentioned in the previous chapter, the first stage of aggregation has been the subject of attention in many areas of activity. In this chapter we shall review some of the techniques that have been used to solve the problem of determining the distribution of losses at the first level of aggregation. This is an essential step prior to the calculation of risk capital, risk premia, or breakdown thresholds of any type.

The techniques used to solve the problem of determining the loss distribution can be grouped into three main categories, which we now briefly summarize and develop further below.

  • Analytical techniques: There are a few instances in which explicit expressions for the frequency of losses and for the probability density of the individual losses are known. When this happens, there are various ways to obtain the distribution of the compound losses: in some cases by direct computation of the probability distribution, in others by judicious use of integral (mostly Laplace) transforms. Even in these cases, in which the Laplace transform of the aggregate loss can be obtained in closed form from that of the frequency of events and that of the individual severities, the final stage (namely the inversion of the transform) may have to be completed numerically.

  • Approximate calculations: Several alternatives have been used. When the models of the building blocks of the compound severities are known, approximate calculations consisting of numerical convolution and series summation can be tried. Sometimes only a few basic statistics, such as the mean, variance, skewness, kurtosis or tail behavior, are known; in this case these parameters are used as starting points for numerical approximations to the aggregate density (see the sketch after this list).

  • Numerical techniques: When we only have numerical data, or perhaps simulated data obtained from a reliable model, we can only resort to numerical procedures.
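
As a minimal sketch of the moment-based route just mentioned, here is a two-moment gamma approximation in R (the book's language). The Poisson frequency, exponential severity and all parameter values are illustrative assumptions of ours, not taken from the text.

```r
# Two-moment gamma approximation to the aggregate density of S.
# Standard identities for compound sums with iid severities independent
# of N: E[S] = E[N] E[X], Var(S) = E[N] Var(X) + Var(N) E[X]^2.
lambda <- 2                               # N ~ Poisson(lambda): E[N] = Var(N) = lambda
muX <- 1; varX <- 1                       # moments of an exponential(1) severity
ES   <- lambda * muX
VarS <- lambda * varX + lambda * muX^2
shape <- ES^2 / VarS                      # gamma parameters matched to
rate  <- ES / VarS                        # the mean and variance of S
curve(dgamma(x, shape = shape, rate = rate), from = 0, to = 10,
      xlab = "x", ylab = "approximate density of S")
```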

5.1.1 General issues

As we proceed, we shall see how the three ways to look at the problem overlap each other. Under the independence assumptions made about the blocks of the model of the total severity given by the compound variable $S=\sum_{n=1}^{N}X_n$, it is clear that

(5.1)
$$P(S\le x)=\sum_{k=0}^{\infty}P(S\le x\mid N=k)\,P(N=k)
=\sum_{k=0}^{\infty}P\Big(\sum_{n=1}^{k}X_n\le x\,\Big|\,N=k\Big)P(N=k)
=\sum_{k=0}^{\infty}P\Big(\sum_{n=1}^{k}X_n\le x\Big)P(N=k)
=P(N=0)+\sum_{k=1}^{\infty}P\Big(\sum_{n=1}^{k}X_n\le x\Big)P(N=k).$$

We pause here to note that if $P(N=0)=p_0>0$, that is, if there is a positive probability that no risk events occur during the year, and hence a positive probability of experiencing no losses, then to determine the density of the total loss we shall have to condition out the no-loss event, and we eventually concentrate on

$$P(S\le x;\,N>0)=\sum_{k=1}^{\infty}P\Big(\sum_{n=1}^{k}X_n\le x\Big)P(N=k).$$

To avoid trivial situations, we shall also suppose that $P(X_n>0)=1$. This detail will be of importance when we set out to compute the Laplace transform of the density of losses.
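
The decomposition (5.1) is easy to check numerically. Here is a minimal R sketch under illustrative assumptions of ours (not the text's): a Poisson(2) frequency and exponential(1) severities, so that the $k$-fold sum of severities has a Gamma$(k,1)$ distribution.

```r
# Monte Carlo check of the decomposition (5.1).
# Assumed model: N ~ Poisson(2), X_n iid exponential(1),
# so that X_1 + ... + X_k ~ Gamma(k, 1).
set.seed(1)
lambda <- 2; rate <- 1; M <- 1e5
N <- rpois(M, lambda)
S <- vapply(N, function(k) sum(rexp(k, rate)), numeric(1))
x <- 3
mean(S <= x)                              # empirical P(S <= x)
kmax <- 50                                # truncation level for the series in (5.1)
dpois(0, lambda) +
  sum(pgamma(x, shape = 1:kmax, rate = rate) * dpois(1:kmax, lambda))
```

The two printed numbers should agree to Monte Carlo accuracy.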

From (5.1) we can see the nature of the challenge: first, we must find the distribution of losses for each $k$, that is, given that $k$ risk events took place, and then we must sum the resulting series. In principle, the first step is a standard procedure under the IID hypothesis placed upon the $X_n$. If we denote by $F_X(x)$ the common distribution function of the $X_n$, then for any $k>1$ and any $x\ge 0$ we have

(5.2)
$$F_X^{*k}(x)=P\Big(\sum_{n=1}^{k}X_n\le x\Big)=\int_0^x F_X^{*(k-1)}(x-y)\,dF_X(y),$$

where $F_X^{*k}(x)$ denotes the $k$-fold convolution of $F_X(x)$ with itself. This generic result will be used only in the discretized versions of the $X_n$, which are necessary for numerical computations. For example, if the individual losses are modeled as integer multiples of a given fixed amount, and for uniformity of notation we put $P(X_1=n)=f_X(n)$, then (5.2) becomes

(5.3)
$$P(S_k=n)=f_X^{*k}(n)=\sum_{m=1}^{n}f_X^{*(k-1)}(n-m)\,f_X(m).$$

Thus, the first step in the computation of the aggregate loss distribution (5.1) consists of computing these sums. If we suppose that $F_X(x)$ has a density $f_X(x)$, then differentiating (5.2) we obtain the following recursive relationship between the densities for different numbers of risk events:

(5.4)
$$f_X^{*k}(x)=\int_0^x f_X^{*(k-1)}(x-y)\,f_X(y)\,dy,$$

where $f_X^{*k}(x)$ is the density of $F_X^{*k}(x)$, that is, the $k$-fold convolution of $f_X(x)$ with itself. This is the case with which we shall be concerned in the chapters that follow.
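
The discrete recursion (5.3) translates directly into R. In the sketch below, the function name convpow and the three-point severity pmf are hypothetical choices of ours, used only for illustration.

```r
# k-fold convolution power f_X^{*k}(n) computed via the recursion (5.3).
# f[n] = P(X_1 = n) for n = 1..length(f); severities live on the positive integers.
convpow <- function(f, k) {
  nmax <- length(f) * k                   # largest attainable value of S_k
  fX <- c(f, numeric(nmax - length(f)))   # zero-padded severity pmf
  fk <- fX                                # the case k = 1
  if (k > 1) for (j in 2:k) {
    fnew <- numeric(nmax)
    for (n in 2:nmax)
      for (m in seq_len(n - 1))           # the sum over m in (5.3)
        fnew[n] <- fnew[n] + fk[n - m] * fX[m]
    fk <- fnew
  }
  fk
}
fX <- c(0.5, 0.3, 0.2)                    # illustrative pmf on {1, 2, 3}
sum(convpow(fX, 3))                       # still a pmf: the masses sum to 1
```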

Notice now that if we set $p_k^0:=p_k/P(N>0)$, then

$$P(S\le x\mid N>0)=\sum_{k=1}^{\infty}P\Big(\sum_{n=1}^{k}X_n\le x\Big)\,p_k^0.$$

We can now gather the previous comments as Lemma 5.1.

Lemma 5.1.

Under the standard assumptions of the model, suppose furthermore that the $X_k$ are continuous with common density $f_X(x)$. Then, given that losses occur, $S$ has a density $f_S(x)$ given by

$$f_S(x)=\frac{d}{dx}P(S\le x\mid N>0)=\sum_{k=1}^{\infty}f_{S_k}(x)\,p_k^0=\sum_{k=1}^{\infty}f_X^{*k}(x)\,p_k^0.$$

This means, for example, that a density obtained from empirical data should be thought of as a density conditioned on the occurrence of losses. The lemma will play a role below when we relate the computation of the Laplace transform of the loss distribution to the empirical losses. It also shows that computing the density $f_S$ from the building blocks involves a considerable amount of work, unless the model is exceptionally simple.
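
In the discretized setting, Lemma 5.1 amounts to a truncated mixture of the convolution powers computed above. The following sketch reuses convpow and fX from the earlier block; the Poisson frequency and the truncation level are again illustrative choices of ours.

```r
# Discretized version of Lemma 5.1: the pmf of S given N > 0 as a
# truncated mixture of convolution powers. Assumes convpow and fX above.
lambda <- 2; kmax <- 30                   # illustrative Poisson(2) frequency
p  <- dpois(0:kmax, lambda)               # p_0, ..., p_kmax
p0 <- p[-1] / (1 - p[1])                  # p_k^0 = p_k / P(N > 0), k = 1..kmax
nmax <- length(fX) * kmax
fS <- numeric(nmax)
for (k in 1:kmax) {
  fk <- convpow(fX, k)
  fS <- fS + p0[k] * c(fk, numeric(nmax - length(fk)))
}
round(fS[1:10], 4)                        # conditional pmf of S, given losses occur
```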

5.2 Analytical techniques: The moment generating function

These techniques have been developed because of their usefulness in many branches of applied mathematics. To begin with, it is usually impossible, except in very simple cases, to use (5.2) to compute explicitly the convolutions needed to obtain the distribution or the density of the total loss $S$.

The simplest example that comes to mind is the case in which the individual losses follow a Bernoulli distribution: $X_n=1$ (or any fixed amount $D$) with probability $P(X_n=1)=q$, and $X_n=0$ with $P(X_n=0)=1-q$. In this case $\sum_{n=1}^{k}X_n$ has a binomial distribution $B(k,q)$, and the series (5.1) can be summed explicitly for many frequency distributions.

A technique that is extensively used for analytic computations is that of the Laplace transform. It exploits the compound nature of the random variables quite nicely. It takes a simple computation to verify that

$$\psi_{S_k}(\alpha)=E\big[e^{-\alpha S_k}\big]=\big(E\big[e^{-\alpha X_1}\big]\big)^{k}.$$

For the simple example just mentioned, $E[e^{-\alpha X_1}]=1-q+qe^{-\alpha}$, and therefore $E[e^{-\alpha S_k}]=(1-q+qe^{-\alpha})^{k}$, from which the distribution of $S_k$ can be readily obtained.
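
This identity is easy to confirm by simulation; the parameter values below are ours, chosen only for illustration.

```r
# Check E[exp(-a * S_k)] = (1 - q + q * exp(-a))^k for Bernoulli severities.
set.seed(2)
q <- 0.3; k <- 5; a <- 0.5; M <- 1e5
Sk <- rbinom(M, size = k, prob = q)       # S_k = X_1 + ... + X_k ~ B(k, q)
mean(exp(-a * Sk))                        # Monte Carlo estimate of the transform
(1 - q + q * exp(-a))^k                   # closed form
```

To continue, the Laplace transform of the total severity is given by: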

Lemma 5.2.

With the notations introduced above (and the convention $S_0=0$, so that $E[e^{-\alpha S_0}]=1$) we have

(5.5)
$$\psi_S(\alpha)=E\big[e^{-\alpha S}\big]=\sum_{k=0}^{\infty}E\big[e^{-\alpha S_k}\big]\,p_k=\sum_{k=0}^{\infty}\big(E\big[e^{-\alpha X_1}\big]\big)^{k}\,p_k=G_N\big(E\big[e^{-\alpha X_1}\big]\big),$$

where

$$G_N(z):=\sum_{n=0}^{\infty}z^{n}\,P(N=n),\quad\text{for }|z|<1.$$

We shall come back to this theme in a later chapter. For the time being, note that for $z=e^{-\alpha}$ with $\alpha>0$, $G_N(e^{-\alpha})=\psi_N(\alpha)$ is the Laplace transform of the distribution of the discrete random variable $N$.
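
Equation (5.5) can also be verified numerically. The sketch below assumes, purely for illustration, a Poisson(2) frequency, whose probability generating function is $G_N(z)=e^{\lambda(z-1)}$, and exponential(1) severities, for which $E[e^{-\alpha X_1}]=1/(1+\alpha)$.

```r
# Numerical check of (5.5): psi_S(a) = G_N(E[exp(-a X_1)]).
# Assumed model: N ~ Poisson(2), X_n iid exponential(1).
set.seed(3)
a <- 0.7; lambda <- 2; rate <- 1; M <- 1e5
N <- rpois(M, lambda)
S <- vapply(N, function(k) sum(rexp(k, rate)), numeric(1))
mean(exp(-a * S))                         # Monte Carlo estimate of psi_S(a)
psiX <- rate / (rate + a)                 # E[exp(-a X_1)] for exponential(rate)
exp(lambda * (psiX - 1))                  # G_N(psiX): the Poisson pgf at psiX
```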

5.2.1 Simple examples

Let us consider some very simple examples in which the Laplace transform of the compound loss can be computed and its inverse identified. These simple examples complement the results of the earlier chapters.

The frequency $N$ has a geometric distribution

In this case $p_k=P(N=k)=p(1-p)^{k}$ for $k\ge 0$. Therefore

$$G_N(z)=\sum_{k=0}^{\infty}p(1-p)^{k}z^{k}=\frac{p}{1-(1-p)z},$$

and hence, by (5.5),

$$\psi_S(\alpha)=\frac{p}{1-(1-p)\,E\big[e^{-\alpha X_1}\big]}.$$
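
This closed form is easy to verify by simulation; the exponential severity and all parameter values below are illustrative choices of ours. Note that R's rgeom uses exactly the convention $P(N=k)=p(1-p)^{k}$, $k\ge 0$.

```r
# Geometric-frequency compound: psi_S(a) = p / (1 - (1 - p) * psiX(a)).
# Illustrative severity: exponential(1).
set.seed(4)
p <- 0.4; rate <- 1; a <- 0.6; M <- 1e5
N <- rgeom(M, p)                          # number of events, P(N = k) = p (1 - p)^k
S <- vapply(N, function(k) sum(rexp(k, rate)), numeric(1))
mean(exp(-a * S))                         # Monte Carlo estimate of psi_S(a)
psiX <- rate / (rate + a)                 # E[exp(-a X_1)]
p / (1 - (1 - p) * psiX)                  # closed form via Lemma 5.2
```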
