
LYON ROOM - Health Insurance & Prevention

This article considers an optimization problem for an insurance buyer in the context of self-insurance and proportional insurance. His/her risk preferences are given by a distortion risk measure; more precisely, he/she evaluates risk probabilities using an inverse S-shaped distortion function. This kind of distortion function represents more accurately how individuals perceive risk probabilities. As the agent wants to enter an insurance contract, he/she selects the optimal coverage and the optimal prevention effort to reduce his/her risk. The loss distribution is given by a family of stochastically ordered probability measures, indexed by the self-insurance effort. When comparing the effect of self-insurance activities on the risk measure and on the expectation, it appears that an inverse S-shaped distortion function leads to an indeterminacy in the relationship between market insurance and self-insurance. Depending on the price elasticity, this specification makes self-insurance and market insurance either substitutes, meaning that an increase in one leads to a decrease in the other, or complements, meaning that an increase (respectively a decrease) in the demand for one can increase (respectively decrease) the demand for the other.
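For readers less familiar with distortion risk measures, a minimal numerical sketch (not taken from the article) is given below: for a non-negative loss $X$ with survival function $S_X$, the measure can be written as $\rho_g(X) = \int_0^\infty g(S_X(x))\,dx$, and an inverse S-shaped $g$ can be modelled, for instance, by Prelec's weighting function $g(p) = \exp(-(-\ln p)^a)$ with $0 < a < 1$. The lognormal loss, the parameter $a = 0.65$ and the function names are illustrative assumptions, not choices made by the author.

```python
import numpy as np

rng = np.random.default_rng(0)

def prelec(p, a=0.65):
    """Prelec's inverse S-shaped probability weighting, with g(0) = 0 and g(1) = 1."""
    p = np.asarray(p, dtype=float)
    out = np.zeros_like(p)
    pos = p > 0
    out[pos] = np.exp(-(-np.log(p[pos])) ** a)
    return out

def empirical_distortion_rm(sample, g):
    """L-statistic estimator of rho_g(X) = int_0^inf g(S_X(x)) dx.

    Each order statistic x_(i) is weighted by g((n-i+1)/n) - g((n-i)/n).
    """
    x = np.sort(sample)
    n = x.size
    tail = np.arange(n, 0, -1) / n          # (n-i+1)/n for i = 1..n
    weights = g(tail) - g(tail - 1.0 / n)   # g((n-i+1)/n) - g((n-i)/n)
    return float(np.sum(weights * x))

# Hypothetical lognormal loss, chosen only for illustration.
losses = rng.lognormal(mean=np.log(10.0), sigma=0.8, size=200_000)

print(f"E[X]     ~ {losses.mean():.2f}")
print(f"rho_g(X) ~ {empirical_distortion_rm(losses, prelec):.2f}  (Prelec, a = 0.65)")
```

With an inverse S-shaped $g$, small tail probabilities are overweighted relative to the expectation, which is the feature driving the indeterminacy discussed above.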

 

Link to the presentation

The French Social Security system has highlighted psychiatric diseases as a major risk. However, French private health insurance companies do not consider psychiatric care an important topic. This talk presents a study, based on four real health insurance databases, of the cost of policyholders benefiting from psychiatric follow-up care, aiming to provide a better understanding of this risk in an insurance context. It will be shown that such a policyholder costs twice as much as an average policyholder, notably due to large hospitalization expenses. Policyholders aged 15 to 30, as well as those over 70, are the most affected by this risk.

 

Link to the paper

Link to the presentation

DOUALA ROOM - Non-life insurance

In this talk, we study the ruin problem when Incurred But Not Reported (IBNR) and Reported But Not Settled (RBNS) claims are taken into account in the insurer's surplus process. Accidents are assumed to occur according to a Poisson point process, and each accident is accompanied by a claim development mark that contains the reporting time, the settlement time, and the size of the (possibly multiple) payments between these two times. Under exponential reporting and settlement delays, we show that our model can be represented as a Markovian risk process with a countably infinite number of states. This can in turn be transformed into an equivalent fluid flow model when the payments are phase-type distributed. As a result, ruin-related quantities such as the ruin probability can be calculated via the “psi” matrix of fluid queues. If time permits, a numerical illustration based on a real insurance dataset will be provided. This is joint work with Soohan Ahn, Andrei Badescu and Jeong-Rae Kim.
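For orientation, the sketch below shows one standard way the first-passage (“psi”) matrix of a fluid queue with unit up/down rates can be computed, namely by fixed-point iteration on its Riccati equation. The four-phase generator, the unit rates and the function names are made up for illustration and are not the model of the talk.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def psi_matrix(Qpp, Qpm, Qmp, Qmm, tol=1e-12, max_iter=500):
    """First-passage ("psi") matrix of a fluid queue with unit rates.

    Psi solves the Riccati equation
        Qpp @ Psi + Psi @ Qmm + Qpm + Psi @ Qmp @ Psi = 0,
    approximated here by a simple fixed-point / Sylvester iteration.
    """
    Psi = np.zeros((Qpp.shape[0], Qmm.shape[0]))
    for _ in range(max_iter):
        rhs = -(Qpm + Psi @ Qmp @ Psi)
        Psi_new = solve_sylvester(Qpp, Qmm, rhs)
        if np.max(np.abs(Psi_new - Psi)) < tol:
            return Psi_new
        Psi = Psi_new
    return Psi

# Made-up generator of a 4-phase process: phases 0, 1 push the fluid up (+1),
# phases 2, 3 push it down (-1).  Rows sum to zero.
Q = np.array([
    [-3.0,  1.0,  1.0,  1.0],
    [ 2.0, -5.0,  2.0,  1.0],
    [ 1.0,  1.0, -4.0,  2.0],
    [ 2.0,  1.0,  1.0, -4.0],
])
up, down = [0, 1], [2, 3]
Psi = psi_matrix(Q[np.ix_(up, up)], Q[np.ix_(up, down)],
                 Q[np.ix_(down, up)], Q[np.ix_(down, down)])
# Entry (i, j): probability that the fluid first returns to its starting level
# in down-phase j, given that it leaves it in up-phase i.  Row sums are <= 1.
print(Psi)
print(Psi.sum(axis=1))
```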

 

Link to the paper

Link to the presentation

Insurers need actuarial risk models in order to cover the expected losses of policies still in force and to determine the price, or tariff, charged to the policyholder. General insurers in Ghana resort to a tariff guide provided by the NIC to determine premiums instead of using actuarial models. This research presents a new method of calculating EBCT credibility factors using standard statistical routines developed for the analysis of variance. The EBCT model (a nonparametric approach) and the ANOVA model were applied to reported motor, fire, marine and accident insurance claims data from SIC Ghana to estimate the pure premium. Excel and SPSS were used to perform the data analysis. A one-sample Kolmogorov-Smirnov test at the 5% level of significance showed that the reported claims were consistent with a normal distribution. The credibility factors and risk premiums estimated with the EBCT and ANOVA approaches coincided. The research revealed that the expected claims in the coming year for motor, fire, marine and accident are 17.847051, 5.625175, 2.633708 and 6.309165 respectively (figures in millions of Ghana cedis). This establishes the ANOVA model as a credible actuarial tool for calculating risk premiums for insurance in Ghana. A chi-square goodness-of-fit test at the 5% level of significance confirmed the predicted claims and the validity of the model.
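As background on the EBCT/ANOVA connection, the sketch below implements a generic Bühlmann (EBCT Model 1) estimator on made-up claims figures, not the SIC Ghana data: the credibility factor is built from the same within-group and between-group sums of squares that appear in a one-way ANOVA table.

```python
import numpy as np

# Made-up claims data: rows = lines of business, columns = years.
claims = np.array([
    [12.1, 14.3, 13.0, 15.2, 14.8],
    [ 4.9,  5.4,  6.1,  5.0,  5.8],
    [ 2.1,  2.8,  2.4,  3.0,  2.6],
    [ 6.3,  5.9,  7.1,  6.8,  6.4],
])
r, n = claims.shape

row_means = claims.mean(axis=1)
grand_mean = claims.mean()

# ANOVA-style sums of squares.
ss_within = ((claims - row_means[:, None]) ** 2).sum()
ss_between = n * ((row_means - grand_mean) ** 2).sum()

# EBCT Model 1 structural parameters.
s2 = ss_within / (r * (n - 1))             # expected process variance
v = ss_between / (n * (r - 1)) - s2 / n    # variance of hypothetical means
v = max(v, 0.0)                            # truncate at zero, as is customary

Z = n / (n + s2 / v) if v > 0 else 0.0     # credibility factor
premiums = Z * row_means + (1 - Z) * grand_mean

print(f"credibility factor Z = {Z:.3f}")
print("credibility premiums :", np.round(premiums, 3))
```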

 

Link to the paper

Link to the presentation

A generalization of the accelerated failure time model, allowing the covariate effect to be any positive function of the covariate, is introduced. The covariate effect and the baseline hazard rate are estimated nonparametrically via an iterative algorithm. In an application to non-life reserving, the survival time models the development delay of a claim and the covariate effect is often called operational time, with the time of underwriting serving as the covariate. The estimated hazard rate is a nonparametric alternative to development factors in reserving and is used to forecast outstanding liabilities. Hence, we provide an extension of the chain-ladder framework without the assumption of independence between delay and underwriting.
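For reference (notation and sign conventions are ours, not necessarily the paper's), the classical accelerated failure time hazard and the generalization sketched above can be contrasted as
\[
\lambda(t \mid x) \;=\; e^{\beta x}\,\alpha\!\left(t\,e^{\beta x}\right)
\qquad\text{versus}\qquad
\lambda(t \mid x) \;=\; \theta(x)\,\alpha\!\left(t\,\theta(x)\right),
\quad \theta(x) > 0,
\]
where $t$ is the development delay, $x$ the underwriting time, $\alpha$ the baseline hazard, and $\theta$ the positive covariate effect (the operational time), no longer restricted to a log-linear form.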

 

Link to the paper

Link to the presentation

BOGOTA ROOM - Data science

Insurance organisations store voluminous text data on a daily basis: free-text fields used by call-center agents, e-mails, customer reviews, etc. These textual data $\mathcal{U} = \{X_i\}_{i=1,\cdots,U}$, sampled from the underlying distribution $P_X$, are valuable and can feed many use cases. However, it is impossible for human experts to analyse such quantities, and $\mathcal{U}$ usually comes unlabelled. One possible solution is to construct a statistical model $\hat h$ trained on a training set $\mathcal{L} = \{(X^*_i, y_i)\}_{i=1,\cdots,L}$ which is sampled from $\mathcal{U}$ and labelled manually. Usually $X^*_i$ is sampled at random, so $X^*_i \sim P_X$. This strategy is not optimal, and a better training set can be built by sampling data according to some criterion. Since labelled instances are difficult, time-consuming, or expensive to obtain, the goal is to propose a new sampling strategy $P^*_X \neq P_X$ that achieves higher accuracy with as few labelled instances as possible. This framework, known as active learning (or “optimal experimental design”), is applied here in the context of insurance text data.
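As one concrete (and deliberately simple) instance of such a sampling criterion, not necessarily the one used in the talk, uncertainty sampling queries the pool documents on which the current model $\hat h$ is least confident. The toy texts, labels and parameter choices below are placeholders.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny made-up pool of call-center style texts (placeholder for real data).
pool = [
    "customer asks about water damage claim",
    "request for a duplicate invoice",
    "complaint about delayed reimbursement",
    "question on contract termination",
    "report of a car accident last week",
    "address change after moving house",
]
seed_texts = ["water damage in the kitchen", "please resend my invoice"]
seed_labels = ["claim", "admin"]          # manually labelled seed set L

vec = TfidfVectorizer().fit(seed_texts + pool)
model = LogisticRegression(max_iter=1000).fit(vec.transform(seed_texts), seed_labels)

# Uncertainty sampling: rank the pool by predictive entropy and label the top k.
proba = model.predict_proba(vec.transform(pool))
entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)
to_label = np.argsort(entropy)[::-1][:2]  # the k = 2 most ambiguous documents

for i in to_label:
    print(f"ask an expert to label: {pool[i]!r}  (entropy = {entropy[i]:.3f})")
```

The newly labelled documents are then added to $\mathcal{L}$, the model is refit, and the loop repeats.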

 

Link to the presentation

Quantifying variable importance in linear regression models is a fundamental statistical issue. This difficult task becomes even more complicated when the explanatory variables are correlated.
A popular approach to this problem is to decompose the regression R2 according to the contribution of individual variables. This attribution method is typically based on Shapley values from game theory: every variable receives the part of R2 that it contributes to generating.
In this work we extend previous results by considering linear models with interaction effects. We then propose an alternative estimation algorithm for computing Shapley values. This algorithm also makes it possible to estimate generalized Shapley values. The latter indices quantify the interaction effects among correlated variables and give insight into the synergistic or antagonistic nature of these interactions.
We illustrate our findings using the NAIC Insurance Company Expenses dataset.
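A minimal sketch of the Shapley-value decomposition of R2 for a linear model is given below, run on simulated correlated predictors rather than the NAIC data; the helper names and all numerical choices are ours.

```python
import numpy as np
from itertools import combinations
from math import factorial

rng = np.random.default_rng(1)

# Simulated correlated predictors and a linear response (illustration only).
n, p = 2000, 3
cov = np.array([[1.0, 0.6, 0.2], [0.6, 1.0, 0.3], [0.2, 0.3, 1.0]])
X = rng.multivariate_normal(np.zeros(p), cov, size=n)
y = 1.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n)

def r2(subset):
    """R2 of the OLS regression of y on the chosen columns of X."""
    if not subset:
        return 0.0
    Z = np.column_stack([np.ones(n), X[:, list(subset)]])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return 1.0 - resid.var() / y.var()

# Shapley value of each predictor: average marginal contribution to R2
# over all coalitions of the remaining predictors.
shapley = np.zeros(p)
for j in range(p):
    others = [k for k in range(p) if k != j]
    for size in range(p):
        w = factorial(size) * factorial(p - size - 1) / factorial(p)
        for S in combinations(others, size):
            shapley[j] += w * (r2(S + (j,)) - r2(S))

print("R2 shares:", np.round(shapley, 4), "| total R2:", round(r2(tuple(range(p))), 4))
```

By construction the individual shares sum to the full-model R2, which is what makes the attribution well defined even under correlation.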

 

Link to the presentation

This paper aims at assessing the disruptive potential of big data technologies for insurance, with a special focus on motor products and a review of existing actuarial papers on the subject. The first part shows how statistics imposed a vertical viewpoint that enabled insurance mechanisms by making visible regularities that remained invisible at the individual level. Despite a very granular segmentation in motor insurance, the approach has remained classificatory, with the assumption that all members of a class are identical risks. The second part focuses on the reversal of perspective implied by big data. This tremendous volume of data, served by new algorithms such as deep learning, indeed shakes the vertical approach in a wide range of domains: the homogeneity hypothesis becomes difficult to maintain, all the more so as predictive analytics claims to accurately predict individual results. On-board devices that collect continuous driving behavioural data could import this new paradigm into automobile insurance. An examination of the current state of research in telematics shows, however, that this epistemological leap has not yet happened. This comes as a surprise: insurance rates seem to resist the widespread trend towards personalization. This might be explained by the very specific and collective approach to risk at the heart of actuarial science.

 

Link to the presentation

SYDNEY ROOM - Life insurance

Consumption smoothing happens at various levels. Saving for a pension is essentially consumption smoothing, since it replaces consuming all labor income during working years and nothing during retirement with a more stable consumption pattern over the full life cycle.

Our concern here is merely the time-local behavior of consumption rates. Working from the Merton problem in diffusive markets, across standard generalizations of markets and preferences, a fundamental structure of the consumption rate remains the same: it is diffusive. Examples are the classical Merton problem, recursive utility, and habit formation. In all cases the consumption rate (in the case of habit formation, the consumption rate in excess of the habit level) actually follows a (time-inhomogeneous) geometric Brownian motion, and the diffusive nature of investment returns is immediately absorbed in the level of consumption.
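To make this concrete in the simplest case (CRRA utility with risk aversion $\gamma$, constant interest rate $r$, drift $\mu$ and volatility $\sigma$; notation ours), Merton's optimal consumption is proportional to wealth $X_t$ and therefore inherits its diffusive dynamics:
\[
c^{*}_{t} = \frac{X_{t}}{f(t)}, \qquad
\pi^{*} = \frac{\mu - r}{\gamma\sigma^{2}}
\quad\Longrightarrow\quad
\frac{dc^{*}_{t}}{c^{*}_{t}}
  = \Bigl(r + \pi^{*}(\mu - r) - \frac{1}{f(t)} - \frac{f'(t)}{f(t)}\Bigr)\,dt
  + \pi^{*}\sigma\, dW_{t},
\]
with $f$ a deterministic annuity-type function of time, so the consumption rate is indeed a time-inhomogeneous geometric Brownian motion.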

This local behavior is sharply distinct from the benefits in classical with-profit insurance. Those benefits are paid from a technical account that earns investment returns that are smoothed versions of the realized investment returns. This leads to differentiable benefit rates. The smoothing is obtained by a buffer account absorbing realized diffusive returns.

In the global transition away from classical with-profit insurance towards unit-linked product design, the smoothing feature is often lost, sometimes by reference to the structure of the optimal solution in a Merton market. Different ideas about how to offer some degree of consumption smoothing have been proposed, both in practice and in theory.

In this presentation we browse through some historical highlights of the subject, aiming ultimately to formulate a puzzle. One thing remains to be understood: within axiom-based expected utility theory, which aspects of preferences create a demand for full consumption smoothing (differentiable consumption rates) in combination with active stock market participation? The question is topical nowadays, when retirees face critical financial losses in their savings. Should consumption be smoothed at all? Why or why not? And if yes, then how?

 

Link to the presentation

We consider the problem of computing the market value of future bonus payments in with-profit life insurance in the case where dividends are used to buy additional benefits. This is studied in a classic multi-state Markov setup with policyholder behavior, suitably integrated with a financial market consisting of a bank account and a risky asset. Since explicit methods do not exist in general, we consider an approach that combines simulations of the financial market with more analytical methods for calculations involving the state of the insured. This naturally leads to the computation of so-called expected accumulated bonus cash flows in each financial scenario, which in general requires the computation of quantities we term Q-modified transition probabilities.

We show how to calculate these within a particular setup regarding the dividend yield. We introduce the shape of the insurance business, consisting of key quantities on portfolio level that the insurer needs at future time points for deciding on its dividend yield and investment strategy, the so-called controls. Within this setup, our main result is a system of forward differential equations satisfied by the Q-modified transition probabilities. The result allows for explicit computation of the expected accumulated bonus cash flows in each financial scenario, and a procedure is then presented to demonstrate how these can be used to compute the market value of future bonus payments in practice.
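For orientation only, the ordinary transition probabilities of a multi-state Markov model satisfy Kolmogorov's forward equations,
\[
\frac{\partial}{\partial s}\, p_{ij}(t,s)
  = \sum_{k \neq j} \bigl( p_{ik}(t,s)\,\mu_{kj}(s) - p_{ij}(t,s)\,\mu_{jk}(s) \bigr),
\qquad p_{ij}(t,t) = \mathbf{1}_{\{i=j\}},
\]
with $\mu_{jk}$ the transition intensities; the Q-modified transition probabilities of the talk satisfy a forward system of a similar type, which is the main result referred to above.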

In the last part of the work we discuss how one may obtain significant numerical advantages in the implementation if the number of bonus benefits can be assumed to be adapted to financial information only. We find sufficient conditions on the dividend yield enabling this, and we show how the computation of shapes, controls and the expected accumulated bonus cash flows in each financial scenario numerically simplifies in this case.

Keywords: With-profit life insurance; Dividends and bonus; Market valuation in life insurance; Future discretionary benefits; Shapes and controls; Scenario-based projection.

The financial insolvency of many private pension funds has put Defined Benefit (DB) pension plans under a microscope over the last few decades. Although the government imposes rules to ensure minimum required funding, some sponsors might choose to underfund their plans for short-term benefits. This paper investigates how plan- and firm-specific characteristics and the enforcement of full funding limits influence sponsor contributions to single-employer defined benefit pension plans in the US private sector for the years 1991-2017. We apply a Heckman model to the voluntary contributions of the firms, which eliminates the sample selection bias arising from firms' decisions to contribute only the legally required minimum. Sponsors are less likely to contribute during economic booms, but contribute more once they have decided to contribute. We find that a pension plan funding ratio below the level required for a fully funded position increases the likelihood of contribution. The imposition of the full funding limitation has a positive marginal effect on voluntary contributions compared with the absence of the tax advantage following revocation of the limitation.
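A compact illustration of the Heckman two-step correction on simulated data (not the authors' pension data; all variable names and coefficients are placeholders): a probit selection equation provides the inverse Mills ratio, which is then included as an extra regressor in the outcome equation.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 5000

# Simulated firm data: z drives the decision to contribute, x the amount.
z = rng.normal(size=n)
x = rng.normal(size=n)
u = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n)  # correlated errors

select = (0.5 + 1.0 * z + u[:, 0] > 0)     # contributes voluntarily?
amount = 2.0 + 1.5 * x + u[:, 1]           # observed only when select is True

# Step 1: probit selection equation, then the inverse Mills ratio.
Zmat = sm.add_constant(z)
probit = sm.Probit(select.astype(float), Zmat).fit(disp=0)
xb = Zmat @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)

# Step 2: outcome regression on the selected sample, IMR as extra regressor.
Xmat = sm.add_constant(np.column_stack([x[select], imr[select]]))
ols = sm.OLS(amount[select], Xmat).fit()
print(ols.params)   # constant, effect of x, selection-correction coefficient
```

A nonzero coefficient on the inverse Mills ratio signals the selection effect that a naive OLS on contributors alone would ignore.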

 

Link to the presentation