
Lyon room - Dialog Research Chair COVID-19 session

16h30       Rama Cont (Oxford) ‘Mathematical modeling of epidemic risks: methodology and challenges’

17h00       Discussion by Alexandre Boumezoued (Milliman Paris)

17h10       Discussion by Thomas Béhar (CNP Assurances)

17h20       Q&A with the audience

Douala room - Non-Life Insurance

Generalized linear models (GLMs) are common instruments for the pricing of non-life insurance contracts. Among other things, they are used to estimate the expected severity of insurance claims. However, these models do not work adequately for extreme claim sizes. To accommodate these, we develop the threshold severity model in [1], which splits the claim size distribution into the regions below and above a given threshold. More specifically, the extreme insurance claims above the threshold are modeled via the peaks-over-threshold (POT) methodology from extreme value theory, using the generalized Pareto distribution for the excess distribution, while the claims below the threshold are captured by a generalized linear model based on the truncated gamma distribution. The threshold severity model combines, for the first time, POT modeling of extreme claim sizes with GLMs based on the truncated gamma distribution. Based on this framework, we derive the corresponding log-likelihood function for right-censored claim sizes, an issue that typically arises, for instance, in private liability or motor liability insurance contracts. Finally, we demonstrate the behavior of the threshold severity model, compared to the commonly used generalized linear model based on the gamma distribution, in the presence of simulated extreme claim sizes following a log-normal as well as a Burr Type XII distribution.
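
To make the composite structure concrete, the following minimal sketch evaluates the resulting log-density; it is not the authors' code, and the threshold, tail weight and all distribution parameters are illustrative assumptions.

```python
# Minimal sketch of the threshold severity density described above (not the
# authors' code): below a threshold u, claims follow a gamma distribution
# truncated to (0, u]; above u, excesses follow a generalized Pareto
# distribution (GPD). All parameter values here are illustrative assumptions.
import numpy as np
from scipy.stats import gamma, genpareto

u = 10_000.0          # threshold (assumption)
p_tail = 0.05         # P(claim > u) (assumption)
shape_g, scale_g = 2.0, 1_500.0   # gamma parameters for the bulk
xi, beta = 0.4, 5_000.0           # GPD shape and scale for the excesses

def log_density(y):
    """Log-density of the composite (threshold severity) model."""
    y = np.asarray(y, dtype=float)
    bulk = gamma(shape_g, scale=scale_g)
    out = np.empty_like(y)
    below = y <= u
    # bulk: gamma truncated to (0, u], weighted by P(Y <= u) = 1 - p_tail
    out[below] = (np.log(1 - p_tail) + bulk.logpdf(y[below])
                  - np.log(bulk.cdf(u)))
    # tail: GPD for the excess y - u, weighted by P(Y > u) = p_tail
    out[~below] = np.log(p_tail) + genpareto.logpdf(y[~below] - u, xi, scale=beta)
    return out

claims = np.array([800.0, 3_200.0, 9_500.0, 25_000.0])
print("log-likelihood:", log_density(claims).sum())
```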

References:
[1] C. Laudagé, S. Desmettre & J. Wenzel (2019), Severity modeling of extreme insurance claims for tariffication, Insurance: Mathematics and Economics, Volume 88, Pages 77-92, https://doi.org/10.1016/j.insmatheco.2019.06.002


Link to the paper

Link to the presentation

We propose a stochastic model for claims reserving that captures dependence along development years within a single triangle. This dependence has a moving-average form of order p > 0 and is achieved through the use of latent variables. We carry out Bayesian inference on the model parameters and borrow strength across several triangles, coming from different lines of business or companies, through the use of hierarchical priors. We carry out a simulation study as well as a real-data analysis. Results show that, for the real dataset studied, reserve estimates are more accurate with our dependence model than with the benchmark over-dispersed Poisson model that assumes independence.
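
As a rough illustration of the dependence structure (an assumption-based sketch, not the paper's model: the hierarchical priors and the Bayesian inference are omitted), the following simulates a run-off triangle whose development-year means share latent variables in a moving-average fashion.

```python
# Illustrative simulation of moving-average dependence of order p across
# development years: the log-mean of each development-year column mixes
# p+1 latent iid variables. All numbers are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_dev, p = 10, 2                  # development years, MA order (assumptions)
theta = np.array([1.0, 0.6, 0.3]) # MA weights theta_0..theta_p (assumptions)
eps = rng.normal(size=n_dev + p)  # latent variables

# MA(p) effect for development year j: sum_k theta_k * eps_{j+p-k}
ma_effect = np.array([theta @ eps[j:j + p + 1][::-1] for j in range(n_dev)])

base_mean = np.linspace(9.0, 6.0, n_dev)   # decreasing run-off pattern
col_mean = np.exp(base_mean + 0.2 * ma_effect)

# upper run-off triangle of incremental claims (Poisson given the latents,
# hence over-dispersed marginally)
triangle = np.full((n_dev, n_dev), np.nan)
for i in range(n_dev):
    for j in range(n_dev - i):
        triangle[i, j] = rng.poisson(col_mean[j])
print(triangle)
```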


Link to the paper

Link to the presentation

Peer-to-peer (P2P) insurance is a decentralized network in which participants pool their resources to compensate those who suffer losses. It is a revival of a centuries-old practice in many ancient societies. With the aid of internet technology, P2P insurance is a transparent, high-tech and low-cost alternative to traditional insurance and is viewed by many as a disruptor to the traditional insurance industry in the same way Uber is to the taxi industry.

Despite the fast-changing landscape in this field, there has been no previous academic literature on the theoretical underpinning of P2P insurance. This paper presents the first effort to build a framework for the design and engineering of mutual aid and P2P insurance. Most existing business models are developed to insure against a particular risk. However, even with the same type of risk, not all peers have the same loss. While differential pricing is well developed for traditional insurance, the fair allocation of cost for P2P insurance is not yet well understood. This paper presents a variety of P2P insurance/mutual aid models that facilitate the exchange of multiple risks and enable peers with different needs to financially support each other in a transparent and fair way.
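
As one concrete (and deliberately simple) instance of a fair allocation rule, the sketch below shares the realized total loss of a pool in proportion to each peer's expected loss; this particular rule is an illustrative assumption, not necessarily one of the paper's models.

```python
# A minimal sketch of one possible fair cost-allocation rule for a P2P pool
# (an illustrative assumption, not taken from the paper): the realized total
# loss is shared in proportion to each peer's expected loss.
import numpy as np

rng = np.random.default_rng(1)
expected_loss = np.array([100., 100., 200., 400., 800.])  # heterogeneous risks

# simulate one period: each peer suffers a gamma loss with its own mean
losses = rng.gamma(shape=2.0, scale=expected_loss / 2.0)  # mean = expected_loss
total = losses.sum()

# ex-ante proportional allocation of the realized total
contribution = total * expected_loss / expected_loss.sum()
print("losses       :", np.round(losses, 1))
print("contributions:", np.round(contribution, 1))
print("pool balance :", np.round(contribution.sum() - total, 10))  # ~0
```

The rule is self-financing by construction: contributions always sum to the realized total loss, so the pool needs no external capital.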


Link to the presentation

Bogota room - Pricing

This research takes place in a context where investor behaviour differs in time and space. The purpose of this paper is to analyse BRVM shares using the arbitrage pricing theory (APT) model of Ross (1976). The originality of this study lies in the methodology and technique used: it consists of reducing the idiosyncratic risk in the determination of the common factors (risk and profitability) in the model through independent component analysis. The results obtained with principal component analysis (PCA) extraction, as opposed to those obtained with the independent component analysis (ICA) method, indicate that, whatever the time horizon chosen, the optimal number of factors remains four. The results also show that the finance (BOAB, BOAN), utilities (SNTS) and agriculture (SOGC, SPHC) sectors are explanatory factors for the profitability of daily returns. For weekly returns, the explanatory factors for profitability are finance (BOAB, BOAN), utilities (SNTS) and industry (NTLC, FTSC). On the other hand, the risk factors are industry (NTLC) for daily returns and agriculture (SOGC, PALC, SPH) for weekly returns. Finally, only the finance sector is profitable on a monthly basis, through assets such as BOAN, BOAB and ABJC.
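
The factor-extraction step can be sketched as follows with synthetic data (an assumption: BRVM data and the paper's exact pre-processing are not reproduced here), using scikit-learn's PCA and FastICA.

```python
# Sketch of extracting four common factors from a panel of returns by PCA
# and by ICA, as in the methodology described above. Data are synthetic.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(2)
n_days, n_stocks, n_factors = 500, 12, 4

# synthetic returns: 4 heavy-tailed latent factors + idiosyncratic noise
latent = rng.standard_t(df=5, size=(n_days, n_factors))
loadings = rng.normal(size=(n_factors, n_stocks))
returns = latent @ loadings + 0.5 * rng.normal(size=(n_days, n_stocks))

pca_factors = PCA(n_components=n_factors).fit_transform(returns)
ica_factors = FastICA(n_components=n_factors, random_state=0,
                      max_iter=1000).fit_transform(returns)

print("PCA factor shape:", pca_factors.shape)   # (500, 4)
print("ICA factor shape:", ica_factors.shape)   # (500, 4)
```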


Link to the paper

Link to the presentation

In this work we present and define superposed Ornstein-Uhlenbeck processes following the Barndorff-Nielsen approach. The Black-Scholes model is the most referenced model in financial mathematics, but due to its limitations, in particular its constant volatility, it fails to capture the nature of financial data. To overcome this limitation, we present the Barndorff-Nielsen and Shephard model, whose volatility is modeled by superposed Ornstein-Uhlenbeck processes, i.e. a sum of independent Ornstein-Uhlenbeck processes driven by Lévy processes. To show that a financial model with constant volatility is less reliable than one with a stochastic volatility process, we simulate trajectories of the Black-Scholes model and of the Barndorff-Nielsen and Shephard model, and show that the latter provides a better fit.
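
A minimal simulation sketch of the two models is given below; the mean-reversion rates, subordinator parameters and discretization are illustrative assumptions, not calibrated values.

```python
# Illustrative simulation of a superposition of Ornstein-Uhlenbeck processes
# driven by compound-Poisson subordinators, used as the variance process in
# a Barndorff-Nielsen & Shephard (BNS) model. All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(3)
T, n_steps = 1.0, 1_000
dt = T / n_steps
lambdas = np.array([0.5, 2.0, 10.0])   # mean-reversion rates (assumptions)
jump_rate, jump_mean = 5.0, 0.02       # subordinator: Poisson rate, exp jumps

def ou_path(lam):
    """One OU process dV = -lam*V dt + dZ(lam*t), Z compound Poisson."""
    v, path = 0.05, np.empty(n_steps)
    for k in range(n_steps):
        n_jumps = rng.poisson(jump_rate * lam * dt)
        jumps = rng.exponential(jump_mean, n_jumps).sum()
        v = v * np.exp(-lam * dt) + jumps
        path[k] = v
    return path

variance = sum(ou_path(lam) for lam in lambdas) / len(lambdas)  # superposition

# BNS log-price vs constant-volatility Black-Scholes, same Brownian draws
dW = rng.normal(0.0, np.sqrt(dt), n_steps)
log_s_bns = np.cumsum(np.sqrt(variance) * dW - 0.5 * variance * dt)
log_s_bs = np.cumsum(np.sqrt(variance.mean()) * dW - 0.5 * variance.mean() * dt)
print("terminal log-prices:", log_s_bns[-1], log_s_bs[-1])
```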

Renewal pricing is a fundamental problem in non-life insurance pricing. Indeed, an important question arises, namely: how can insurance renewal prices be adjusted? Such an adjustment has two conflicting objectives. On the one hand, insurers need to retain existing customers, while on the other hand they also need to increase revenue. Intuitively, one might assume that revenue increases by offering high renewal prices; however, this might also cause many customers to terminate their contracts. Conversely, low renewal prices help retain most existing customers, but could negatively affect revenue. Therefore, adjusting renewal prices is a non-trivial problem for the insurance industry. In the line of Elena's analysis, we model the renewal price adjustment problem as a sequential decision problem and, consequently, as a Markov decision process (MDP), leading to a reinforcement learning approach.
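
The MDP formulation can be sketched as follows with tabular Q-learning; the retention dynamics, state space and rewards below are toy assumptions, not the calibration used in the work.

```python
# Toy sketch of renewal pricing as an MDP solved by tabular Q-learning:
# the action is a relative price change, the reward is the premium
# collected if the customer renews. All dynamics are assumptions.
import numpy as np

rng = np.random.default_rng(4)
actions = np.array([-0.05, 0.0, 0.05, 0.10])   # relative price adjustments
n_states = 3                                    # e.g. price-sensitivity segments
Q = np.zeros((n_states, len(actions)))
alpha, gamma_, eps = 0.1, 0.95, 0.1
base_premium = 500.0

def step(state, a_idx):
    """Renewal probability falls with the price increase; reward = premium kept."""
    adj = actions[a_idx]
    p_renew = 0.9 - (0.5 + 0.2 * state) * max(adj, 0.0) * 10
    if rng.random() < max(p_renew, 0.05):
        return state, base_premium * (1 + adj)      # renewed
    return rng.integers(n_states), 0.0              # lapsed; new customer arrives

state = 0
for t in range(50_000):
    a = rng.integers(len(actions)) if rng.random() < eps else int(Q[state].argmax())
    next_state, reward = step(state, a)
    Q[state, a] += alpha * (reward + gamma_ * Q[next_state].max() - Q[state, a])
    state = next_state

print("greedy price adjustment per segment:", actions[Q.argmax(axis=1)])
```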


Link to the presentation

Montreal room - Genetics and Cyber Risk

Pleiotropy refers to the phenomenon of a single gene affecting multiple traits. The identification and characterization of pleiotropy are crucial for a comprehensive biological understanding of complex traits and disease states. In recent years, genomic techniques have brought data to bear on fundamental questions about the nature and extent of pleiotropy. There is a critical need to develop statistical methods for detecting pleiotropic variants by analyzing high-throughput genotype data. We develop new statistical methods to detect pleiotropic variants using publicly available summary-level data from genome-wide association studies (GWASs). The new methods will help researchers both in estimating causal effects and in searching for pleiotropic loci in publicly available datasets. Ultimately, we aim to identify variants associated with psychiatric disorders such as major depressive disorder (MDD) and to understand the complex causal relationship among a host of factors which may play a role in the development of neuropsychiatric disorders. Our analysis detected a significant causal effect of BMI and educational attainment (EA) on the risk of MDD.
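
The abstract does not spell out the new estimators, so as illustrative context only, here is the standard inverse-variance-weighted (IVW) Mendelian randomization estimate from GWAS summary statistics, the kind of summary-level causal analysis referred to above (all numbers are made up).

```python
# Illustrative baseline (an assumption; not the new methods of the talk):
# the inverse-variance-weighted (IVW) Mendelian randomization estimate of a
# causal effect from GWAS summary statistics.
import numpy as np

# per-variant summary stats: effect on exposure (e.g. BMI) and on outcome (MDD)
beta_x = np.array([0.08, 0.05, 0.11, 0.07])      # SNP-exposure effects
beta_y = np.array([0.020, 0.015, 0.030, 0.012])  # SNP-outcome effects
se_y = np.array([0.008, 0.007, 0.010, 0.006])    # s.e. of SNP-outcome effects

w = beta_x**2 / se_y**2                          # inverse-variance weights
beta_ivw = np.sum(w * beta_y / beta_x) / np.sum(w)
se_ivw = np.sqrt(1.0 / np.sum(w))
print(f"IVW causal effect: {beta_ivw:.3f} (s.e. {se_ivw:.3f})")
```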


Link to the presentation

The continuous advancements in biomedical research and biotechnology are generating new knowledge and data sources that might be of interest to the insurance industry. A paradigmatic example of these advancements is genetic information, which can reliably inform about the future appearance of certain diseases or conditions, making it an element of great interest for insurers. However, this information is treated with the highest level of confidentiality and is protected from disclosure, which hinders its application for insurance purposes in many countries. Simultaneously, international regulators encourage insurers to update their actuarial bases according to new relevant scientific knowledge. Another increasingly relevant element in biomedical research in recent years is the microbiome. Recent investigations have shown that the microbiome can be correlated with several conditions, or even with the risk of dying in the next fifteen years, and could predict our future health. In this paper we examine the potential use of microbiome information in insurance underwriting. Using a recent dataset on the Indian gut microbiome, we analysed the relation between some variables used in the underwriting process and several components of the microbiome (sets of bacterial co-abundance groups) in the organism, via several Dirichlet regression models for compositional data.
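
A minimal sketch of a Dirichlet regression of this kind is given below, on synthetic data (an assumption; the Indian gut microbiome dataset is not reproduced here): concentration parameters of the co-abundance groups depend on an underwriting covariate through a log link and are fitted by maximum likelihood.

```python
# Minimal Dirichlet regression sketch for compositional data: concentration
# parameters of k co-abundance groups depend on a covariate via a log link.
# Synthetic data and coefficients are assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(5)
n, k = 200, 3                       # subjects, co-abundance groups
age = rng.uniform(20, 70, n)        # underwriting covariate (illustrative)
X = np.column_stack([np.ones(n), (age - 45) / 25])

true_B = np.array([[1.0, 0.5], [1.5, -0.8], [0.8, 0.2]])   # k x 2 coefficients
alpha_true = np.exp(X @ true_B.T)
Y = np.array([rng.dirichlet(a) for a in alpha_true])        # compositions

def neg_loglik(b_flat):
    """Negative Dirichlet log-likelihood with log-linked concentrations."""
    alpha = np.exp(X @ b_flat.reshape(k, 2).T)
    ll = (gammaln(alpha.sum(axis=1)) - gammaln(alpha).sum(axis=1)
          + ((alpha - 1) * np.log(Y)).sum(axis=1))
    return -ll.sum()

fit = minimize(neg_loglik, np.zeros(2 * k), method="BFGS")
print("estimated coefficients:\n", fit.x.reshape(k, 2).round(2))
```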


Link to the presentation

In this paper we propose an actuarial framework and a statistical methodology allowing the quantification of cyber claims resulting from data breach events, even when applied to few and heterogeneous data. Indeed, only a few cyber insurance claims have occurred so far, while at the same time some public databases have gathered cyber events.

We propose to take advantage of the Privacy Rights Clearinghouse database, paying attention firstly to the heterogeneity caused by the evolution of both the underlying cyber risk and the data collection process through time, secondly to the extreme events, and thirdly to the uncertainty on the exposure. We investigate the heterogeneity of the reported data breaches using regression trees customized with a splitting criterion based on the generalized Pareto likelihood, in order to track different behaviors of the tail of the distribution. Combining this analysis with an assessment of the frequency of the claims and a cost formula for data breaches, we compute median and extreme quantile loss estimates for a virtual cyber insurance portfolio.
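
The splitting idea can be sketched as follows (a simplified assumption, not the authors' implementation): candidate splits on a covariate are scored by the gain in generalized Pareto log-likelihood over the two children.

```python
# Sketch of a GPD-likelihood splitting criterion for a regression tree:
# a candidate split is scored by the gain in generalized Pareto
# log-likelihood of the exceedances in the two children. Data are synthetic.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(6)
year = rng.integers(2005, 2020, 400)                 # covariate: breach year
excess = rng.pareto(a=np.where(year < 2012, 2.5, 1.5), size=400) * 1e4

def gpd_loglik(x):
    if len(x) < 30:                                  # guard small leaves
        return -np.inf
    c, loc, scale = genpareto.fit(x, floc=0.0)       # excesses start at 0
    return genpareto.logpdf(x, c, loc, scale).sum()

parent = gpd_loglik(excess)
for split in range(2008, 2017):
    left, right = excess[year < split], excess[year >= split]
    gain = gpd_loglik(left) + gpd_loglik(right) - parent
    print(f"split at {split}: likelihood gain {gain:,.1f}")
```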


Link to the presentation

Sydney room - Actuarial Finance

The emergence of COVID-19 has led to a revived interest in the study of infectious diseases. Mathematical models have become important tools in analyzing transmission dynamics and in measuring the effectiveness of controlling strategies. Research on infectious diseases in the actuarial literature only goes so far as to set up epidemiological models which better reflect the transmission dynamics. This work aims to build a bridge between epidemiological and actuarial modeling, and to develop an actuarial model which provides financial arrangements to cover the expenses resulting from the medical treatment of infectious diseases.

The repeated history of pandemics such as SARS, H1N1, Ebola, Zika and COVID-19 has shown that contingency planning for pandemics is a necessary component of risk management for all organizations in modern society. Today's technology allows us to use epidemiological models to predict the spread of infectious diseases in a similar way to how meteorological models are used to forecast weather. Taking advantage of epidemic models, we can project what resources should be deployed at different stages of a pandemic. These models provide a quantitative basis for organizations to develop contingency plans and to devise swift response strategies which can minimize the consequences of economic disruptions, such as employee absenteeism and travel restrictions.
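
As a minimal sketch of such an epidemiological/actuarial bridge (all rates, costs and the discounting convention are illustrative assumptions), an SIR model can drive the discounted expected cost of medical treatment:

```python
# Minimal sketch: an SIR model whose infected compartment drives the
# actuarial present value (APV) of treatment costs. All parameters are
# illustrative assumptions; population is normalized to 1.
import numpy as np

beta_c, gamma_r = 0.30, 0.10       # daily contact and recovery rates
delta = 0.03 / 365                 # daily force of interest
cost_per_case_day = 150.0          # daily treatment cost per infected
S, I, R = 0.99, 0.01, 0.0
dt, horizon = 1.0, 365

apv = 0.0                          # actuarial present value of treatment costs
for day in range(horizon):
    new_inf = beta_c * S * I * dt
    new_rec = gamma_r * I * dt
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    apv += np.exp(-delta * day) * cost_per_case_day * I * dt

print(f"APV of treatment cost per member: {apv:,.2f}")
```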

Link to the presentation

Insurance contracts are sold by companies to customers once and for all (up to surrender): there is no trade and no secondary market, hence no concept of arbitrage à la "buy low, sell high" and no probability similar to a risk-neutral Q defined on the sigma-field F of financial events. Dybvig (1992) is an early reference on "non-traded wealth".

The historical probability P, defined on the field G of all events, describes the possibilities of diversification of contracts. Given Q, we settle for the probability QP on G with the two characterizing conditions

QP equals Q on F

and

QP( . | F) equals P( . | F),

which put QP in-between Q and P.
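
Read together, the two conditions determine expectations under QP by iterated expectation; as a restatement in standard notation (supplied here for clarity, not additional material from the talk):

```latex
% The two conditions above pin down QP on G: condition on F under P first,
% then average under Q.
\[
\mathbb{E}_{QP}[X] \;=\; \mathbb{E}_{Q}\bigl[\, \mathbb{E}_{P}[\, X \mid F \,] \,\bigr]
\qquad \text{for every bounded $G$-measurable } X .
\]
```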

We meet several properties defined in P terms which translate into QP terms, and hence are amenable to finance topics. For example, conditional independence, a time-honored topic in insurance, can be statistically examined in P terms and exploited in QP terms.

Expectations under QP of discounted insurance benefits provide guides to pricing, and the set of risk-neutral measures Q will guide us to super-hedging prices to be compared to proposed prices, in particular for hybrid products.

The QP rule has been presented at the Freiburg/Strasbourg FRIAS meetings and at the Métabief and IHP Bachelier Seminars.


Link to the presentation 

This talk spans finance, data science, and actuarial science (the conference name in reverse order). We first review backward preferences, implicit in the classical Merton problem, and forward preferences, introduced by Musiela and Zariphopoulou (2007). To demonstrate how forward preferences remove model pre-commitment, we numerically illustrate the effectiveness of real-time learning from the market, in terms of both expected earnings and computational time. We conclude the talk with an application of forward preferences to the pricing and hedging of equity-linked life insurance contracts.
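
For context, the defining property of a forward performance process in the sense of Musiela and Zariphopoulou is recalled below; this is the standard formulation, supplied here as background rather than taken from the talk.

```latex
% Standard defining property of a forward performance process U_t(x):
% a supermartingale along any admissible wealth process X^\pi and a
% martingale along an optimal one X^{\pi^*}.
\[
\mathbb{E}\bigl[\,U_t(X_t^{\pi}) \mid \mathcal{F}_s\,\bigr] \le U_s(X_s^{\pi}),
\qquad
\mathbb{E}\bigl[\,U_t(X_t^{\pi^*}) \mid \mathcal{F}_s\,\bigr] = U_s(X_s^{\pi^*}),
\quad 0 \le s \le t .
\]
```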


Link to the presentation