
douala room - Data Science

This paper reviews Machine Learning (ML) algorithms and summarises their applications in future Insurance Markets. New AI algorithms are constantly emerging, with each ‘strain’ mimicking a new form of human learning, reasoning, knowledge, and decision-making. The main disruptive forms of learning at present include Deep Learning, Adversarial Learning, Transfer and Meta Learning, and Federated Learning.
These new models and applications will drive changes in future Insurance Markets, so it is important to understand their computational strengths and weaknesses.

In this paper we review AI, ML and associated algorithms in the Insurance context, examine their computational strengths and weaknesses, and discuss their future impact on the Insurance Markets. We also discuss the explainability of AI algorithms, which is of particular importance in regulated Markets.

Link to the presentation

Accurate modeling and forecasting of human mortality rates is important in actuarial science, to price life insurance products and to evaluate pension plans, and in finance, to price derivative products used to hedge longevity risk. Data show that mortality rates have been decreasing at all ages over time, especially in the last century. Predicting the extent of future longevity improvement represents a difficult and important problem for the life insurance industry and for sponsors of pension plans and social security programs.

The most popular methodology for forecasting future mortality improvement was proposed by Lee and Carter (1992, JASA). It consists of a two-step process that has been shown to suffer from identifiability issues, both in the Lee-Carter model and in its subsequent extensions, mostly due to the inherent two-step setup. We propose a very different, data-driven approach using a class of deep neural networks to model and forecast human mortality.
The main component of these neural networks is a long short-term memory (LSTM) layer, introduced by Hochreiter and Schmidhuber (1997, NC) to fix vanishing gradients in simple recurrent neural networks. The model can be constructed for short-term as well as for long-term forecasting.
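For readers unfamiliar with the architecture, the sketch below shows a minimal LSTM-based forecaster of the kind described; the layer sizes, window length, and synthetic data are placeholders rather than the authors' configuration.

```python
import numpy as np
from tensorflow.keras import layers, models

# Hypothetical setup: mortality improvement rates arranged as sliding windows of
# `lookback` years, with one feature per age group. The data here are synthetic.
lookback, n_ages = 10, 100
X = np.random.randn(500, lookback, n_ages).astype("float32")
y = np.random.randn(500, n_ages).astype("float32")   # next-year rates

model = models.Sequential([
    layers.Input(shape=(lookback, n_ages)),
    layers.LSTM(128),        # long short-term memory layer (Hochreiter & Schmidhuber, 1997)
    layers.Dense(n_ages),    # one output per age group
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Long-term forecasts can be produced recursively by feeding each prediction
# back into the input window.
window = X[-1:]
forecasts = []
for _ in range(20):          # 20-year horizon
    pred = model.predict(window, verbose=0)
    forecasts.append(pred[0])
    window = np.concatenate([window[:, 1:, :], pred[None, :, :]], axis=1)
```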

We model the dependence between mortality improvements observed simultaneously in different countries. Current mortality improvement models are fitted to single-country sub-populations separately, even if improvement trends are similar across countries.

The multi-population problem presents serious computational challenges, which we tackle with LSTMs fitted to learn from the single-country populations included in the Human Mortality Database (https://www.mortality.org/).

Link to the presentation

Machine learning (ML) has become one of the many useful tools in the field of quantitative finance. As this area of research is still young, many questions remain to be answered, in particular how existing techniques compare with those suggested by new ML methods. In this talk we examine the usefulness of Deep Reinforcement Learning (DRL), comparing it against delta hedging as a benchmark. Specifically, we use the Deep Deterministic Policy Gradient (DDPG) method and show how it converges to the correct solution in a Black-Scholes framework.
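The delta-hedging benchmark mentioned above can be sketched in a few lines under Black-Scholes assumptions; the DDPG agent itself is not reproduced here, and all parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def bs_delta(S, K, r, sigma, tau):
    """Black-Scholes delta of a European call with time to maturity tau."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    return norm.cdf(d1)

def bs_call(S, K, r, sigma, tau):
    """Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

# Simulate one geometric Brownian motion path and rebalance the hedge daily.
S0, K, r, sigma, T, n = 100.0, 100.0, 0.01, 0.2, 1.0, 252
dt = T / n
rng = np.random.default_rng(0)
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * rng.standard_normal(n)))
S = np.concatenate(([S0], S))

cash, shares = bs_call(S0, K, r, sigma, T), 0.0    # start with the option premium
for i in range(n):
    target = bs_delta(S[i], K, r, sigma, T - i * dt)
    cash -= (target - shares) * S[i]               # rebalance the stock position
    cash *= np.exp(r * dt)                         # accrue interest over the step
    shares = target

hedging_error = shares * S[-1] + cash - max(S[-1] - K, 0.0)
print("terminal hedging error:", hedging_error)
```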

bogota room - Extreme Value Theory

In several applications of heavy-tail modelling, the assumed Pareto behavior is ultimately tempered at the largest data values. In insurance applications, claim payments are influenced by claim management: claims are subject to a higher level of inspection at the highest damage levels, leading to weaker tails than apparent from modal claims. In other instances, the nature of the measurement process may cause under-recovery of the largest values. Inspired by applications in geophysics and finance, Meerschaert et al. (2012) studied parameter estimation for exponential tempering of a simple Pareto distribution. Raschke (2019) discussed applications in insurance using Weibull tempering.
In this paper, we generalize the results of these recent papers to the tempering of a Pareto-type distribution by a Weibull distribution in a peaks-over-threshold approach. This requires modulating the tempering parameters as a function of the chosen threshold. We use a pseudo maximum likelihood approach to estimate the model parameters, and consider the estimation of return periods of extreme levels and of extreme quantiles. We report on some simulation experiments, provide basic asymptotic results and discuss insurance applications.
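A minimal numerical sketch of the pseudo maximum likelihood step is given below, assuming a Weibull-tempered Pareto survival function of the form S(x) = (x/u)^(-alpha) * exp(-lam * (x^beta - u^beta)) for exceedances of a threshold u; the exact parametrisation and estimation details of the paper may differ.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, x, u):
    """Negative log-likelihood of exceedances x > u under the survival function
    S(x) = (x/u)**(-alpha) * exp(-lam*(x**beta - u**beta)); its density is
    S(x) * (alpha/x + lam*beta*x**(beta-1))."""
    alpha, lam, beta = params
    if alpha <= 0 or lam < 0 or beta <= 0:
        return np.inf
    log_s = -alpha * np.log(x / u) - lam * (x**beta - u**beta)
    log_h = np.log(alpha / x + lam * beta * x**(beta - 1))
    return -np.sum(log_s + log_h)

# Placeholder exceedances above a chosen threshold u (plain Pareto data here).
rng = np.random.default_rng(1)
u = 1.0
x = u * (1.0 + rng.pareto(2.0, size=2000))

res = minimize(neg_loglik, x0=[1.5, 0.1, 1.0], args=(x, u), method="Nelder-Mead")
alpha_hat, lam_hat, beta_hat = res.x
print(alpha_hat, lam_hat, beta_hat)

# A return level for return period m (in exceedances) can then be obtained by
# solving S(x) = 1/m numerically, e.g. with scipy.optimize.brentq.
```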


Link to the paper

Link to the presentation

A phase-type distribution is the distribution of the time until absorption of a time-homogeneous Markov jump process defined on a finite state space in which one state is absorbing and the rest are transient. Phase-type distributions have been employed in a variety of contexts since they often provide exact, or even explicit, solutions to important problems in complex stochastic models; this is the case, for example, in renewal theory, queueing theory, and risk theory. Moreover, they form a dense class within the class of distributions on the positive reals. However, one of the main concerns about the use of phase-type models in applications is that, by construction, phase-type tails are always light (of exponential type), making them less suitable where heavy tails are present. Recently, the class of inhomogeneous phase-type distributions was introduced in Albrecher and Bladt [Journal of Applied Probability, 56(4):1044-1064, 2019] as a dense extension of classical phase-type distributions, which leads to more parsimonious models in the presence of heavy tails. In this talk, we propose a procedure for fitting this class to given data. We furthermore consider an analogous extension of Kulkarni’s multivariate phase-type class to the inhomogeneous framework and study parameter estimation for the resulting new and flexible class of multivariate distributions. The performance of the algorithms is illustrated in several numerical examples.
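For reference, the basic phase-type quantities can be computed directly from the sub-intensity matrix, as in the sketch below; the inhomogeneous class is obtained from such a distribution by a deterministic transformation of the time scale. The matrix and initial distribution here are illustrative.

```python
import numpy as np
from scipy.linalg import expm

# Sub-intensity matrix of the transient states and initial distribution;
# the particular numbers are illustrative only.
T = np.array([[-3.0, 2.0],
              [0.0, -1.0]])
pi = np.array([0.7, 0.3])
t = -T @ np.ones(2)                  # exit-rate vector into the absorbing state

def ph_pdf(x):
    """Phase-type density: pi * exp(T x) * t."""
    return pi @ expm(T * x) @ t

def ph_cdf(x):
    """Phase-type CDF: 1 - pi * exp(T x) * 1."""
    return 1.0 - pi @ expm(T * x) @ np.ones(2)

# If Y is phase-type and g is increasing with g(0) = 0, then g(Y) is
# inhomogeneous phase-type; e.g. g(y) = exp(y) - 1 yields a heavy-tailed
# matrix-Pareto distribution (Albrecher & Bladt, 2019).
print(ph_pdf(1.0), ph_cdf(1.0))
```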


Link to the presentation

Recently the authors have introduced a procedure for modifying the argument of a distribution function F(x) through a suitable function g(x) in order to improve data fits. The new distribution function F(g(x)), tailored to improve fits, has been called a doped distribution function. In this paper we apply this procedure to revisit the modeling of loss data. We focus on modeling the whole data set, which is challenging due to the simultaneous presence of low and high values. In particular, we analyze the well-known Danish fire insurance losses, which have been extensively studied by scholars. Remarkable improvements in data fits are found, and the impact of the new fits is shown, in particular when considering risk measures.
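Schematically, the doping construction only requires composing a baseline distribution function with a suitable increasing transform, as in the sketch below; the baseline F and the transform g here are placeholders and not the choices made for the Danish data.

```python
import numpy as np
from scipy.stats import lognorm

F = lognorm(s=1.0).cdf          # baseline distribution function (placeholder)

def g(x, a=1.0, b=0.5):
    # g must be increasing and map the support of the data onto that of F,
    # so that F(g(x)) is again a distribution function
    return a * x + b * np.log1p(x)

def doped_cdf(x):
    """Doped distribution function F(g(x))."""
    return F(g(x))

print(doped_cdf(np.array([0.5, 1.0, 5.0, 20.0])))
```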


Link to the presentation

montreal room - Financial pricing

In this paper we start from empirical findings on the behaviour of futures prices in commodity markets and propose a continuous-time model that allows the price function to be represented in semi-analytical form. In particular, we study the term structure of futures prices under the assumption that the underlying asset price follows an exponential CARMA(p,q) model in which the driving noise is a time-changed Brownian motion. The resulting formula is closely connected to the cumulant generating function of the subordinator process. The main advantages of the proposed model are the possibility of working directly with market data without requiring a regular grid, and its ability to capture complex time-dependent structures through different shapes of the autocovariance function.
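A minimal sketch of the CARMA(p,q) state-space representation and its kernel g(t) = b' exp(At) e_p is given below; the coefficients are illustrative, and the futures-price formula itself, which involves the cumulant generating function of the subordinator, is not reproduced here.

```python
import numpy as np
from scipy.linalg import expm

# CARMA(p, q) in state-space form: dX_t = A X_t dt + e_p dL_t, Y_t = b' X_t,
# with kernel g(t) = b' exp(A t) e_p. Coefficients below are illustrative.
a = [2.0, 1.0]                       # autoregressive coefficients a_1, ..., a_p (p = 2)
b = [1.0, 0.5]                       # moving-average coefficients b_0, ..., b_q (q = 1)

p = len(a)
A = np.zeros((p, p))
A[:-1, 1:] = np.eye(p - 1)
A[-1, :] = -np.array(a[::-1])        # companion matrix: last row is (-a_p, ..., -a_1)
e_p = np.zeros(p); e_p[-1] = 1.0
b_vec = np.zeros(p); b_vec[:len(b)] = b

def carma_kernel(t):
    """Kernel g(t) of the CARMA process."""
    return b_vec @ expm(A * t) @ e_p

print([round(carma_kernel(t), 4) for t in (0.1, 0.5, 1.0, 2.0)])
```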

Link to the presentation

This paper considers the valuation of energy quanto options when the underlying price processes are governed by Markov-modulated additive processes, which have independent but not stationary increments within each regime. The pricing formula is obtained using the Fast Fourier Transform (FFT) technique, under the assumption that the joint characteristic function of the Markov-modulated additive processes is known analytically. As an application of our pricing formulas, we consider a quanto option written on temperature and electricity futures prices. Several numerical examples illustrate the usefulness of our model for the pricing of energy quanto options.
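For illustration, the FFT technique is sketched below in one dimension, with a Black-Scholes characteristic function standing in for the model; the paper's setting requires the joint (two-dimensional) characteristic function of the Markov-modulated additive processes instead.

```python
import numpy as np

# Carr-Madan style FFT pricing of European calls from a characteristic function.
S0, r, sigma, T = 100.0, 0.01, 0.2, 1.0
alpha, N, eta = 1.5, 2**12, 0.25            # damping factor, grid size, frequency spacing

def cf(u):
    """Characteristic function of log S_T (Black-Scholes placeholder)."""
    mu = np.log(S0) + (r - 0.5 * sigma**2) * T
    return np.exp(1j * u * mu - 0.5 * sigma**2 * T * u**2)

v = eta * np.arange(N)
psi = np.exp(-r * T) * cf(v - 1j * (alpha + 1)) \
      / (alpha**2 + alpha - v**2 + 1j * (2 * alpha + 1) * v)

lam = 2 * np.pi / (N * eta)                 # log-strike spacing
b = 0.5 * N * lam
w = 3.0 + (-1.0) ** np.arange(1, N + 1)     # Simpson-rule weights (times 3)
w[0] -= 1.0
x = np.exp(1j * b * v) * psi * eta * w / 3.0
k = -b + lam * np.arange(N)                 # log-strike grid
calls = np.exp(-alpha * k) / np.pi * np.real(np.fft.fft(x))

print(np.interp(np.log(100.0), k, calls))   # close to the Black-Scholes call price
```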

Link to the presentation

The problem of pricing American-type basket derivatives has become relevant in actuarial science with the development of equity-linked insurance products. Existing methods are rather standard when the underlying fund consists of a single asset, but the problem becomes more challenging when the fund is composed of multiple assets. In the latter case, we say that the option embedded in the equity-linked product is written on a basket of assets.

In this paper, we consider the pricing of American-type basket derivatives by numerically solving a partial differential equation (PDE). The curse of dimensionality inherent in basket derivative pricing is circumvented by using the theory of comonotonicity. We start by deriving a PDE for the European-type comonotonic basket derivative price, together with a unique self-financing hedging strategy. We then show how the results for the comonotonic market can be used to approximate American-type basket derivative prices for a basket with correlated stocks. Our methodology generates American basket option prices that are in line with those obtained via the standard Least-Squares Monte Carlo (LSM) approach. Moreover, the numerical tests illustrate the performance of the proposed method in terms of computation time, and highlight some deficiencies of the standard LSM method.
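The LSM benchmark referred to above can be sketched for a single asset as follows; the multi-asset basket setting of the paper is not reproduced, and all parameter values are illustrative.

```python
import numpy as np

# Longstaff-Schwartz (LSM) Monte Carlo for an American put on a single asset.
rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 50, 100_000
dt = T / n_steps
disc = np.exp(-r * dt)

# Simulate geometric Brownian motion paths.
z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
S = np.hstack([np.full((n_paths, 1), S0), S])

cash_flow = np.maximum(K - S[:, -1], 0.0)          # exercise value at maturity
for t in range(n_steps - 1, 0, -1):
    cash_flow *= disc                              # discount one step back to time t
    itm = K - S[:, t] > 0                          # regress on in-the-money paths only
    if itm.sum() > 0:
        coeffs = np.polyfit(S[itm, t], cash_flow[itm], deg=2)
        continuation = np.polyval(coeffs, S[itm, t])
        exercise = K - S[itm, t]
        ex_now = exercise > continuation
        cash_flow[np.where(itm)[0][ex_now]] = exercise[ex_now]

print("American put (LSM):", disc * cash_flow.mean())
```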

Link to the paper

Link to the presentation