
lyon room - Optimal risk sharing

We consider the optimal reinsurance problem for several dependent risks, assuming an expected-utility maximization criterion and independent negotiation of reinsurance for each risk.

Without any particular assumption on the dependence structure, we show that optimal treaties exist in a class of independent randomized contracts.
We derive optimality conditions and show that under mild assumptions the optimal contracts are of classical (non-randomized) type.
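
Schematically, and in generic notation rather than the paper's own, the problem is of the following form, the novelty being that the indemnities I_i may be randomized:

% Generic formulation (notation assumed, not taken from the paper): the insurer
% holds dependent risks X_1, ..., X_n, negotiates an indemnity I_i for each risk
% separately at premium \pi_i(I_i), and maximizes expected utility of wealth w.
\max_{I_1,\dots,I_n}\;
\mathbb{E}\!\left[ u\!\left( w - \sum_{i=1}^{n}\bigl(X_i - I_i(X_i)\bigr)
- \sum_{i=1}^{n}\pi_i(I_i) \right) \right],
\qquad 0 \le I_i(x) \le x .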

 

Link to the presentation

This paper studies bilateral risk-sharing with no aggregate uncertainty, when agents maximize rank-dependent utilities. We characterize the structure of Pareto optimal risk-sharing contracts in full generality. We then derive a necessary and sufficient condition for Pareto optima to be no-betting allocations (i.e., deterministic allocations), thereby answering the question of when sunspots do not exist in this economy. This condition depends only on the probability weighting functions of the two agents, and not on their (concave) utility functions.
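
For orientation, the rank-dependent criterion can be written schematically (standard notation, not taken from the paper): an agent i with concave utility u_i and probability weighting function T_i evaluates a nonnegative random wealth W by the Choquet integral below; the no-betting condition mentioned above then involves only T_1 and T_2.

% Schematic rank-dependent utility (assumed standard form, with u_i(0) = 0):
V_i(W) \;=\; \int_0^{\infty} T_i\!\bigl(\mathbb{P}\left[\, u_i(W) > t \,\right]\bigr)\, dt .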

 

Link to the paper

Link to the presentation

This paper unifies the work on multiple reinsurers, distortion risk measures, premium budgets, and heterogeneous beliefs. An insurer minimizes a distortion risk measure, while seeking reinsurance from finitely many reinsurers. The reinsurers use distortion premium principles, and they are allowed to have heterogeneous beliefs regarding the underlying probability distribution. We provide a characterization of optimal reinsurance indemnities, and we show that they are of a layer-insurance type. This is done both with and without a budget constraint, i.e., an upper bound constraint on the aggregate premium. Moreover, the optimal reinsurance indemnities enable us to identify a representative reinsurer in both situations. The existence of a representative reinsurer means that all reinsurers can be treated collectively by means of a hypothetical premium principle in order to determine the optimal total risk that is ceded to all reinsurers. The optimal total ceded risk is then allocated to the reinsurers by means of an explicit solution. Finally, two examples with the Conditional Value-at-Risk illustrate our results.
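
As a schematic reminder (generic notation, not the paper's): for a nonnegative risk X with survival function S_X and a distortion function g, the distortion risk measure and a layer-type indemnity with deductible d_i and upper limit u_i take the form

% Distortion risk measure of X and a layer between d_i and u_i (notation assumed):
\rho_g(X) \;=\; \int_0^{\infty} g\bigl(S_X(x)\bigr)\,dx ,
\qquad
I_i(x) \;=\; \min\bigl( (x - d_i)_+ ,\; u_i - d_i \bigr) .

Each reinsurer i applies its own distortion function under its own belief about the distribution of X, which is the setting in which the representative reinsurer is constructed.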

Link to the paper

Link to the presentation

douala room - Motor Insurance

An algorithm to fit regression models aimed at predicting the average response beyond a conditional quantile level is presented. This procedure is implemented in a case study of insured drivers covering almost 10,000 cases. The aim is to predict the expected yearly distance driven above the legal speed limits as a function of driving patterns such as total distance and the percentages driven in urban areas and at night. Gender and age are also controlled for. Results are analyzed for the median and the top decile. The conclusions provide evidence on the factors influencing speed limit violations for risky drivers, which is of interest both for pricing motor insurance and for promoting road safety. The efficiency of the algorithm for fitting tail expectation regressions is compared with that of quantile regression: computational time roughly doubles for tail expectation regression. Standard errors are estimated via bootstrap methods. Further considerations regarding in-sample predictive performance are discussed. In particular, further restrictions should be imposed in the model specification to avoid predictions outside the plausible range.
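
As a rough illustration of the idea (not necessarily the authors' algorithm), one simple two-stage approach fits a conditional quantile first and then a regression restricted to the observations beyond it; the simulated data and all variable names below are placeholders:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder telematics-style data: yearly distance driven above the speed
# limits (speed_km), explained by total distance and urban/night percentages.
rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "total_km": rng.gamma(5.0, 2000.0, n),
    "urban_pct": rng.uniform(0, 100, n),
    "night_pct": rng.uniform(0, 30, n),
})
df["speed_km"] = 0.02 * df.total_km + 5 * df.night_pct + rng.gamma(2.0, 50.0, n)

# Stage 1: conditional quantile regression at the level of interest
# (the median, q=0.5, or the top decile, q=0.9).
tau = 0.9
qr = smf.quantreg("speed_km ~ total_km + urban_pct + night_pct", df).fit(q=tau)

# Stage 2: regression on the observations exceeding their fitted conditional
# quantile, targeting the expected response beyond that quantile level.
tail = df[df.speed_km > qr.predict(df)]
te = smf.ols("speed_km ~ total_km + urban_pct + night_pct", tail).fit()
print(te.params)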

Link to the paper

Link to the presentation

We propose a recommendation system built to improve the customer experience by suggesting the most appropriate additional cover. It is currently used by Foyer Assurances agents to help them suggest the best additional cover to their customers for the car insurance product.

Our tool helps them by automatically selecting from their large portfolios the customers most likely to extend their insurance coverage. The requirement for this system is to perform up-selling more efficiently than classic marketing campaigns. Recently, machine learning algorithms have become very popular in many different areas of knowledge, allowing up-to-date, advanced patterns to be learned from customer behavior and, consequently, customers to be targeted more accurately. In the context of a recommendation system, the use of such algorithms could allow us to generate more relevant commercial opportunities for customers.

Most of the numerous recommendation systems in the literature are mainly suited for online platforms (videos, e-commerce, etc.) and for large-scale problems, and would not fit the recommendation of insurance covers.

Indeed, the insurance context differs in three major respects:

– Data dimensions: the number of covers is limited, in comparison with thousands of books or movies proposed by online platforms;

– Trustworthiness: a high level of confidence in recommendations is needed for insurance customers. While recommending a wrong movie is not a big deal, since the viewer can always find another option among thousands of videos, recommending an inappropriate insurance cover could significantly damage customers' trust in their insurance company;

– Constraints: while any movie or book can be enjoyed by anyone (age limits aside), some covers may overlap with one another or be subject to eligibility criteria linked to the customer's profile (age, no-claims bonus level, vehicle characteristics, etc.).

The major contributions of our work are to:

– propose an architecture tailored to this insurance context, which differs from classic approaches by associating a probability of acceptance with the next best offer. We combine the Apriori algorithm, to determine which cover is best suited to each customer, with the XGBoost algorithm, to determine which customers are most likely to accept an additional cover (see the sketch after this list);

– back-test the recommendation system with relevant indicators showing that our approach gives better results than classic models (including SVD);

– present a pilot phase which allowed us to test our recommendation system on hundreds of customers. The system achieved an acceptance rate of 38% in this pilot phase, a promising result since classic rates for such marketing campaigns are around 15%.
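
A minimal sketch of how the two algorithms could be combined; the cover names, customer features and the mlxtend/xgboost-based implementation below are illustrative assumptions, not the production system:

import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules
from xgboost import XGBClassifier

# Placeholder one-hot portfolio data: which covers each customer already holds.
covers = pd.DataFrame({
    "liability":  [1, 1, 1, 1, 1, 1],
    "glass":      [1, 0, 1, 1, 0, 1],
    "theft":      [1, 1, 0, 1, 0, 0],
    "assistance": [0, 1, 1, 0, 0, 1],
}, dtype=bool)

# Step 1 (Apriori): mine cover combinations that frequently occur together,
# then keep rules whose consequent is a cover the customer does not yet hold,
# giving a candidate "next best offer" per customer.
itemsets = apriori(covers, min_support=0.3, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)

# Step 2 (XGBoost): score the probability that a customer accepts an
# additional cover, trained on past up-selling outcomes (placeholders here).
X = pd.DataFrame({"age": [34, 51, 28, 45, 62, 39],
                  "bonus_level": [0.5, 0.7, 0.4, 0.9, 1.0, 0.6]})
y = [1, 0, 1, 0, 0, 1]
clf = XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)
acceptance_prob = clf.predict_proba(X)[:, 1]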

To improve the proposed recommendation system, future work should focus on:

– Explainability of recommendations;

– Integration of new relevant features (e.g. information about contacts between customers and agents);

– Spread to other products (e.g. home insurance);

– Specific work on life event prediction: when a life event occurs, the customer sometimes has to adjust their cover. Hawkes processes could be an appropriate model.

 

Link to the presentation

In this article, we show that a new penalty function, which we call log-adjusted absolute deviation (LAAD), emerges if we theoretically extend the Bayesian LASSO further, using conjugate hyperprior distributional assumptions. It turns out that the estimator with the LAAD penalty has a closed form in the single-covariate case, and that it extends to the general case via a coordinate descent algorithm with guaranteed convergence under mild conditions. This has the advantage of avoiding unnecessary model bias while still allowing variable selection, which is linked to the choice of the tail factor in the loss development framework. We calibrate the proposed model on a multi-line insurance dataset from a property and casualty company, in which reported aggregate losses are observed across accident years and development periods.
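
The paper's closed-form LAAD update is not reproduced here; the sketch below only shows the generic cyclic coordinate descent scaffold into which such a coordinate-wise update would be plugged, using the ordinary LASSO soft-thresholding rule as a stand-in:

import numpy as np

def coordinate_descent(X, y, penalty_update, lam=0.1, n_iter=100):
    """Generic cyclic coordinate descent for penalized least squares.

    `penalty_update(rho, norm_sq, lam)` returns the closed-form coordinate-wise
    minimizer for the chosen penalty. The LAAD update from the paper would be
    plugged in here; the soft-thresholding rule below (ordinary LASSO) is only
    a stand-in.
    """
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding coordinate j, then its correlation.
            resid = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ resid
            beta[j] = penalty_update(rho, X[:, j] @ X[:, j], lam)
    return beta

def soft_threshold(rho, norm_sq, lam):
    # LASSO coordinate update: shrink toward zero, then rescale.
    return np.sign(rho) * max(abs(rho) - lam, 0.0) / norm_sq

# Usage: beta_hat = coordinate_descent(X, y, soft_threshold, lam=0.5)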

Link to the paper

Link to the presentation

bogota room - Data Science

The presentation is based on a recent paper we released on SSRN https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3528616
In the paper, we construct a pipeline to investigate heuristic diversification strategies in asset allocation. We use machine learning concepts ("explainable AI") to compare the robustness of different strategies and back out implicit rules for decision making.

As a first step, we augment the asset universe (the empirical dataset) with a range of scenarios generated by a block bootstrap of the empirical data.
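
A minimal sketch of this scenario-augmentation step, assuming the empirical dataset is a matrix of asset returns; the block size and all names are placeholders:

import numpy as np

def block_bootstrap(returns, block_size, n_scenarios, rng=None):
    """Resample a (T, n_assets) return matrix in contiguous blocks.

    Each scenario concatenates randomly chosen blocks of consecutive rows,
    which preserves short-range autocorrelation and cross-asset dependence.
    """
    if rng is None:
        rng = np.random.default_rng()
    T, n_assets = returns.shape
    n_blocks = int(np.ceil(T / block_size))
    scenarios = np.empty((n_scenarios, T, n_assets))
    for s in range(n_scenarios):
        starts = rng.integers(0, T - block_size + 1, size=n_blocks)
        blocks = [returns[t:t + block_size] for t in starts]
        scenarios[s] = np.concatenate(blocks)[:T]
    return scenarios

# Usage: scenarios = block_bootstrap(np.random.randn(1000, 5), 20, 100)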

Second, we backtest the candidate strategies over a long period of time, checking their performance variability. Third, we use XGBoost as a regression model to connect the difference between the measured performances of two strategies to a pool of statistical features of the portfolio universe tailored to the investigated strategies.

Finally, we employ the concept of Shapley values to extract the relationships that the model could identify between the portfolio characteristics and the statistical properties of the asset universe.
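
A compact sketch of these last two steps, with placeholder data; the feature matrix and target below are assumptions standing in for the universe statistics and the measured performance gap:

import numpy as np
import xgboost as xgb
import shap

# X: statistical features of each simulated universe (one row per scenario);
# y: performance gap between the two strategies on that scenario.
X = np.random.rand(500, 8)        # placeholder features
y = np.random.rand(500) - 0.5     # placeholder performance differences

model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X, y)

# TreeExplainer gives exact Shapley values for tree ensembles; the mean
# absolute value per column ranks which universe characteristics drive
# the relative performance of the two strategies.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
importance = np.abs(shap_values).mean(axis=0)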

We test this pipeline on risk-parity strategies with a volatility target, in particular comparing the machine-learning-driven Hierarchical Risk Parity (HRP) with the classical Equal Risk Contribution (ERC) strategy.

In the augmented dataset built from a multi-asset investment universe of commodities, equities and fixed income futures, we find that HRP better matches the volatility target and shows better risk-adjusted performance. Finally, we train XGBoost to learn the difference between the realized Calmar ratios of HRP and ERC and extract explanations.

The explanations provide fruitful ex-post indications of the connection between the statistical properties of the universe and the strategy performance in the training set. For example, the model confirms that features capturing the hierarchical properties of the universe are connected to the relative performance of HRP with respect to ERC.

Link to the paper

Link to the presentation

One of the main challenges for life actuaries is to adequately model and predict future mortality evolution. To this end, starting from the pivotal approach of the Lee-Carter model, several extensions and variants have been proposed in the literature, essentially all of them based on ARIMA models to describe the future mortality trend. Recently, some research works have shown the suitability of machine and deep learning techniques for improving mortality modeling and, as far as forecasting is concerned, for obtaining competitive and even outperforming results compared to ARIMA models. The present work focuses on the application of a Recurrent Neural Network model, the Long Short-Term Memory (LSTM), within the framework of the Lee-Carter model. The LSTM is designed to model and predict sequential data such as time series, capturing hidden patterns within the data and reproducing them when significant. In mortality modeling, this means that the mortality rates predicted over time take into account hidden features of the past phenomenon not captured by an ARIMA model. We provide both point forecasts and confidence intervals. Finally, a case study is implemented for Italy, distinguishing by gender.
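
A minimal sketch, assuming the usual route of applying the LSTM to the estimated Lee-Carter period index k_t rather than to raw rates; the lookback window, the network architecture and the simulated series are placeholders:

import numpy as np
from tensorflow import keras

# kappa: the Lee-Carter period index k_t, estimated beforehand (e.g. via SVD
# of the centred log-mortality surface); here a simulated placeholder series.
kappa = np.cumsum(np.random.randn(70)) - np.arange(70) * 0.5

def make_windows(series, lookback=10):
    # Turn the series into (samples, lookback, 1) inputs and next-value targets.
    X, y = [], []
    for t in range(lookback, len(series)):
        X.append(series[t - lookback:t])
        y.append(series[t])
    return np.array(X)[..., None], np.array(y)

X, y = make_windows(kappa)

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(X.shape[1], 1)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=200, verbose=0)

# One-step-ahead forecast of k_t; iterate to extend the projection horizon,
# then map the projected index back to mortality rates via the a_x, b_x terms.
next_kappa = model.predict(kappa[-10:][None, :, None], verbose=0)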

 

Link to the presentation

In this paper, we study the causes of the burst of the Bitcoin bubble, rooted in the impact of news media attention on Bitcoin. We analyse textual data taken from news articles on Bitcoin and investigate the predictive and causal power of the extracted information for modeling the dynamics of Bitcoin prices. In doing so, we apply the Latent Dirichlet Allocation (LDA) model to classify news articles into topics and measure the unusualness of each topic on a daily basis. We implement a regression discontinuity design to infer how news media coverage of Bitcoin has shaped the changing dynamics of Bitcoin prices. From this quasi-natural experiment, we find that early on, the flow of information on blockchain and Fintech shifted the attention of traders to an unregulated market like Bitcoin, consequently expanding the bubble. During the bubble phase, uncertainty surrounding the security of this digital currency resulted in the bursting of the bubble.
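
A minimal sketch of the topic-extraction step with scikit-learn; the toy articles and the number of topics are placeholders, and the daily unusualness measure is only indicated in a comment:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["bitcoin price surges as fintech adoption grows",
        "exchange hack raises security concerns for digital currency",
        "regulators debate blockchain rules"]   # placeholder articles

# Document-term matrix, then LDA to obtain per-article topic proportions.
vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_shares = lda.fit_transform(dtm)

# Aggregating topic_shares by publication day gives the daily topic mix,
# from which a daily "unusualness" score per topic can be derived.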

 

Link to the presentation

montreal room - Enterprise Risk Management and IFRS

Risk aggregation and capital allocation are of paramount importance in business, as they play critical roles in pricing, risk management, project financing, performance management, regulatory supervision, etc. The state-of-the-art practice often involves two steps: (i) determine standalone capital requirements for individual business lines and aggregate them at the corporate level; and (ii) allocate the total capital back to individual lines of business or at more granular levels. There are three pitfalls with such a practice, namely a lack of consistency, the neglect of the cost of capital, and the disconnection of allocated capitals from standalone capitals.

In this paper, we introduce a holistic approach that aims to strike a balance between the competing interests of various stakeholders and the conflicting priorities in a corporate hierarchy. Despite its unconventional strategy, the new approach still leads to an allocation of diversification benefits, as is common in many risk capital frameworks including regulatory capital and economic capital. The resulting "all-in-one" capital setting and allocation principle provides a remedy for many problems with the existing two-step practice in the financial industry.
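
In schematic form, and using generic notation rather than the paper's, the two-step practice reads: standalone requirements are computed per line from a risk measure rho, aggregated at the corporate level, and the aggregate is then allocated back, for instance by the Euler rule when rho is positively homogeneous:

% Schematic notation (assumed, not the paper's): standalone capitals K_i,
% aggregate capital K, and an Euler-type allocation A_i that sums to K when
% \rho is positively homogeneous; for a subadditive \rho, K \le \sum_i K_i,
% the gap being the diversification benefit.
K_i = \rho(X_i), \qquad
K = \rho\!\left(\sum_{i=1}^{n} X_i\right), \qquad
A_i = \left.\frac{\partial}{\partial \lambda_i}\,
      \rho\!\left(\sum_{j=1}^{n}\lambda_j X_j\right)\right|_{\lambda_1=\cdots=\lambda_n=1},
\qquad \sum_{i=1}^{n} A_i = K .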

 

Link to the presentation

In this talk, we discuss a model risk management framework through a list of items and tools essential to an effective model review:

– Governance and use of models,
– Data, methods and validation,
– Audit and reporting to board members.

These elements are mainly intended for practitioners and companies wishing to improve their model risk management system and meet ever-increasing compliance requirements.

 

Link to the presentation

The general principles for determining the financial performance of a company are that revenue is earned as goods are delivered or services are provided, and that the expenses of the period are the costs associated with this earned revenue. Following these principles in the insurance industry is a complex task. Premium payments are typically made upfront and can provide coverage for several years, or be paid many years before the coverage period starts. The associated costs are often not fully known until many years later. Hence, complexity arises both in determining how a premium paid should be earned over time, and in valuing the costs associated with this earned premium.
IFRS 17 attempts to align the insurance industry with these general accounting principles. We bring this new accounting standard into the realm of actuarial science, through a mathematical interpretation of the regulatory texts, and by defining the algorithm for profit or loss in accordance with the new standard. Furthermore, we suggest a computationally efficient risk-based method of valuing a portfolio of insurance contracts and an allocation of this value to subportfolios. Finally, we demonstrate the practicability of these methods and the algorithm for profit or loss in a large-scale numerical example.

 

Link to the paper 

Link to the presentation

sydney room - Variable annuities - Risk Measurement

We introduce a mathematical framework for asset-liability management of stable value fund guarantees.
Stable value funds are 401(k) retirement plan investment options, synthesized through book value accounting and insured at the crediting rate of return.

We first study the benefits of book value accounting, a mechanism that buffers short-term crises and transforms a volatile fund into a more stable one.

We then present a stochastic model for the assets and a dynamic lapse model for the liabilities, and compare the crediting rate of return from the model with the conventional formula used by the stable value market.
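
One commonly cited form of that conventional formula, given here only for orientation (the exact convention used in the talk may differ): with market value MV_t, book value BV_t, portfolio yield Y_t and duration D_t, the crediting rate is

% Assumed standard stable value crediting-rate convention, floored at zero:
CR_t \;=\; \max\!\left\{ 0,\; (1 + Y_t)\left(\frac{MV_t}{BV_t}\right)^{1/D_t} - 1 \right\} .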

Finally, we present hypothetical reverse stress tests and hypothetical cases of risk during past crises, including the financial crisis, the early 1980s recession and the Great Depression.

 

Link to the presentation

Variable annuities (VAs) are personal savings and investment products with long-term financial guarantees. They offer desirable investment features and have become popular among U.S. households looking to enhance their retirement savings. VAs now account for nearly $2 trillion in net assets and make up the largest category of liabilities for the U.S. life insurance industry. Since investors allocate their contributions among various mutual funds, VAs also represent a sizable share of the mutual fund sector.

Unlike mutual funds, however, VA investments enjoy downside protection from the insurer in the form of financial guarantees. These guarantees carry large amounts of systematic risk and have caused major concerns for VA providers and insurance regulators, as hedging effectiveness is complicated by their long-term nature (think 10-25 years and potentially longer) and complex payout profiles. In addition, as we document in this study, the hedging of VA liabilities is severely impeded by basis risk, that is, the discrepancy between the returns of the mutual funds underlying the guarantees and the returns of the hedging instruments used by the VA providers.

This is the first comprehensive study to empirically quantify the magnitude of basis risk in U.S. VA products. We are also the first to enhance traditional fund mapping techniques with machine learning methods. Specifically, for each of the 1,890 VA-underlying mutual funds in our sample, we use LASSO regressions in combination with external fund information (Lipper Objective Codes) to identify the most suitable mapping instruments (among a large set of 470 ETFs).
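
A minimal sketch of the mapping step for a single fund, with simulated returns standing in for the actual fund and ETF data; LassoCV selects a sparse set of mapping instruments, and the residual volatility share serves as a crude proxy for basis risk:

import numpy as np
from sklearn.linear_model import LassoCV

# Simulated stand-ins for one VA-underlying fund and a set of candidate ETFs
# (a small subset of the 470 instruments, for brevity).
rng = np.random.default_rng(0)
etf_returns = rng.normal(0.0, 0.01, size=(250, 60))
fund_returns = etf_returns[:, :3] @ np.array([0.5, 0.3, 0.2]) + rng.normal(0.0, 0.002, 250)

# LASSO with a cross-validated penalty selects the mapping ETFs.
lasso = LassoCV(cv=5).fit(etf_returns, fund_returns)
mapped = lasso.predict(etf_returns)

# Share of the fund's return volatility left unexplained by the mapping:
# a crude in-sample proxy for the basis risk faced when hedging with ETFs.
basis_risk_share = np.std(fund_returns - mapped) / np.std(fund_returns)
print(basis_risk_share)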

We find that the inclusion of data analytic techniques substantially improves the fund mapping over traditional ad-hoc methods. Even so, on average over 25% of the risk (volatility) of fund returns cannot be eliminated, no matter how sophisticated the provider's hedging strategy. Considering the long maturity of typical VA policies, the (systematic) risk embedded in the underlying guarantees is enormous, and as a result, VAs pose a more substantial threat to the long-term solvency of U.S. life insurers than currently assumed. We document that this high level of basis risk is pervasive across most Lipper Objective Code classes, asset classes, and fund types. Our findings are independent of asset return models and hedging strategies, and provide VA providers with insights on how to mitigate their basis risk exposure.

 

Link to the presentation

One might think that a larger portfolio is always preferable for an insurance company (by reference to the law of large numbers). However, it is worth examining the question from a decision theory perspective. We discuss the case of insurance oligopolies using the Bertrand model. We show that the relationship between insurance companies' preferences and portfolio size is crucial for studying the structure of the market, and that the equilibrium can differ from the traditional product market equilibrium. We define substance-averse, substance-neutral and substance-seeking behavior, and we illustrate the market equilibria in these three cases through various examples. Assuming substance-neutral insurers, we recover the traditional product market equilibrium in the insurance market. In the other cases we may face market anomalies: insurers may realize extra profit, or, if the insurers are substance seeking, there may be only one insurance company in the market. Furthermore, we examine the connection between the substance preference and the concepts of absolute measure of risk and proper risk aversion in our examples.

 

Link to the presentation