
Lyon room - Credit and Default Risks

Before the 2008 financial crisis, most research in financial mathematics focused on pricing derivatives without considering the effects of counterparty default, illiquidity problems, and the role of the repurchase agreement (repo) market. In our research, we apply an alternating renewal process to describe the switching between different financial regimes and develop a framework for pricing a European claim. The price is characterized as the solution to a backward stochastic differential equation (BSDE), and we prove the existence and uniqueness of this solution. In a numerical study based on a deep learning algorithm for BSDEs, we compare the effects of different parameters on the valuation of the claim.
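As an illustration of the regime mechanism, the following minimal Python sketch simulates a two-state alternating renewal process; the exponential sojourn times and their means are purely illustrative assumptions, not the paper's specification.

```python
import numpy as np

def simulate_alternating_renewal(T, rng, mean_normal=2.0, mean_stressed=0.5):
    """Simulate one path of a two-state alternating renewal process on [0, T].

    The market alternates between a 'normal' regime (state 0) and a
    'stressed' regime (state 1); sojourn times are drawn from exponential
    distributions purely for illustration."""
    times, states = [0.0], [0]
    t, state = 0.0, 0
    while t < T:
        mean = mean_normal if state == 0 else mean_stressed
        t += rng.exponential(mean)   # length of the current sojourn
        state = 1 - state            # switch regime at the renewal epoch
        times.append(min(t, T))
        states.append(state)
    return np.array(times), np.array(states)

rng = np.random.default_rng(0)
times, states = simulate_alternating_renewal(T=5.0, rng=rng)
print(list(zip(times.round(3), states)))
```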

Link to the paper

Link to the presentation

In this paper we investigate two topics related to the impact of COVID-19 on financial markets. We first conduct an event study analyzing the difference between volatility, estimated by a GED-GARCH(1,1) model, and the out-of-sample predictions after the shock, which we treat as the counterfactual. In the second part, we test for an "inefficient markets effect" on volatility. In particular, we use the multifractional Brownian motion model to compute our market inefficiency measure, defined as the difference between the Hurst exponent under efficient markets (H = 0.5) and the Hurst exponent estimated with the AMBE method of Bianchi et al. (2013). While before the COVID-19 shock we find only weak explanatory power of inefficiency for volatility, we observe a strong relationship after the shock. Furthermore, we extend the analysis to two other major financial crises (1987 and 2008) and find almost the same results. We then develop a difference-in-differences analysis, demonstrating that the COVID-19 event generated higher inefficiency and, therefore, higher volatility. Extending the analysis again to the 1987 and 2008 crises yields the same evidence.
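A minimal sketch of the volatility-estimation step, assuming the Python `arch` package and synthetic returns in place of the actual market data; the paper's estimation pipeline may differ.

```python
import numpy as np
import pandas as pd
from arch import arch_model

# Synthetic daily returns standing in for the index series analysed in the paper.
rng = np.random.default_rng(1)
returns = pd.Series(rng.standard_t(df=5, size=1500))

# GARCH(1,1) with generalized error distribution (GED) innovations.
model = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1, dist="ged")
fitted = model.fit(disp="off")

# Out-of-sample volatility forecast beyond the estimation window,
# usable as a counterfactual against post-shock realized volatility.
forecast = fitted.forecast(horizon=20)
print(fitted.params)
print(np.sqrt(forecast.variance.iloc[-1]))
```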

Link to the presentation

Recent proposals for the reform of the euro area advocate the creation of a market in synthetic securities backed by portfolios of sovereign bonds. Most debated are the so-called European Safe Bonds or ESBies proposed by Brunnermeier et al. (2017). Since the potential benefits of ESBies hinge on the assertion that these products can fulfill the function of a safe asset for the euro area, this paper provides a comprehensive quantitative analysis of ESBies that is relevant also for the analysis of eurobonds. Our first contribution is a novel dynamic credit risk model which captures salient features of sovereign CDS spreads in the euro area.
After successfully calibrating our model, we study in detail the risks associated with ESBies. We discuss model-independent price bounds and the rating of ESBies, analyse the impact of model parameters and attachment points on the size and the volatility of the credit spread of ESBies, and consider several approaches to assessing the market risk of ESBies. We conclude with a brief discussion of the policy implications of our analysis.
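To make the role of the attachment point concrete, here is a small Python sketch of how a portfolio loss is mapped to junior and senior (ESBies-type) tranche losses; the simulated loss distribution and the 30% subordination level are illustrative assumptions, not outputs of the paper's credit risk model.

```python
import numpy as np

def tranche_loss(portfolio_loss, attachment, detachment):
    """Loss on a tranche, as a fraction of the tranche notional."""
    width = detachment - attachment
    return np.clip(portfolio_loss - attachment, 0.0, width) / width

# Hypothetical simulated loss fractions on a pooled sovereign bond portfolio.
rng = np.random.default_rng(2)
portfolio_losses = rng.beta(0.5, 12.0, size=100_000)

attachment = 0.30   # illustrative subordination level
junior = tranche_loss(portfolio_losses, 0.0, attachment)   # first-loss tranche
senior = tranche_loss(portfolio_losses, attachment, 1.0)   # ESBies-type tranche

print("expected junior tranche loss:", round(float(junior.mean()), 4))
print("expected senior tranche loss:", round(float(senior.mean()), 6))
```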

Link to the paper

Link to the presentation

Douala room - Data Science

Family history can be one of the biggest factors an insurance company looks at when someone applies for a life insurance policy. We have in mind things such as a family history of cardiovascular disease, death by cancer, or a family history of high blood pressure and diabetes, which could result in higher premiums or no coverage at all. In this article, we use massive (historical) data to study dependencies between life lengths within families. While joint-life contracts (between a husband and a wife) have long been studied in the actuarial literature, little is known about child-parent dependencies. We illustrate those dependencies using nineteenth-century family trees in France and quantify the implications for annuity computations.
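As a toy illustration of why lifetime dependence matters for annuity values, the following Python sketch prices a last-survivor annuity by Monte Carlo for two lives coupled through a Gaussian copula; the exponential lifetimes, copula correlation and interest rate are hypothetical and unrelated to the paper's family-tree data.

```python
import numpy as np
from scipy.stats import norm

def last_survivor_annuity(rho, n_sims=100_000, v=1 / 1.02, seed=3):
    """Monte Carlo EPV of a last-survivor annuity-due of 1 per year for two
    lives whose (exponential, illustrative) lifetimes are coupled by a
    Gaussian copula with correlation rho."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n_sims)
    u = norm.cdf(z)
    t1 = -30.0 * np.log(1.0 - u[:, 0])   # lifetime of life 1, mean 30 years
    t2 = -20.0 * np.log(1.0 - u[:, 1])   # lifetime of life 2, mean 20 years
    last = np.floor(np.maximum(t1, t2)).astype(int)  # last full year with a survivor
    discount = v ** np.arange(last.max() + 1)
    cum = np.cumsum(discount)            # cum[k] = value of payments at 0, ..., k
    return cum[last].mean()

print("independent lives:", round(last_survivor_annuity(0.0), 3))
print("dependent lives:  ", round(last_survivor_annuity(0.6), 3))
```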

Link to the presentation

We develop Gaussian process (GP) models for incremental loss ratios in loss development triangles. Our approach brings a machine-learning, spatial-based perspective to stochastic loss modeling. GP regression provides a non-parametric predictive distribution for future losses, capturing uncertainty across three distinct layers: model risk, correlation risk, and extrinsic uncertainty due to randomness in observed losses. To handle statistical features of loss development analysis, namely spatial non-stationarity, convergence to ultimate claims, and heteroskedasticity, we develop several novel implementations of fully Bayesian GP models. We perform extensive empirical analyses over the NAIC loss development database across six business lines, comparing our models and demonstrating their strong performance. Our computational work is performed in the R and Stan programming environments.
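Since the paper's computations are done in R and Stan, the following is only a language-agnostic illustration: a minimal NumPy sketch of GP regression on incremental loss ratios with a squared-exponential kernel and hypothetical observations (the paper's fully Bayesian, non-stationary models are considerably richer).

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=2.0, variance=0.05):
    """Squared-exponential covariance between development lags."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

# Hypothetical incremental loss ratios observed at development lags 1..6.
x_obs = np.arange(1, 7, dtype=float)
y_obs = np.array([0.35, 0.22, 0.12, 0.07, 0.04, 0.02])
x_new = np.arange(1, 11, dtype=float)   # predict out to lag 10
noise = 1e-4                            # observation noise variance

K = rbf_kernel(x_obs, x_obs) + noise * np.eye(x_obs.size)
K_s = rbf_kernel(x_new, x_obs)
K_ss = rbf_kernel(x_new, x_new)

# GP posterior mean and standard deviation at the new development lags.
post_mean = K_s @ np.linalg.solve(K, y_obs)
post_cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
post_sd = np.sqrt(np.clip(np.diag(post_cov), 0.0, None))

for lag, m, s in zip(x_new, post_mean, post_sd):
    print(f"lag {int(lag):2d}: mean {m:6.3f}  sd {s:5.3f}")
```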

Link to the presentation

Loss development, in terms of loss development factors (LDF) and incremental loss ratios (ILR) in P&C insurers' business lines, can be viewed as functional data across development periods, observed discretely in NAIC Schedule P loss triangles. Regulators, reinsurers and other parties may wish to learn from a large number of triangles in order to identify patterns and anomalies of loss development in the market. Relying on robust principal component analysis (RPCA), we study the ILR of the workers' compensation line across hundreds of companies and over 13 years. RPCA can be applied to functional data and helps us to (i) detect and isolate outlying loss development; (ii) reduce the dimension of the functional data to a few factors that can be interpreted as short-term, mid-term and long-term loss development. Our analysis shows that companies with different business and regional focuses have distinctive development patterns. Also, loss development during the late 1980s differs from that of the 1990s. As a key contribution, our findings provide a more profound understanding of loss development in the market as well as easy-to-use analysis and visualization tools.
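For readers unfamiliar with RPCA, the sketch below implements the standard principal component pursuit decomposition (low-rank plus sparse) via the usual augmented Lagrangian iteration and applies it to a hypothetical matrix of ILR curves with a couple of injected outliers; this is a generic illustration, not the paper's code.

```python
import numpy as np

def rpca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Principal component pursuit: decompose M into low-rank L plus sparse S."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or m * n / (4.0 * np.abs(M).sum())
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    norm_M = np.linalg.norm(M, "fro")
    for _ in range(max_iter):
        # Low-rank update via singular value thresholding.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Sparse update via soft thresholding.
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y = Y + mu * (M - L - S)
        if np.linalg.norm(M - L - S, "fro") < tol * norm_M:
            break
    return L, S

# Hypothetical ILR curves: 50 companies x 10 development periods, 2 outliers.
rng = np.random.default_rng(6)
base = 0.4 * np.exp(-0.5 * np.arange(10))
M = base + 0.01 * rng.standard_normal((50, 10))
M[[3, 17], 5:] += 0.3                      # anomalous late development

L, S = rpca(M)
outliers = np.where(np.abs(S).sum(axis=1) > 0.5)[0]
print("flagged companies:", outliers)
```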

Bogota room - Mortality modelling and Pensions

In the context of a globally aging population, improved longevity and low interest rates, the question of pension plan under-funding and adequate financial planning for the elderly is gaining attention worldwide, both among experts and in the popular media. The emergence of societal changes – the peer-to-peer business model and financial disintermediation – may also have contributed to the resurgence of the "tontine" in various papers and to the proposal of further models such as Tontine Pensions (Forman & Sabin, Survivor Funds, 2016), ITA – Individual Tontine Accounts (Fullmer & Sabin, 2018), Pooled-survival funds (Newfield, 2014), and Pooled Annuity Funds (Donnelly, Actuarial fairness and solidarity in pooled annuity funds, 2015), to name a few.

In this presentation, we revisit the ITA mechanism proposed by Fullmer & Sabin (2018), which allows the pooling of individual annuities through a self-insured community. This "tontine" generalization retains the flexibility of an individual design: open contribution for a heterogeneous population, individualized asset allocation and a predesigned annuitization plan. Actuarial fairness is achieved by allocating the proceeds of deceased members to survivors using an individual pool share, which is a function of the prospective expected payouts for the period considered.

After a brief introduction, this presentation provides a formalization of the mathematical framework, analyses simulated outcomes under various assumptions, proposes technical solutions to overcome shortcomings, and discusses more generally the requirements for a practical implementation.
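The share rule can be illustrated with a toy redistribution step: the following Python sketch allocates the balances of deceased members to survivors in proportion to balance times one-year death probability, a simplified stand-in for the prospective-expected-payout share of Fullmer & Sabin (2018), with all numbers hypothetical.

```python
import numpy as np

def allocate_mortality_credits(balances, q, died):
    """Toy redistribution of deceased members' balances among survivors,
    weighted by each survivor's balance times its one-year death probability
    (a simplified stand-in for the ITA pool-share rule)."""
    balances = np.asarray(balances, dtype=float)
    q = np.asarray(q, dtype=float)
    died = np.asarray(died, dtype=bool)
    pot = balances[died].sum()                   # proceeds of deceased members
    weights = np.where(died, 0.0, balances * q)  # survivors' risk exposure
    credits = pot * weights / weights.sum()
    return balances * ~died + credits            # survivors keep balance + credit

balances = [100_000, 80_000, 120_000, 50_000]
q = [0.02, 0.01, 0.05, 0.03]      # hypothetical one-year death probabilities
died = [False, True, False, False]
print(allocate_mortality_credits(balances, q, died).round(2))
```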


Link to the paper

Link to the presentation

Modelling population mortality patterns and how they evolve over time has always been a complex issue. Since the first Age, Period and Cohort models appeared in the 1980s, many complex variations of those models have been proposed, the most popular being the Lee-Carter and Cairns-Blake-Dowd models and their numerous extensions that include cohort effects. One thing all those models have in common is their parametric nature. While the simpler models do not provide a good fit to the data, the more complex ones somewhat lack interpretability. Furthermore, parameter inference usually requires several steps, owing to their non-linear nature and the difficulty of separating period and cohort effects, and forecasting is then performed separately. The consequence of these complex procedures is that information about the variance of the inferred parameters is lost along the way.

In this presentation, we propose a comprehensive decomposition of national mortality patterns including Age, Period and Cohort marginal effects as well as Age-Period interaction terms. This decomposition is made possible by the use of mixed models. Our approach removes the restrictions associated with parametric models while keeping a high degree of interpretability. Identifiability issues associated with the simultaneous use of cohort and period effects are removed. Parameter inference and forecasting are performed in a single step. As it is based on a single generalized linear model, our approach also allows for straightforward confidence intervals for both model parameters and predicted values. The method is applied to several countries using data from the Human Mortality Database.
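As a much simplified illustration of fitting age and period effects in a single step with ready-made confidence intervals, here is a Poisson GLM sketch in Python with `statsmodels` on synthetic data; the presentation's mixed-model decomposition, which also includes cohort effects and age-period interactions, goes well beyond this.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic death counts and exposures on an age x year grid.
rng = np.random.default_rng(4)
grid = pd.DataFrame(
    [(a, y) for a in range(60, 90) for y in range(1990, 2020)],
    columns=["age", "year"],
)
grid["exposure"] = 10_000.0
true_rate = np.exp(-9.5 + 0.09 * grid["age"] - 0.01 * (grid["year"] - 1990))
grid["deaths"] = rng.poisson(true_rate * grid["exposure"])

# Poisson GLM with age and period as factors and a log-exposure offset.
fit = smf.glm(
    "deaths ~ C(age) + C(year)",
    data=grid,
    family=sm.families.Poisson(),
    offset=np.log(grid["exposure"]),
).fit()

print(fit.params.head())
print(fit.conf_int().head())   # confidence intervals come directly from the fit
```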


Link to the presentation

We introduce four variants of the common age effect model proposed by Kleinow (2015), which describes the mortality rates of multiple populations. Our model extensions are based on the assumption of multiple common age effects, each of which is shared only by a subgroup of all considered populations. We apply different clustering methods to identify suitable subgroups.

Some of the clustering algorithms, like k-means, are borrowed from the unsupervised learning literature, while others, like the augmented common factor clustering algorithm, which goes back in principle to Li and Lee (2005), are rather domain-specific. In particular, we also consider a fuzzy clustering obtained by maximum likelihood estimation.

Our goal is to improve model fit and forecasting performance while keeping the number of parameters small. Thanks to their good interpretability, our clustering-based models also allow some insights into the historical mortality dynamics of the populations under consideration.

Numerical results and graphical illustrations of the considered models, together with their in-sample and out-of-sample performance, are provided.
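As a hedged illustration of the subgroup-identification step, the sketch below applies k-means to hypothetical estimated age-effect curves for ten populations; in the actual work the curves come from the fitted common age effect model, and other clustering methods (including the fuzzy, likelihood-based one) are considered as well.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical estimated age-effect curves beta_x for 10 populations at ages 60..89.
rng = np.random.default_rng(5)
ages = np.arange(60, 90)
pattern_a = 0.08 + 0.0005 * (ages - 60)      # one common age profile
pattern_b = 0.10 - 0.0004 * (ages - 60)      # a second common age profile
curves = np.vstack(
    [pattern_a + 0.003 * rng.standard_normal(ages.size) for _ in range(5)]
    + [pattern_b + 0.003 * rng.standard_normal(ages.size) for _ in range(5)]
)

# Group populations into two subgroups that would share a common age effect.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(curves)
print(labels)
```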


Link to the presentation