
douala room - Bayesian Statistics

In this work, we study the problem of learning the volatility under market microstructure noise. Specifically, we consider noisy discrete-time observations from a stochastic differential equation and develop a novel computational method to learn the diffusion coefficient of the equation. We take a nonparametric Bayesian approach, where we a priori model the volatility function as piecewise constant. Its prior is specified via the inverse Gamma Markov chain. Sampling from the posterior is accomplished by incorporating the Forward Filtering Backward Simulation algorithm in the Gibbs sampler. Good performance of the method is demonstrated on two representative synthetic data examples. Finally, we apply the method to the EUR/USD exchange rate dataset.
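As a toy illustration of the observation model (not the authors' Julia implementation linked below), the following Python sketch simulates noisy discrete-time observations of a diffusion with piecewise constant volatility; the noise level, change point and grid size are made-up values.

```python
# Toy simulation of the observation model: Y_i = X_{t_i} + noise, where
# X solves dX_t = sigma(t) dW_t with piecewise constant volatility sigma.
import numpy as np

rng = np.random.default_rng(0)

n = 1000
t = np.linspace(0.0, 1.0, n + 1)
dt = np.diff(t)

# piecewise constant volatility: 1 on [0, 0.5), 2 on [0.5, 1]
sigma = np.where(t[:-1] < 0.5, 1.0, 2.0)

# Euler-Maruyama path of the diffusion
dX = sigma * np.sqrt(dt) * rng.standard_normal(n)
X = np.concatenate(([0.0], np.cumsum(dX)))

eta = 0.1                                  # microstructure noise level (assumed)
Y = X + eta * rng.standard_normal(n + 1)   # what the statistician observes
```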

The preprint is available at: https://arxiv.org/abs/1805.05606

The computer code is available at: https://github.com/mschauer/MicrostructureNoise.jl

 

Link to the presentation

Mortality analyses have commonly focused on countries represented in the Human Mortality Database, which have good-quality mortality data. We address the challenge that in many countries population and deaths data can be somewhat unreliable. In many countries, for example, there is significant misreporting of age in both census and deaths data, referred to as 'age heaping'. The purpose of our research is to develop Bayesian computational methods for fitting a new model of age misreporting for countries whose population data are affected by age heaping.
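For readers unfamiliar with the phenomenon, here is a minimal sketch of how age heaping distorts reported ages; the 30% heaping rate and rounding to multiples of five are illustrative assumptions, not the talk's model.

```python
# Simulate age heaping: reported ages are pulled towards multiples of five.
import numpy as np

rng = np.random.default_rng(1)

true_age = rng.integers(0, 90, size=10_000)
heap = rng.random(true_age.size) < 0.3          # 30% of records heap (assumed)
reported_age = np.where(heap, 5 * np.round(true_age / 5), true_age).astype(int)

# The tell-tale signature: spikes in the reported-age histogram at 0, 5, 10, ...
counts = np.bincount(reported_age)
print(counts[48:53])   # counts at ages 48..52 show the spike at 50
```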

 

Link to the presentation

Given a random risk that depends on a parameter, we consider the problem of computing collective and Bayesian premiums from a robust viewpoint. First, we consider a class of prior distributions that reflects prior uncertainty by means of distortion functions. Second, we study how the uncertainty propagates from this class of priors to the collective and Bayesian premiums for a wide family of premium principles. We illustrate the results with several examples based on different claim models.
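As a minimal sketch of a distortion-based premium principle, the premium is the integral of the distorted survival function, H_g(X) = ∫ g(P(X > x)) dx; the exponential claim model and the proportional-hazard distortion g(u) = u^(1/r) below are our illustrative choices, not necessarily the talk's examples.

```python
# Distortion premium for an Exp(lam) claim under g(u) = u**(1/r).
import numpy as np

lam = 0.5                        # exponential claims with mean 1/lam = 2
r = 1.5                          # distortion parameter (r = 1 gives the net premium)

x = np.linspace(0.0, 60.0, 60_001)
survival = np.exp(-lam * x)      # P(X > x)

dx = x[1] - x[0]
premium = np.sum(survival ** (1.0 / r)) * dx   # Riemann sum of g(S(x))
print(premium)                   # ≈ r/lam = 3 > 2: the distortion loads the premium
```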

Link to the presentation

bogota room - Life Insurance

Insurance firms are major institutional investors in senior bonds and gilts, whose flows are used to match the payment of due claims. The deliberate mismatch between the durations of assets and liabilities has been detrimental to life insurers, all the more so as many hold a sizeable portion of in-force contracts owing minimum rates to policyholders. Legacy transactions have saddled insurers with schemes wherein they are funding positions at a higher cost than the yields of their assets.

Insurers are considering how to adapt their supply of products so that they are perceived as a valuable investment by prospective clients, without weakening their equity or harming shareholders.

What has the industry learnt so far about the hidden dynamics between financial markets and its business? Are there any alternatives to the usual investment schemes that guarantee minimum returns without commandeering additional regulatory capital?

In this survey we discuss insights derived from recent studies and propose paths for future research on the subject.

Link to the presentation

This paper studies the phenomenon of age misrepresentation, a specific risk for life insurers in Sub-Saharan African countries. More precisely, we formalize age misrepresentation as a random variable. Using the expected value of that random variable, we propose indicators evaluating the impact of age misrepresentation on the net premium and on the mathematical reserves of a life insurance policy. Using the life table of the CIMA area, we apply the obtained indicators to three types of policies, namely a term life insurance, a decreasing term life insurance and an education annuity. The simulation results indicate that age misrepresentation can generate underpricing and over-reserving of life insurance contracts.
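The underpricing mechanism can be seen in a small sketch: the net single premium of an n-year term insurance, A = Σ v^(k+1) · k|q_x, computed at an understated age comes out too low. The toy mortality law and the 4% technical rate below stand in for the CIMA life table and the paper's actual assumptions.

```python
# Effect of age understatement on the net single premium of term insurance.
v = 1.0 / 1.04                         # discount factor (4% technical rate, assumed)
n = 10                                 # term of the policy

def qx(age):
    """Toy exponentially increasing one-year death probability (not a CIMA table)."""
    return min(1.0, 0.0005 * 1.09 ** age)

def term_premium(x, n):
    A, surv = 0.0, 1.0
    for k in range(n):
        A += v ** (k + 1) * surv * qx(x + k)   # v^{k+1} * k|q_x
        surv *= 1.0 - qx(x + k)
    return A

true_age, reported_age = 45, 40        # understating the age by five years
print(term_premium(reported_age, n))   # premium charged (too low)
print(term_premium(true_age, n))       # premium that should have been charged
```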

Link to the presentation

Countries with common features in terms of social, economic and health systems generally have mortality trends which evolve in a similar manner. Drawing on this, many multi-population models are built on a coherence assumption which prevents the mortality rates of two or more populations from diverging in the long run. However, this assumption may prove too strong in a general context, especially when it is imposed on a large collection of countries. We also note that the coherence hypothesis significantly reduces the spectrum of achievable mortality dispersion forecasts for a collection of populations compared with historical observations. This may distort the longevity risk assessment of an insurer. In this paper, we propose a new model to forecast multiple populations, assuming that the long-run coherence principle holds within subgroups of countries, a property we call "local coherence". Thus, our specification is built on a trade-off between Lee-Carter's diversification and Li-Lee's concentration features and allows us to fit the model to a large number of populations simultaneously. A penalized vector autoregressive (VAR) model, based on the elastic-net regularization, is considered for modeling the dynamics of the common trends between subgroups. Furthermore, we apply our methodology to mortality data from 32 European populations and discuss the behavior of our model in terms of simulated mortality dispersion. Within the Solvency II directive, we quantify the impact on the longevity risk solvency capital requirement of an insurer for a simplified pension product. Finally, we extend our model by allowing populations to switch from one coherence group to another, and analyze its impact on the basis risk assessment of longevity hedges.
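A minimal sketch of the penalized VAR ingredient, assuming scikit-learn's ElasticNet and toy random-walk trends in place of the estimated period effects; all sizes and penalty weights are illustrative.

```python
# Elastic-net estimation of a VAR(1) on common-trend increments, one
# regression per equation; weak cross-population links shrink to zero.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(2)
T, p = 60, 8                       # 60 years of trends for 8 subgroups (toy)
K = rng.standard_normal((T, p)).cumsum(axis=0)   # random-walk-like trends

dK = np.diff(K, axis=0)            # trend increments
X, Y = dK[:-1], dK[1:]             # lag-1 design: dK_t ~ A @ dK_{t-1}

A = np.vstack([
    ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, Y[:, j]).coef_
    for j in range(p)
])                                 # sparse estimate of the VAR(1) matrix
print((np.abs(A) > 1e-8).sum(), "non-zero coefficients out of", p * p)
```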

 

Link to the paper

Link to the presentation

montreal room - Non life insurance

The purpose of this paper is to study, and indeed suggest, new methods of estimating the ultimate cost of claims at the level of individual insurance contracts. Estimating the claims cost directly at ultimate makes it possible to treat reserving no longer as the modeling of intermediate claims-state processes, but as a regression problem between explanatory variables of the loss experience and the ultimate cost of claims. Good accuracy in this kind of model can only be achieved with a significant number of observations, which is one of the reasons why we focus in this paper on the reserving of short-term, quickly settled guarantees, for which there is a substantial amount of data on closed claims, unlike long-term guarantees such as liability insurance. We start by applying to our dataset a non-parametric reserving method, drawn from a research article, with a well-defined scientific framework; however, its precision was not satisfactory. Inspired by this method, we put into practice a new reserving method whose results in our applications are encouraging. In the modeling process we used machine learning algorithms from the family of ensemble methods, such as random forests, in which we chose the split points at random when growing the trees in order to improve the generalization power of our models. The use of boosting methods allowed us to further improve the accuracy of our models.
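A minimal sketch of this regression view of reserving, using scikit-learn's extremely randomized trees (random split points) and gradient boosting on synthetic claim-level data; the features, target and hyper-parameters are placeholders, not the paper's.

```python
# Predict the ultimate claim cost directly from claim-level covariates.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor, GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 5_000
X = rng.standard_normal((n, 6))            # claim-level covariates (toy)
ultimate = np.exp(1.0 + X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(n))

X_tr, X_te, y_tr, y_te = train_test_split(X, ultimate, random_state=0)

for model in (ExtraTreesRegressor(n_estimators=300, random_state=0),
              GradientBoostingRegressor(random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, model.score(X_te, y_te))   # out-of-sample R^2
```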

 

Link to the presentation

This paper considers the valuation of energy quanto options when the underlying price processes are governed by Markov-modulated additive processes, which have independent but non-stationary increments within each regime. The pricing formula is obtained using the Fast Fourier Transform (FFT) technique, under the assumption that the joint characteristic function of the Markov-modulated additive processes is known analytically. As an application of our pricing formulas, we consider a quanto option written on temperature and electricity futures prices. Several numerical examples illustrate the usefulness of our model for pricing energy quanto options.
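A one-dimensional sketch of the FFT idea: the Carr-Madan method under Black-Scholes dynamics, standing in for the paper's two-dimensional Markov-modulated additive model; all parameters are illustrative.

```python
# Recover call prices by FFT from an analytically known characteristic function.
import numpy as np

S0, r, T, vol, alpha = 100.0, 0.02, 1.0, 0.25, 1.5

def phi(u):
    """Characteristic function of log S_T under Black-Scholes."""
    mu = np.log(S0) + (r - 0.5 * vol ** 2) * T
    return np.exp(1j * u * mu - 0.5 * vol ** 2 * u ** 2 * T)

N, eta = 2 ** 12, 0.25          # grid in the Fourier variable
v = eta * np.arange(N)
lam = 2 * np.pi / (N * eta)     # log-strike spacing of the FFT output
b = 0.5 * N * lam
k = -b + lam * np.arange(N)     # log strikes

psi = np.exp(-r * T) * phi(v - (alpha + 1) * 1j) \
      / (alpha ** 2 + alpha - v ** 2 + 1j * (2 * alpha + 1) * v)

w = eta * np.ones(N)            # trapezoid weights
w[0] *= 0.5
calls = np.exp(-alpha * k) / np.pi * np.real(np.fft.fft(psi * np.exp(1j * b * v) * w))

i = np.argmin(np.abs(k - np.log(100.0)))
print(np.exp(k[i]), calls[i])   # at-the-money call, about 10.9 for these parameters
```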

 

Link to the paper

Link to the presentation

Policyholders' ability to easily and promptly change their insurance cover, in terms of contract conditions and provider, has substantially increased during the last decades due to high levels of market competition and favourable regulation. Consequently, policyholder behaviour modelling has acquired increasing attention, since the ability to predict customer reactions to future market fluctuations and company decisions has achieved a pivotal role within most mature insurance markets. Integrating existing modelling platforms with policyholder behavioural predictions allows companies to create synthetic responding environments where several market projections and company strategies can be simulated and, through sets of defined objective functions, compared. In this way, companies are able to identify optimal strategies by means of a multi-objective optimization problem whose ultimate goal is to approximate the entire set of optimal solutions defining the so-called Pareto efficient frontier. This paper aims to demonstrate how meta-heuristic search algorithms can be readily implemented to tackle actuarial optimization problems such as the renewal of non-life policies. An evolutionary search algorithm is proposed and compared to a uniform Monte Carlo search. Several numerical experiments show that the proposed evolutionary algorithm substantially and consistently outperforms the Monte Carlo search, providing faster convergence and better frontier approximations.
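A toy version of the comparison on a textbook bi-objective problem, f1(x) = x² and f2(x) = (x − 2)², whose Pareto-optimal set is [0, 2]; the objectives and the crude evolutionary loop are our stand-ins for the renewal-pricing problem and the paper's algorithm.

```python
# Evolutionary search vs. uniform Monte Carlo on a toy Pareto-frontier problem.
import numpy as np

rng = np.random.default_rng(4)

def objectives(x):
    return np.column_stack((x ** 2, (x - 2.0) ** 2))

def pareto_mask(F):
    """True where no other point dominates (<= in both objectives, < in one)."""
    mask = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dominated = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        mask[i] = not dominated.any()
    return mask

# uniform Monte Carlo search: one big random sample of candidate solutions
mc = rng.uniform(-5.0, 7.0, size=600)

# evolutionary loop with a comparable budget: mutate, keep the non-dominated
pop = rng.uniform(-5.0, 7.0, size=30)
for _ in range(19):
    children = pop + 0.3 * rng.standard_normal(pop.size)
    pool = np.concatenate((pop, children))
    front = pool[pareto_mask(objectives(pool))]
    pop = rng.choice(front, size=30)      # resample parents from the front

for name, x in (("Monte Carlo", mc), ("evolutionary", pop)):
    print(name, "fraction of candidates on the true front:",
          np.mean((x >= 0.0) & (x <= 2.0)))
```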

 

Link to the paper

Link to the presentation

sydney room - Green Finance and Bitcoin

We consider a price-maker company which generates electricity and sells it in the spot market. The company can increase its level of installed power by irreversible installations of solar panels. In the absence of the company's economic activities, the spot electricity price evolves as an Ornstein-Uhlenbeck process, and therefore has a mean-reverting behavior. The current level of the company's installed power has a permanent impact on the electricity price and affects its mean-reversion level. The company aims at maximizing the total expected profits from selling electricity in the market, net of the total expected proportional costs of installation. This problem is modeled as a two-dimensional degenerate singular stochastic control problem in which the installation strategy is identified as the company's control variable. We follow a guess-and-verify approach to solve the problem. We find that the optimal installation strategy is triggered by a curve which separates the waiting region, where it is not optimal to install additional panels, from the installation region, where it is. Such a curve depends on the current level of the company's installed power, and is the unique strictly increasing function which solves a first-order ordinary differential equation. Finally, our study is complemented by a numerical analysis of the dependence of the optimal installation strategy on the model's parameters.
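One plausible formalization of the model just described, in our own notation (the paper's exact specification may differ; in particular the sign of the price impact is our assumption):

```latex
% X = spot price, I = installed power, nu = cumulative irreversible
% installation, i.e. the singular control. We take beta >= 0, so that
% installed capacity permanently lowers the mean-reversion level.
\begin{align*}
  dX_t &= \kappa\bigl(\theta - \beta I_t - X_t\bigr)\,dt + \sigma\,dW_t,\\
  dI_t &= d\nu_t, \qquad \nu \ \text{nondecreasing},\\
  \sup_{\nu}\ & \mathbb{E}\!\left[\int_0^\infty e^{-\rho t} X_t I_t\,dt
     - c \int_0^\infty e^{-\rho t}\,d\nu_t\right].
\end{align*}
```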

Link to the paper

Link to the presentation

In this work we develop an original trading strategy for Bitcoin. The methodology we propose is profit-oriented and is based on buying or selling so-called Contracts for Difference (CfDs), so that the investor's gain, assessed at a given future time t, is obtained as the difference between the predicted Bitcoin price and a suitable threshold. Starting from some empirical findings, and passing through the specification of a suitable theoretical model for the Bitcoin price process, we are able to provide possible investment scenarios, thanks to a Recurrent Neural Network with Long Short-Term Memory (LSTM) units used for prediction.
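A minimal sketch of the prediction step, assuming a Keras LSTM on a placeholder price series; the architecture, window length and threshold rule are illustrative assumptions, not the paper's specification.

```python
# LSTM mapping a window of past prices to the next price; the trading
# signal is the predicted price minus a threshold.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(5)
prices = 100.0 + np.cumsum(rng.standard_normal(2_000))   # placeholder series

window = 30
X = np.stack([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]
X = X[..., None]                 # shape (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

pred = float(model.predict(X[-1:], verbose=0)[0, 0])
threshold = prices[-1]           # toy threshold: the last observed price
print("long the CfD" if pred > threshold else "short the CfD")
```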

Link to the presentation

The probability of successfully spending the same bitcoins twice is considered. A double-spending attack consists in issuing two transactions transferring the same bitcoins. The first transaction, from the fraudster to a merchant, is included in a block of the public chain. The second transaction, from the fraudster to himself, is recorded in a block of a private chain, an exact copy of the public chain except that the fraudster-to-merchant transaction is replaced by the fraudster-to-fraudster transaction. The double-spending attack is complete once the private chain reaches the length of the public chain, at which point it replaces it. The growth of the two chains is modeled by two independent counting processes. The probability distribution of the time at which the malicious chain catches up with the honest chain, or equivalently the time at which the two counting processes meet, is studied. The merchant is assumed to await the discovery of a given number of blocks after the one containing the transaction before delivering the goods. This grants a head start to the honest chain in its race against the dishonest chain.
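A minimal Monte Carlo sketch of this race, taking both counting processes to be Poisson (a special case of the processes considered in the talk), so that each new block is the attacker's with probability q; the merchant's z-confirmation rule gives the honest chain a head start of z blocks.

```python
# Estimate P(the private chain ever reaches the public chain's length).
import numpy as np

rng = np.random.default_rng(6)

def catch_up_probability(q, z, max_blocks=1_000, n_sim=10_000):
    """Monte Carlo over the attacker's block deficit, started at z.

    The finite horizon slightly underestimates the true probability.
    """
    steps = np.where(rng.random((n_sim, max_blocks)) < q, -1, 1)
    deficit = z + np.cumsum(steps, axis=1)     # attacker's block deficit over time
    return float(np.mean((deficit == 0).any(axis=1)))

# gambler's-ruin benchmark for q < 1/2: (q / (1 - q)) ** z, about 0.0062 here
print(catch_up_probability(q=0.3, z=6))
```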

Link to the abstract

Link to the presentation