Parallel Session 2
Tuesday, April 28th
11:45 - 1:00
LYON ROOM - Pensions and Long-Term Care
Birth rates have decreased dramatically and, with continuous improvements in life expectancy, pension expenditure is on an irreversibly increasing path. This raises serious concerns about the sustainability of public pension systems, which are usually financed on a pay-as-you-go (PAYG) basis, where current contributions cover current pension expenditure. With this in mind, the aim of this paper is to propose a mixed pension system that combines a classical PAYG scheme with an increase in the contribution rate invested in a funded scheme. The investment of the funded part is designed so that the PAYG pension system is financially sustainable at a particular probability level while, at the same time, providing some gains to individuals. In this sense, individuals become an active part in facing the demographic risks inherent in the PAYG scheme and in re-establishing its financial sustainability.
Population ageing is a global trend, and many countries, including China, face increasing pressure to provide long-term care services for the elderly. We explore new mechanisms to fund long-term care using housing wealth. We conduct and analyze an experimental online survey fielded in China that assesses the potential demand for new financial products that allow individuals to access their housing wealth to buy long-term care insurance. We find, in our sample of 1,200 Chinese homeowners aged 45-64, that the stated demand for long-term care insurance increases when individuals can use housing wealth in addition to savings to purchase long-term care insurance. Individuals prefer to access housing wealth via reverse mortgage loans rather than via home reversion, which is a partial sale of housing wealth. Our results inform current policy reforms in China which aim to develop the private market for health and long-term care insurance products.
BOGOTA ROOM - Data Science
New solvency regimes, new accounting standards, and new ERM/ORSA frameworks pose significant challenges to actuaries. In parallel, IT and data science/AI are evolving at a tremendous pace. Is it easier today for an actuary to pick up machine learning and new IT techniques, or for data scientists and hackathon specialists to learn insurance? And how can actuaries be protected from the new legal responsibilities to come, and the quality of their work be warranted? This presentation aims to propose some answers and trends.
A simple formula for non-discriminatory insurance pricing is introduced. This formula is based on the assumption that certain individual (discriminatory) policyholder information is not allowed to be used for insurance pricing. The suggested procedure can be summarized as follows: First, we construct a price that is based on all available information, including discriminatory information. Thereafter, we average out the effect of the discriminatory information. This averaging out is done in such a way that the discriminatory information cannot be inferred from the remaining non-discriminatory information either, thus allowing neither for direct nor for indirect discrimination.
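The averaging-out step can be sketched numerically. In this toy illustration (all names and numbers are assumptions, not taken from the paper), mu(x, d) stands for a fitted best-estimate price using all covariates, d is the discriminatory feature, and the portfolio weights P(D = d) are assumed; averaging mu over the marginal, rather than conditional, distribution of d is what prevents indirect discrimination.

```python
import numpy as np

# Toy illustration (all names and numbers are assumptions, not from the paper).
# mu(x, d): a fitted "best-estimate" price using ALL covariates,
# including the discriminatory feature d.
def mu(x, d):
    return 100.0 + 20.0 * x + 15.0 * d  # hypothetical fitted model

# Marginal (portfolio-level) distribution of the discriminatory feature d.
d_values = np.array([0.0, 1.0])
d_probs = np.array([0.6, 0.4])  # P(D = d), estimated from the whole portfolio

def discrimination_free_price(x):
    # Average mu over the MARGINAL distribution of d rather than the
    # conditional distribution of d given x: this stops the remaining
    # covariates x from acting as a proxy for d (indirect discrimination).
    return sum(mu(x, d) * p for d, p in zip(d_values, d_probs))

print(discrimination_free_price(1.0))  # 0.6 * 120.0 + 0.4 * 135.0 = 126.0
```

The resulting price no longer depends on d, and because the weights do not vary with x, d cannot be backed out of the non-discriminatory price either.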
SYDNEY ROOM - Life Insurance
Many empirical studies confirm that individuals' subjective mortality beliefs systematically deviate from the information given by publicly available mortality tables.
In this study, we look at the effect of subjective mortality beliefs on the perceived attractiveness of retirement products, focusing on conventional annuities (the insurance provider takes the longevity risk) and tontines (a pool of policyholders shares the longevity risk). In an actuarially fair world without subjective beliefs, policyholders always prefer a secure annuity payoff to a tontine (Yaari, 1965). We show that subjective mortality beliefs can easily reverse this result, that is, tontine products are perceived as more attractive than annuities.
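Both the baseline preference and its reversal can be illustrated with a small simulation. The following is a stylized one-period sketch under assumed numbers (pool size, survival probabilities, CRRA risk aversion), not the paper's model: conditional on own survival, an actuarially fair annuity pays 1/p per unit premium, while a tontine splits the pooled premiums among the survivors.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 0.8          # pool size and objective survival probability (assumed)
gamma = 3.0              # CRRA risk aversion (assumed)

def u(c):
    return c ** (1.0 - gamma) / (1.0 - gamma)  # CRRA utility

# Actuarially fair annuity: a unit premium pays 1/p, conditional on survival.
eu_annuity = u(1.0 / p)

def tontine_eu(p_others, sims=200_000):
    # Conditional on own survival, the pool of n unit premiums is split
    # among 1 + Binomial(n - 1, p_others) survivors.
    survivors = 1 + rng.binomial(n - 1, p_others, size=sims)
    return u(n / survivors).mean()

# Objective beliefs: the riskier tontine payoff is worth less (Yaari-type result).
eu_tontine_objective = tontine_eu(p_others=p)

# Subjective beliefs: the individual thinks the OTHERS are less likely to
# survive than the pricing table assumes, so expects a larger tontine share.
eu_tontine_subjective = tontine_eu(p_others=0.6)

print(eu_annuity > eu_tontine_objective)    # annuity preferred
print(eu_tontine_subjective > eu_annuity)   # preference reversed
```

Under objective beliefs the tontine has the same (approximately fair) mean payoff but positive variance, so a risk-averse policyholder prefers the annuity; a pessimistic belief about the other pool members' survival raises the subjectively expected tontine share enough to flip the ranking.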
Legislation requires insurance companies to project their assets and liabilities. Within the setup of with-profit life insurance, we consider retrospective reserves and bonus, and we study the projection of balances with and without policyholder behavior. The projection resides in a system of ordinary differential equations for the savings account and the surplus, and we include the policyholder behavior options surrender and conversion to free-policy. Their inclusion results in a structure where deriving such a system of ordinary differential equations is non-trivial. We consider a case where we are able to find accurate ordinary differential equations, and we suggest an approximation method to project the savings account and the surplus including policyholder behavior in general.
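As a purely illustrative sketch (not the paper's equations), a projection of this kind can be organized as a forward Euler integration of a coupled system. The dynamics, interest rates, and cash-flow rates below are assumed placeholders, with the surplus earning the spread between first- and second-order interest on the savings account.

```python
def project(T=30.0, dt=1.0 / 12, X0=0.0, Y0=0.0):
    # Hypothetical stylized dynamics (placeholders, not the paper's model):
    #   X'(t) = r2 * X(t) + c - b            savings account at guaranteed rate r2
    #   Y'(t) = r1 * Y(t) + (r1 - r2) * X(t) surplus earns the interest spread
    r1, r2 = 0.04, 0.03   # first- and second-order interest rates (assumed)
    c, b = 1.0, 0.5       # contribution and benefit rates (assumed constants)
    X, Y, t = X0, Y0, 0.0
    while t < T - 1e-9:
        dX = r2 * X + c - b
        dY = r1 * Y + (r1 - r2) * X
        X += dX * dt
        Y += dY * dt
        t += dt
    return X, Y

X_T, Y_T = project()
```

Surrender and free-policy conversion would enter as additional intensity-weighted decrement terms in both equations, which is precisely where the coupling of the system becomes non-trivial.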
This article proposes an optimal and robust methodology for model selection. The model of interest is a parsimonious alternative framework for modeling the stochastic dynamics of mortality improvement rates introduced by Doukhan et al. (2017). The approach models mortality improvements using a random field specification with a given causal structure instead of the commonly used factor-based decomposition framework. It captures some well-documented stylized facts of mortality behavior: dependencies among adjacent cohorts, cohort effects, cross-generation correlations, and the conditional heteroskedasticity of mortality. Such a class of models is a generalization of the now widely used AR-ARCH models for univariate processes. The framework being general, Doukhan et al. (2017) investigate and illustrate a simple variant, called the three-level memory model. However, it is not clear which parametrization is best for specific mortality applications. In this paper, we investigate the optimal model choice and parameter selection among candidate models. More formally, we propose a methodology, well-suited to such random fields, that selects the best model in the sense that the model is not only correct but also the most economical among all correct models. Formally, we show that a criterion based on a penalization of the log-likelihood, e.g. using the Bayesian Information Criterion, is consistent. Finally, we investigate the methodology through Monte Carlo experiments as well as real-world datasets.
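The penalized log-likelihood criterion itself is standard and easy to state. The sketch below uses hypothetical fitted log-likelihoods and parameter counts (all numbers made up for illustration) to show how the ln(n) penalty favors the more economical of two candidate models.

```python
import numpy as np

def bic(log_likelihood, k, n):
    # Bayesian Information Criterion: k * ln(n) - 2 * log-likelihood.
    # Lower is better; the ln(n) penalty grows with the sample size,
    # which is what makes the criterion consistent -- it eventually
    # rejects models carrying superfluous parameters.
    return k * np.log(n) - 2.0 * log_likelihood

# Hypothetical candidates: model B fits slightly better but uses twice
# as many parameters (numbers are illustrative, not from the paper).
n = 200
bic_a = bic(log_likelihood=-120.0, k=3, n=n)
bic_b = bic(log_likelihood=-118.0, k=6, n=n)
print(bic_a < bic_b)  # model A wins: the fit gain does not justify 3 extra parameters
```

Here the 2-unit log-likelihood improvement of model B is outweighed by its 3 * ln(200) ≈ 15.9 extra penalty, so the more economical model A is selected.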