Round I

Interactive session 1

Why we focus on data quality: The steps and calculations of the transition process.
By Lieke Werner (Achmea)

Why is data quality so important at the moment of truth, the transition? In the transition process, we use data to convert benefits into personal accounts. What critical data do we use? Which calculations are required, and in what order? In this session, we discuss the steps needed in this actuarial conversion for every individual member and how we can make it repeatable and reproducible. We will also highlight some of the exceptions and challenges we face in this process.
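To make the conversion step concrete, below is a minimal sketch in Python of one hypothetical per-member calculation: an accrued annual benefit is converted into an opening personal account balance via an annuity factor. The mortality, discount and conversion assumptions are purely illustrative; they are not the rules of any actual fund or of this session.

    # Hypothetical sketch of the per-member conversion step: an accrued
    # annual benefit becomes an opening personal account balance through
    # an annuity factor. All assumptions here are illustrative only.

    def annuity_factor(age: int, retirement_age: int, max_age: int,
                       survival: dict[int, float], rate: float) -> float:
        """Present value at `age` of a life annuity of 1 from retirement_age.

        `survival[x]` is the one-year survival probability at age x
        (illustrative numbers, not a real mortality table).
        """
        factor = 0.0
        p_cum = 1.0  # cumulative survival probability from `age`
        for x in range(age, max_age):
            if x >= retirement_age:
                factor += p_cum / (1 + rate) ** (x - age)
            p_cum *= survival.get(x, 0.0)
        return factor

    def personal_account(accrued_benefit: float, age: int, retirement_age: int,
                         survival: dict[int, float], rate: float) -> float:
        """Opening balance = accrued annual benefit x annuity factor."""
        af = annuity_factor(age, retirement_age, 120, survival, rate)
        return accrued_benefit * af

    # Toy example: flat 99% one-year survival, 2% discount rate.
    survival = {x: 0.99 for x in range(0, 120)}
    balance = personal_account(accrued_benefit=10_000.0, age=50,
                               retirement_age=68, survival=survival, rate=0.02)
    print(f"Opening personal account: {balance:,.2f}")

Keeping the calculation a pure function of the member record and one fixed assumption set is what makes the conversion repeatable and reproducible, as the abstract emphasises.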

Interactive session 2

To check or not to check: The challenges in the scoping of a data quality project.
By Inge Lodder (ELAVV)

With regard to today’s topic, Inge has first-hand experience in guiding data quality programs, engaging with stakeholders to ensure thorough analysis of findings and timely delivery of extensive investigation results. Her session considers the adequate scoping of data quality projects: as pension funds and (life) insurers maintain extensive data warehouses, she aims to facilitate an open discussion on where to perform deep dives and when to exclude specific topics from analysis. Inge is looking forward to hosting her session and hopes to see you all at the congress!

Interactive session 3

Case: How modelling helps you achieve a 100% score on data quality.
By Linda de Koter (Axini)

Examining the Kader Datakwaliteit presented by the Pensioen Federatie, we observe that pension funds are tasked with demonstrating the accuracy of entitlements and other pension-related information qualitatively. But what if we could elevate this assessment to a quantitative level? Imagine recalculating the entire portfolio to verify the precision of every data entry in the database. In this session, we focus on the synergy between software engineering methods and our actuarial toolkit, illustrating our findings with examples from our analyses of several pension funds. We also examine the added value of this quantitative method over a qualitative examination.
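As a flavour of what such a quantitative check could look like, here is a minimal sketch of the recompute-and-compare idea, with a hypothetical accrual formula and field names: derive each entitlement from first principles and flag every record where the administered value deviates beyond a tolerance.

    # Hypothetical recompute-and-compare sketch: recalculate each member's
    # entitlement from source fields and flag deviations from the value
    # stored in the administration. Formula and fields are illustrative.

    from dataclasses import dataclass

    @dataclass
    class MemberRecord:
        member_id: str
        pensionable_salary: float
        offset: float              # franchise
        accrual_rate: float        # e.g. 0.0175 per service year
        service_years: float
        stored_entitlement: float  # value currently in the administration

    def recalculated_entitlement(m: MemberRecord) -> float:
        """Recompute: accrual_rate x service years x (salary - offset)."""
        base = max(m.pensionable_salary - m.offset, 0.0)
        return m.accrual_rate * m.service_years * base

    def check_portfolio(records: list[MemberRecord], tolerance: float = 0.01):
        """Yield (member_id, stored, recalculated) for every mismatch."""
        for m in records:
            expected = recalculated_entitlement(m)
            if abs(expected - m.stored_entitlement) > tolerance:
                yield m.member_id, m.stored_entitlement, expected

    records = [
        MemberRecord("A001", 52_000, 16_000, 0.0175, 10, 6_300.00),
        MemberRecord("A002", 48_000, 16_000, 0.0175, 8, 4_470.00),  # deviates
    ]
    for member_id, stored, expected in check_portfolio(records):
        print(f"{member_id}: stored {stored:.2f}, recalculated {expected:.2f}")

Run over the full portfolio, the share of records without a mismatch is exactly the quantitative score the session title alludes to.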

Interactive session 4

Copilot: Data Quality engine developed by Triple A.
By Daan Nijssen & Hen Veerman (Triple A)

Concerning the data requirements of the upcoming pension transition, Triple A Risk Finance has developed a Data Quality engine that can load, measure and report all the critical pension data elements essential for a smooth and governed transition. In this interactive workshop, we would like to show which data quality issues we encounter in pension practice, how we tackle them, and how the Data Quality engine supports this as a copilot.
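By way of illustration only (this is not Triple A's engine), a load-measure-report loop of the kind described might look as follows; the rules and field names are assumptions made for the sketch.

    # Minimal load-measure-report sketch in the spirit described above.
    # NOT the actual engine; rules and fields are illustrative.

    from datetime import date

    # "Load": critical data elements per member, here as plain dicts.
    members = [
        {"id": "M1", "birth_date": date(1960, 5, 1), "salary": 45_000, "parttime_factor": 1.0},
        {"id": "M2", "birth_date": date(2030, 1, 1), "salary": 38_000, "parttime_factor": 0.8},
        {"id": "M3", "birth_date": date(1975, 9, 9), "salary": -100,  "parttime_factor": 1.2},
    ]

    # "Measure": each rule returns True when the record passes.
    rules = {
        "birth_date_in_past": lambda m: m["birth_date"] < date.today(),
        "salary_non_negative": lambda m: m["salary"] >= 0,
        "parttime_factor_in_range": lambda m: 0.0 < m["parttime_factor"] <= 1.0,
    }

    # "Report": pass rate per rule plus the offending member ids.
    for name, rule in rules.items():
        failures = [m["id"] for m in members if not rule(m)]
        rate = 100 * (len(members) - len(failures)) / len(members)
        print(f"{name}: {rate:.0f}% pass, failures: {failures or 'none'}")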

Student session

Unpacking the WTP act and resulting opportunities for insurers.
By Goran Lapchev & Sarah Fox (a.s.r.)

Within financial reporting, outliers in the data are often due to data quality problems. However, some outliers are real and highly informative data points. With the increasing granularity of data, rule-based controls alone are not sufficient to verify the data. We therefore deploy an outlier detection approach based on machine learning techniques to detect errors in the data. In this break-out session, you will learn all about this technical model and its added value for actuaries.
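As an illustration of the approach (the session's actual model is not specified here), the sketch below uses scikit-learn's IsolationForest, one common machine learning technique for outlier detection, on synthetic premium/claims data. The injected outliers break the usual relation between the two columns, which a rule on either column alone would miss.

    # Illustrative multivariate outlier detection with IsolationForest.
    # One common technique, not necessarily the session's model; data is
    # synthetic.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Synthetic reporting data: (premium, claims) pairs, mostly proportional.
    premium = rng.normal(1_000, 100, size=500)
    claims = 0.7 * premium + rng.normal(0, 20, size=500)
    X = np.column_stack([premium, claims])

    # Inject records that break the premium/claims relation: each value is
    # plausible on its own, so a single-column rule would not flag it.
    X = np.vstack([X, [[1_000, 2_000], [950, 50]]])

    model = IsolationForest(contamination=0.01, random_state=0)
    labels = model.fit_predict(X)  # -1 flags an outlier

    outliers = np.flatnonzero(labels == -1)
    print(f"Flagged {len(outliers)} of {len(X)} records: rows {outliers}")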
