Lorenzo Contento, PhD: No financial relationships to disclose
Description of session (include background & scientific importance): Covariate modelling aims to explain part of the inter-subject variability in model parameters using covariates. It improves simulation performance by creating more faithful synthetic populations that reflect the required covariate mix, which in turn can lead to better experimental designs. By lowering the amount of unexplained variability, it also makes predictions based on early data more precise, reducing the number of observations that must be gathered for each subject. The distribution of the model parameters given the covariates is still commonly assumed to be Gaussian. A Gaussian distribution has a single mode and allows only linear correlations, which can be limiting when the model parameters cannot be precisely recovered from the covariates, e.g., when they depend on hidden quantities that cannot be measured. In such cases, non-linear correlations and multiple modes linked to different subpopulations (e.g., responders vs. non-responders) may occur, and these are difficult to capture with standard distributions. To overcome this limitation, we propose to learn the distribution directly from the data using a conditional normalizing flow. Normalizing flows exploit the expressivity of neural networks to learn any distribution one can sample from, and they are more efficient than traditional non-parametric approaches, which require keeping a copy of the original data and/or are more strongly affected by the curse of dimensionality. Using synthetic data, we compare the performance of our approach with that of other methods, in particular with respect to simulation performance (e.g., VPCs), experimental design, and predictions based on early data.
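The core idea described above (learning the distribution of a model parameter given the covariates by pushing Gaussian base noise through an invertible, covariate-conditioned transform, fit by maximum likelihood via the change-of-variables formula) can be sketched in a deliberately minimal form. The snippet below uses a single conditional affine layer, which is only conditionally Gaussian; practical flows stack many coupling layers with neural-network conditioners to capture the non-linear correlations and multimodality mentioned in the abstract. All data, parameter names, and hyperparameters here are illustrative assumptions, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data: covariate c explains theta up to residual
# noise, theta = 2*c + 0.5*eps (values chosen arbitrarily for the sketch).
n = 2000
c = rng.uniform(-1.0, 1.0, n)
theta = 2.0 * c + 0.5 * rng.standard_normal(n)

# One-layer conditional affine flow: theta = t(c) + exp(s(c)) * z with
# z ~ N(0, 1), shift t(c) = t0 + t1*c and log-scale s(c) = s0 + s1*c.
# Negative log-likelihood from the change of variables:
#   nll = 0.5 * z**2 + s(c) + const,  z = (theta - t(c)) * exp(-s(c)).
t0 = t1 = s0 = s1 = 0.0
lr = 0.1
for _ in range(2000):
    s = s0 + s1 * c
    z = (theta - (t0 + t1 * c)) * np.exp(-s)  # inverse transform to base space
    g_t = -z * np.exp(-s)                     # d(nll)/d t(c), per sample
    g_s = 1.0 - z ** 2                        # d(nll)/d s(c), per sample
    t0 -= lr * g_t.mean()
    t1 -= lr * (g_t * c).mean()
    s0 -= lr * g_s.mean()
    s1 -= lr * (g_s * c).mean()

# Simulating new "subjects" with given covariates: push base noise forward.
c_new = np.array([-0.5, 0.0, 0.5])
theta_new = (t0 + t1 * c_new) + np.exp(s0 + s1 * c_new) * rng.standard_normal(3)

print(t0, t1, s0, s1)  # fitted shift/log-scale conditioner coefficients
```

After fitting, the shift conditioner should recover the true covariate effect (slope near 2, intercept near 0) and the log-scale should approach log 0.5, the residual noise level; sampling via the forward transform is exactly how such a flow would generate a synthetic population with a prescribed covariate mix.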
Learning Objectives:
1. Describe the unexplained variability of model parameters given the values of the covariates more precisely, taking into account non-linear correlations and multimodality.
2. Improve the performance of model simulations, experimental design workflows, and predictions based on early data.