Virtual Population: Hierarchical MCMC

The problem of virtual population generation has a natural hierarchical structure. As depicted in the following figure, virtual patients belong to a virtual population. In a probabilistic sense, the parameters of the virtual patients are samples drawn from population distributions.

Figure: hierarchical model structure

Hierarchical MCMC methods are a family of approximate Bayesian inference methods that explicitly encode a hierarchical structure like the one in the figure. When using hierarchical MCMC inference in Pumas-QSP, population-level parameters are inferred at the same time as patient-level parameters and other noise parameters. The population-level parameters are the ones that define the population distributions out of which virtual patients are sampled. Once the posterior distribution of the population-level parameters has been approximated accurately, Pumas-QSP can generate an arbitrary number of virtual patients at essentially no extra cost, since drawing new patients from the population distributions is cheap compared to the inference itself.
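
To make this structure concrete, below is a minimal, illustrative sketch of such a hierarchy written directly in Turing.jl. This is not how Pumas-QSP builds its model internally; the Gamma hyperpriors, the Beta population distribution and the scalar Gaussian observation model are placeholder assumptions.

# Illustrative hierarchical model in Turing.jl (placeholder distributions;
# Pumas-QSP constructs the actual hierarchical model internally)
@model function hierarchy_sketch(data)
    # Population-level parameters (hyperparameters)
    α ~ Gamma(2.0, 1.0)
    β ~ Gamma(2.0, 1.0)
    # Patient-level parameters, one per patient, drawn from the population distribution
    k ~ filldist(Beta(α, β), length(data))
    # Patient-level observation model
    for i in eachindex(data)
        data[i] ~ Normal(k[i], 0.1)
    end
end

# Standard Turing usage: sample the sketch with NUTS on some toy data
chain_sketch = sample(hierarchy_sketch(0.5 .+ 0.05 .* randn(5)), NUTS(), 100)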

The following tutorial contains all steps for generating a virtual population by hierarchical MCMC inference.

Julia environment

For this tutorial we will need the following packages:

Module                  Description
PumasQSP                Formulates the QSP workflow
ModelingToolkit         The symbolic modeling environment
DifferentialEquations   The numerical differential equation solvers
DataSets                Loads our experimental data from datasets on JuliaHub
DataFrames              Handling of tabular data
CSV                     Reads the patient data files loaded via DataSets
Turing                  Bayesian inference with probabilistic programming
Plots and StatsPlots    The plotting and visualization libraries used in this tutorial

First we prepare the environment by loading the packages we use in this example.

# Packages used for this example
using PumasQSP
using ModelingToolkit
using DifferentialEquations
using DataSets
using DataFrames
using CSV
using Turing
using Plots
using StatsPlots

Model setup

As our example we use the HIRES model, a classic stiff ODE benchmark with eight state variables. Their dynamics are described by eight differential equations, and default values are set for the states and parameters. We use the ModelingToolkit.jl package to specify the model in Julia.

# Model states and their default values
@variables begin
    t
    y1(t) = 1.0
    y2(t) = 0.0
    y3(t) = 0.0
    y4(t) = 0.0
    y5(t) = 0.0
    y6(t) = 0.0
    y7(t) = 0.0
    y8(t) = 0.0057
end

# Model parameters and their default values
@parameters begin
    k1 = 1.71
    k2 = 280.0
    k3 = 8.32
    k4 = 0.69
    k5 = 0.43
    k6 = 1.81
end

# Differential operator
D = Differential(t)

# Differential equations
eqs = [
    D(y1) ~ (-k1*y1 + k5*y2 + k3*y3 + 0.0007),
    D(y2) ~ (k1*y1 - 8.75*y2),
    D(y3) ~ (-10.03*y3 + k5*y4 + 0.035*y5),
    D(y4) ~ (k3*y2 + k1*y3 - 1.12*y4),
    D(y5) ~ (-1.745*y5 + k5*y6 + k5*y7),
    D(y6) ~ (-k2*y6*y8 + k4*y4 + k1*y5 - k5*y6 + k4*y7),
    D(y7) ~ (k2*y6*y8 - k6*y7),
    D(y8) ~ (-k2*y6*y8 + k6*y7)
]

# Model definition
@named model = ODESystem(eqs)

\[ \begin{align} \frac{\mathrm{d} \mathrm{y1}\left( t \right)}{\mathrm{d}t} =& 0.0007 - k1 \mathrm{y1}\left( t \right) + k3 \mathrm{y3}\left( t \right) + k5 \mathrm{y2}\left( t \right) \\ \frac{\mathrm{d} \mathrm{y2}\left( t \right)}{\mathrm{d}t} =& - 8.75 \mathrm{y2}\left( t \right) + k1 \mathrm{y1}\left( t \right) \\ \frac{\mathrm{d} \mathrm{y3}\left( t \right)}{\mathrm{d}t} =& 0.035 \mathrm{y5}\left( t \right) - 10.03 \mathrm{y3}\left( t \right) + k5 \mathrm{y4}\left( t \right) \\ \frac{\mathrm{d} \mathrm{y4}\left( t \right)}{\mathrm{d}t} =& - 1.12 \mathrm{y4}\left( t \right) + k1 \mathrm{y3}\left( t \right) + k3 \mathrm{y2}\left( t \right) \\ \frac{\mathrm{d} \mathrm{y5}\left( t \right)}{\mathrm{d}t} =& - 1.745 \mathrm{y5}\left( t \right) + k5 \mathrm{y7}\left( t \right) + k5 \mathrm{y6}\left( t \right) \\ \frac{\mathrm{d} \mathrm{y6}\left( t \right)}{\mathrm{d}t} =& k1 \mathrm{y5}\left( t \right) + k4 \mathrm{y7}\left( t \right) + k4 \mathrm{y4}\left( t \right) - k5 \mathrm{y6}\left( t \right) - k2 \mathrm{y8}\left( t \right) \mathrm{y6}\left( t \right) \\ \frac{\mathrm{d} \mathrm{y7}\left( t \right)}{\mathrm{d}t} =& - k6 \mathrm{y7}\left( t \right) + k2 \mathrm{y8}\left( t \right) \mathrm{y6}\left( t \right) \\ \frac{\mathrm{d} \mathrm{y8}\left( t \right)}{\mathrm{d}t} =& k6 \mathrm{y7}\left( t \right) - k2 \mathrm{y8}\left( t \right) \mathrm{y6}\left( t \right) \end{align} \]
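
Before fitting, it can be helpful to verify that the model simulates with its default values. The snippet below is an optional sanity check using standard ModelingToolkit/DifferentialEquations usage; the time span is illustrative.

# Optional sanity check: simulate the model with its default values
sys = structural_simplify(model)
prob0 = ODEProblem(sys, [], (0.0, 100.0))
sol0 = solve(prob0, Rodas4P(), reltol = 1e-9, abstol = 1e-9)
plot(sol0, legend = :outerright)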

We consider five patients in the same clinical condition. We first load each patient's time-series data from DataSets and then create a Trial object to describe each patient.

N_subjects = 5

# Model solve options
alg = Rodas4P()
reltol = 1e-9
abstol = 1e-9

# Initialize Vector of Trials
trials = []
# Iterate over patients to create and collect Trial objects
for i = 1 : N_subjects
    # Read data from a patient
    patient_dataset = dataset("pumasqsptutorials/patient_$i")
    data = open(IO, patient_dataset) do io
        CSV.read(io, DataFrame)
    end
    # Extract timespan from "timestamp" column in data
    tspan = (first(data[!,:timestamp]), last(data[!,:timestamp]))

    # Define a Trial object for a patient
    trial = Trial(
        data,
        model;
        tspan,
        noise_priors = Exponential(0.1),
        alg,
        reltol,
        abstol
    )

    # Collect a patient's Trial object in a Vector
    push!(trials, trial)
end

Analysis

Next we define the InverseProblem. For this we need a search space, which holds lower and upper boundaries for all parameters and/or initial conditions to be inferred.

In hierarchical MCMC, each parameter and/or initial condition to be inferred is assigned a prior distribution, bounded by the respective search space bounds. These prior distributions are adaptive: the parameters that define their shape are not fixed numbers but are themselves part of the inference problem. They are known as population-level parameters or hyperparameters.

Overall, in hierarchical Bayesian inference we approximate a joint posterior distribution over the patient-level QSP model parameters, the noise parameters of each patient, and the population-level parameters that define the shape of the distribution of the QSP model parameters.
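
Schematically, with $\phi$ the population-level parameters, $\theta_i$ the QSP parameters of patient $i$, $\sigma_i$ the corresponding noise parameters and $\mathcal{D}_i$ that patient's data, the joint posterior being approximated factorizes as

\[ p\left(\phi, \theta_{1:N}, \sigma_{1:N} \mid \mathcal{D}\right) \propto p(\phi) \prod_{i=1}^{N} p\left(\mathcal{D}_i \mid \theta_i, \sigma_i\right) \, p\left(\theta_i \mid \phi\right) \, p\left(\sigma_i\right) \]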

Pumas-QSP internally builds the hierarchical structure mentioned in the introduction by introducing two population-level parameters for each quantity in the search space. These two parameters are the $\alpha$ and $\beta$ of a TransformedBeta distribution, which is assumed to be the population distribution of the respective search-space quantity.

search_space = [
    y1 => (0.0, 1.3),
    k1 => (0.0, 1.9),
    k2 => (200.0, 250.0)
]

prob = InverseProblem(trials, model, search_space)
InverseProblem for model with 5 trials optimizing 3 parameters and states.

In this example, we want to infer the initial condition of y1 and the parameters k1 and k2 of the HIRES model. Pumas-QSP will therefore introduce six population-level parameters (an $\alpha$ and a $\beta$ for each of the three quantities), use them to parameterize the TransformedBeta population distributions, and sample virtual patient parameters from them.
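
As an illustration of this last step, a TransformedBeta distribution can be thought of as a Beta distribution rescaled to the search space bounds. The following sketch uses the affine distribution transformations of Distributions.jl; the $\alpha$ and $\beta$ values are made up for illustration, and the exact parameterization of Pumas-QSP's TransformedBeta may differ.

# Illustrative only: a Beta(α, β) distribution rescaled to the bounds (l, u),
# mimicking how virtual patient parameters are sampled from a population
# distribution (exact Pumas-QSP parameterization may differ)
rescaled_beta(α, β, l, u) = l + (u - l) * Beta(α, β)

d_k1 = rescaled_beta(2.0, 3.0, 0.0, 1.9)   # illustrative α, β; k1 search space bounds
k1_samples = rand(d_k1, 200)               # 200 hypothetical virtual-patient values of k1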

Once the QSP model and all trials are defined, the package can generate potential patients, i.e. find parameter combinations for the provided model that describe the trial data sufficiently well. In the case of hierarchical MCMC methods in particular, the posterior distribution of the population-level parameters given the data is approximated in a sampling-based manner.

# Set the hierarchical keyword argument to true for hierarchical inference
alg = MCMCOpt(maxiters = 100, hierarchical = true)

# Generate potential population
vp = vpop(prob, alg; population_size = 200)
MCMC result of length 200 computed in 19 minutes and 23 seconds.
┌──────────┬──────────┬─────────┐
│    y1(t) │       k1 │      k2 │
├──────────┼──────────┼─────────┤
│  1.29749 │  1.22143 │ 219.586 │
│ 0.402332 │ 0.772955 │ 244.088 │
│ 0.637573 │ 0.965496 │ 228.132 │
│   1.2577 │  1.01553 │ 207.033 │
│ 0.160444 │ 0.962147 │ 246.554 │
│ 0.132817 │  1.33632 │  204.34 │
│  1.27376 │ 0.521806 │ 249.667 │
│  1.29612 │ 0.683937 │ 228.995 │
│  1.08914 │ 0.801522 │ 218.883 │
│ 0.577023 │  1.00244 │ 227.493 │
│ 0.452474 │ 0.973322 │ 216.842 │
│  1.20764 │ 0.765543 │ 228.191 │
│  1.01498 │ 0.833677 │ 247.982 │
│  1.10809 │ 0.739599 │ 249.999 │
│ 0.215572 │ 0.512354 │ 222.297 │
│  1.29661 │ 0.503119 │ 215.854 │
│    ⋮     │    ⋮     │    ⋮    │
└──────────┴──────────┴─────────┘
                 184 rows omitted
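
The resulting virtual population can be converted to a DataFrame for further analysis; we will also use this conversion in the visualization section below.

# Tabulate the virtual population for further analysis
vp_df = DataFrame(vp)
first(vp_df, 5)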

Finally, the original MCMCChain object that was created during Bayesian inference can be accessed via

vp.original
Chains MCMC chain (100×23×1 Array{Float64, 3}):

Iterations        = 1:1:100
Number of chains  = 1
Samples per chain = 100
Wall duration     = 1151.13 seconds
Compute duration  = 1151.13 seconds
parameters        = α_y1(t), α_k1, α_k2, β_y1(t), β_k1, β_k2, noise[1], noise[2], noise[3], noise[4], noise[5]
internals         = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size

Summary Statistics
  parameters      mean       std      mcse   ess_bulk   ess_tail      rhat   e ⋯
      Symbol   Float64   Float64   Float64    Float64    Float64   Float64     ⋯

     α_y1(t)    1.1150    0.0031    0.0027     1.5082    21.0246    1.9241     ⋯
        α_k1    2.8424    0.0027    0.0013     4.0006    12.7391    1.1844     ⋯
        α_k2    0.7727    0.0014    0.0006     6.6863     6.6729    1.5767     ⋯
     β_y1(t)    0.5915    0.0021    0.0018     1.4864    10.3450    1.9655     ⋯
        β_k1    3.2781    0.0046    0.0016    10.9796     6.5693    1.0670     ⋯
        β_k2    0.4726    0.0014    0.0011     1.6328    12.7391    1.7568     ⋯
    noise[1]    0.0001    0.0000    0.0000     1.5391    21.0246    1.9616     ⋯
    noise[2]    0.0000    0.0000    0.0000     1.9497    21.0246    1.5237     ⋯
    noise[3]    0.0000    0.0000    0.0000    11.0197    13.1912    1.0655     ⋯
    noise[4]    0.0001    0.0000    0.0000    12.2259    15.1573    1.0778     ⋯
    noise[5]    0.0000    0.0000    0.0000     9.7457     8.7220    1.1288     ⋯
                                                                1 column omitted

Quantiles
  parameters      2.5%     25.0%     50.0%     75.0%     97.5%
      Symbol   Float64   Float64   Float64   Float64   Float64

     α_y1(t)    1.1102    1.1124    1.1135    1.1184    1.1196
        α_k1    2.8366    2.8408    2.8430    2.8443    2.8478
        α_k2    0.7704    0.7715    0.7730    0.7735    0.7756
     β_y1(t)    0.5883    0.5899    0.5923    0.5932    0.5942
        β_k1    3.2684    3.2755    3.2797    3.2813    3.2842
        β_k2    0.4703    0.4716    0.4722    0.4735    0.4755
    noise[1]    0.0001    0.0001    0.0001    0.0001    0.0001
    noise[2]    0.0000    0.0000    0.0000    0.0000    0.0000
    noise[3]    0.0000    0.0000    0.0000    0.0000    0.0000
    noise[4]    0.0001    0.0001    0.0001    0.0001    0.0001
    noise[5]    0.0000    0.0000    0.0000    0.0000    0.0000

This MCMCChain object does not contain the patient-level parameters, as these are not used to generate virtual patients. The population-level parameters $\alpha$ and $\beta$ for each model parameter (k1 and k2) and initial condition (y1) are included, since these are used to sample virtual patients.

Additionally, the noise parameters are shown; they estimate the measurement noise of each patient. It was assumed that the same noise parameter applies to all states, which is why noise carries a single index per patient. See the MCMC tutorial for more elaborate noise-parameter choices, which could be applied here as well.
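
In formula form, assuming additive Gaussian measurement noise with a single scale $\sigma_i$ per patient (the Exponential(0.1) prior corresponds to the noise_priors argument passed to Trial above; the exact form of the likelihood is an assumption here), the measurement model for observed state $s$ of patient $i$ reads

\[ \tilde{y}_s(t_k) \sim \mathcal{N}\left( y_s(t_k; \theta_i), \sigma_i^2 \right), \qquad \sigma_i \sim \mathrm{Exponential}(0.1) \]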

Visualization

We can conveniently analyze and visualize the results by relying on the plotting recipes of Pumas-QSP.

# Visualization (1/2): Simulation results for patient 2
plot(vp, trials[2], legend = :outerright, grid = :off)
# Visualization (2/2): Distribution of the inferred initial condition y1
params = DataFrame(vp)[!, "y1(t)"]
plt = violin(params, grid = :off, legend = :outerright, label = "VP", ylabel = "y1", color = "purple")

Simulating Virtual Trials

Once we have created and validated our virtual population, we can use it to explore the system under new clinical conditions. For instance, what would happen to patient 2 if the initial condition of y3 and the parameter k1 were different? For this we can use the virtual_trial function.

# Create virtual trial
vt = virtual_trial(trials[2], model;
                   u0 = [y3 => 2.1],
                   params = [k1 => 1.9],
                   tspan = (0.0, 100.0))

# Visualize simulation results
plot(vp, vt, legend = :outerright, grid = :off)