Tracking Extra Quantities

Often, a model contains quantities whose values we would like to inspect, but which are not random variables explicitly drawn from a distribution.

As a motivating example, the most natural parameterization for a model might not be the most computationally feasible. Consider the following (efficiently reparametrized) implementation of Neal’s funnel (Neal, 2003):

using Turing
setprogress!(false)

@model function Neal()
    # Raw draws
    y_raw ~ Normal(0, 1)
    x_raw ~ arraydist([Normal(0, 1) for i in 1:9])

    # Transform:
    y = 3 * y_raw
    x = exp.(y ./ 2) .* x_raw
    return nothing
end
[ Info: [Turing]: progress logging is disabled globally
Neal (generic function with 2 methods)

In this case, the random variables exposed in the chain (x_raw, y_raw) are not in a helpful form — what we’re after are the deterministically transformed variables x and y.
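
For reference, the "natural" centered parameterisation of the funnel samples x and y directly. A minimal sketch (the name Neal_centered is ours, not part of the example above) could look like this; it is exactly this form that gradient-based samplers such as NUTS tend to struggle with, which is why the reparametrised model is used instead:

@model function Neal_centered()
    y ~ Normal(0, 3)
    x ~ arraydist([Normal(0, exp(y / 2)) for i in 1:9])
end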

There are two ways to track these extra quantities in Turing.jl.

Using := (during inference)

The first way is to use the := operator, which behaves exactly like = except that the values of the variables on its left-hand side are automatically added to the chain returned by the sampler. For example:

@model function Neal_coloneq()
    # Raw draws
    y_raw ~ Normal(0, 1)
    x_raw ~ arraydist([Normal(0, 1) for i in 1:9])

    # Transform:
    y := 3 * y_raw
    x := exp.(y ./ 2) .* x_raw
end

sample(Neal_coloneq(), NUTS(), 1000)
Info: Found initial step size
  ϵ = 1.6
Chains MCMC chain (1000×32×1 Array{Float64, 3}):

Iterations        = 501:1:1500
Number of chains  = 1
Samples per chain = 1000
Wall duration     = 7.16 seconds
Compute duration  = 7.16 seconds
parameters        = y_raw, x_raw[1], x_raw[2], x_raw[3], x_raw[4], x_raw[5], x_raw[6], x_raw[7], x_raw[8], x_raw[9], y, x[1], x[2], x[3], x[4], x[5], x[6], x[7], x[8], x[9]
internals         = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size

Summary Statistics
  parameters      mean       std      mcse    ess_bulk   ess_tail      rhat    ⋯
      Symbol   Float64   Float64   Float64     Float64    Float64   Float64    ⋯

       y_raw    0.0368    0.9757    0.0254   1493.2877   899.3448    1.0126    ⋯
    x_raw[1]   -0.0361    0.9478    0.0274   1188.9118   670.7257    1.0044    ⋯
    x_raw[2]    0.0049    0.9986    0.0266   1407.5027   706.3831    1.0021    ⋯
    x_raw[3]   -0.0188    1.0110    0.0299   1140.6329   816.5882    1.0000    ⋯
    x_raw[4]   -0.0034    1.0300    0.0292   1240.9278   837.8248    0.9997    ⋯
    x_raw[5]    0.0038    1.0238    0.0237   1874.6691   881.7711    1.0010    ⋯
    x_raw[6]   -0.0210    1.0019    0.0264   1429.9060   802.6860    0.9993    ⋯
    x_raw[7]   -0.0114    0.9919    0.0244   1637.3895   765.4260    0.9995    ⋯
    x_raw[8]    0.0262    1.0662    0.0291   1340.7217   708.0435    1.0013    ⋯
    x_raw[9]   -0.0071    0.9481    0.0272   1232.8924   785.8980    0.9994    ⋯
           y    0.1104    2.9271    0.0761   1493.2877   899.3448    1.0126    ⋯
        x[1]   -0.1569    6.0110    0.2311    858.9863   766.1633    1.0026    ⋯
        x[2]   -0.2275    6.7924    0.2171    913.2291   689.8085    1.0005    ⋯
        x[3]   -0.1377    7.1633    0.2214   1044.7285   820.4943    0.9994    ⋯
        x[4]    0.0214    6.2331    0.2324    806.5543   683.2173    0.9998    ⋯
        x[5]   -0.2719    6.9901    0.2232    927.7552   739.5350    1.0005    ⋯
        x[6]   -0.0887    5.7286    0.2198   1080.7246   671.8874    0.9992    ⋯
      ⋮           ⋮         ⋮         ⋮          ⋮          ⋮          ⋮       ⋱
                                                     1 column and 3 rows omitted

Quantiles
  parameters       2.5%     25.0%     50.0%     75.0%     97.5%
      Symbol    Float64   Float64   Float64   Float64   Float64

       y_raw    -1.7581   -0.6412    0.0701    0.6970    1.9640
    x_raw[1]    -1.8605   -0.6922   -0.0619    0.6249    1.7125
    x_raw[2]    -1.9069   -0.6752   -0.0238    0.6707    1.8848
    x_raw[3]    -2.0461   -0.7362    0.0194    0.6660    1.8404
    x_raw[4]    -2.0498   -0.6633   -0.0158    0.6493    2.0251
    x_raw[5]    -1.8990   -0.7097   -0.0050    0.6616    2.0294
    x_raw[6]    -1.9461   -0.7228   -0.0432    0.6645    1.8821
    x_raw[7]    -2.0208   -0.6626   -0.0024    0.6466    1.9816
    x_raw[8]    -2.0464   -0.7010   -0.0026    0.7874    2.1244
    x_raw[9]    -1.7434   -0.6299   -0.0415    0.5949    1.9057
           y    -5.2744   -1.9236    0.2102    2.0911    5.8920
        x[1]    -9.0630   -0.6578   -0.0336    0.4767    9.7759
        x[2]   -10.6352   -0.6269   -0.0108    0.6064    9.3749
        x[3]   -10.4021   -0.6774    0.0127    0.5651    8.5748
        x[4]    -9.8735   -0.5631   -0.0057    0.6342   11.5052
        x[5]   -11.9921   -0.6854   -0.0029    0.5432    9.2096
        x[6]   -11.0783   -0.7270   -0.0142    0.4854    9.5518
      ⋮           ⋮          ⋮         ⋮         ⋮         ⋮
                                                   3 rows omitted

Using returned (post-inference)

Alternatively, one can specify the extra quantities as part of the model function’s return statement:

@model function Neal_return()
    # Raw draws
    y_raw ~ Normal(0, 1)
    x_raw ~ arraydist([Normal(0, 1) for i in 1:9])

    # Transform and return as a NamedTuple
    y = 3 * y_raw
    x = exp.(y ./ 2) .* x_raw
    return (x=x, y=y)
end

chain = sample(Neal_return(), NUTS(), 1000)
Info: Found initial step size
  ϵ = 1.6
Chains MCMC chain (1000×22×1 Array{Float64, 3}):

Iterations        = 501:1:1500
Number of chains  = 1
Samples per chain = 1000
Wall duration     = 1.47 seconds
Compute duration  = 1.47 seconds
parameters        = y_raw, x_raw[1], x_raw[2], x_raw[3], x_raw[4], x_raw[5], x_raw[6], x_raw[7], x_raw[8], x_raw[9]
internals         = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size

Summary Statistics
  parameters      mean       std      mcse    ess_bulk   ess_tail      rhat    ⋯
      Symbol   Float64   Float64   Float64     Float64    Float64   Float64    ⋯

       y_raw   -0.0219    0.9474    0.0271   1214.9778   900.6339    1.0003    ⋯
    x_raw[1]    0.0176    0.9917    0.0304   1059.6758   701.2660    1.0018    ⋯
    x_raw[2]   -0.0243    0.9868    0.0252   1552.6142   757.1968    0.9994    ⋯
    x_raw[3]   -0.0000    1.0009    0.0260   1484.2102   939.2902    0.9991    ⋯
    x_raw[4]    0.0181    1.0094    0.0310   1062.7757   703.4188    0.9998    ⋯
    x_raw[5]    0.0269    0.9493    0.0265   1263.3237   584.3123    1.0018    ⋯
    x_raw[6]    0.0155    1.0508    0.0298   1236.1668   708.3744    1.0026    ⋯
    x_raw[7]    0.0181    0.9974    0.0306   1074.5917   771.6962    1.0011    ⋯
    x_raw[8]    0.0322    0.9854    0.0316    966.1882   843.7002    0.9998    ⋯
    x_raw[9]   -0.0257    0.9744    0.0344    807.6057   799.1664    1.0063    ⋯
                                                                1 column omitted

Quantiles
  parameters      2.5%     25.0%     50.0%     75.0%     97.5%
      Symbol   Float64   Float64   Float64   Float64   Float64

       y_raw   -1.8581   -0.6714   -0.0091    0.5911    1.8157
    x_raw[1]   -2.0268   -0.6383    0.0546    0.6636    1.9508
    x_raw[2]   -1.8714   -0.7199   -0.0018    0.6422    1.8891
    x_raw[3]   -1.8869   -0.6519    0.0130    0.6770    1.9338
    x_raw[4]   -1.8775   -0.6646    0.0092    0.7116    1.9957
    x_raw[5]   -1.9749   -0.6176    0.0281    0.6672    1.9179
    x_raw[6]   -2.0109   -0.6960   -0.0410    0.7359    2.0653
    x_raw[7]   -1.9488   -0.6998    0.0180    0.6828    1.9050
    x_raw[8]   -1.8551   -0.6417    0.0347    0.7206    2.0340
    x_raw[9]   -1.8486   -0.6982   -0.0490    0.6562    1.8030

The sampled chain does not contain x and y, but we can extract their values using the returned function. Calling this function returns a matrix of NamedTuples, one entry per sample (per chain):

nts = returned(Neal_return(), chain)
1000×1 Matrix{@NamedTuple{x::Vector{Float64}, y::Float64}}:
 (x = [0.780804018672555, 0.36671669189238987, -0.9675291294669861, -0.6914276386032197, 0.8526355903638551, -0.05189466797704134, 0.01148593322774967, -0.1706878906767787, -0.770108471402175], y = -0.8927144876660456)
 (x = [-0.20458224827356, -0.40706442220282085, 0.5735018819230797, 0.3326661103439895, -0.7457196947945319, -0.22875160239588985, 0.3246570121885756, -0.3399089355087911, 0.5146772129325653], y = -1.0304309907547822)
 (x = [0.2806932021489665, -0.09632527943421079, 0.3902458343433957, 0.12490732443945954, -0.03816513355344026, -0.05411336972761893, 0.023492584132907293, 0.038485041628172484, -0.3171906000328188], y = -2.990929964408709)
 (x = [0.28513328501738244, 0.10016821462051444, -0.09087542860932833, 0.09185154953209333, -0.11070140838664491, 0.01649702051960157, 0.1209756643311384, 0.18342179611166498, -0.20298605574297968], y = -3.1275977233247962)
 (x = [-0.9613243564720262, -0.27672376434209, 0.2150992012559899, 0.1868924805886456, 0.26612884988446384, -0.1770475133977823, -0.054942363357576185, 0.016702797839447444, 0.5305292035093443], y = -1.4528254843244193)
 (x = [-0.085265554261034, 0.09600528735436081, -0.14682180175146375, 0.20325937386994208, 0.049604990899146446, 0.16543877206842314, -0.30351226325976427, -0.08970377636211208, -0.0437627577404895], y = -3.2742809191817597)
 (x = [-0.20639051400625902, 0.20904954473768247, -0.1225319108361128, 0.3096321110090707, -0.04632579584509106, -0.03117941687083077, -0.31242189740941667, -0.1314454220822909, -0.25164224876479796], y = -3.7258335923850536)
 (x = [0.8111746736466581, -0.8784946304645654, 0.11091788854097621, -0.24888850424169498, -0.8902640180146602, 1.2672666072302172, -0.8175532151244481, 0.05568136349615955, -1.771172301687844], y = -0.7930522339289877)
 (x = [2.6517078114398327, 2.3092298333397903, -0.23198407388232067, 1.9682893277945612, -1.1862760823306995, -1.8078336703772389, -2.8223978290530667, 2.5028791764610934, 0.5914183760898305], y = 0.8710145964587437)
 (x = [0.23917545900002243, -0.05451910457838288, 0.14480341460567986, 0.7141128406469788, -0.5056545226934145, -0.32115034388191915, 0.002209689849239694, 0.4336129944888335, -0.1795148524163962], y = -2.1818054094550194)
 ⋮
 (x = [-0.013228758656127357, -0.0420884103453206, 0.1685107708480854, 0.110887949094864, 0.22972491355535837, 0.03394398215413634, 0.014846222443168099, -0.10395288809936823, 0.15918473108306144], y = -4.672227227080943)
 (x = [-3.0309254457800408, -2.960317076781767, 0.9054196453867444, -2.8715725228360203, -0.06586216512285975, -1.2762824364240761, -3.1851629850622256, 0.11780545206727742, -0.013883298531069493], y = 2.5773051960130364)
 (x = [-16.818576973571822, 29.13038961793242, -19.275425602487424, -3.4489082633729695, -17.611181716254098, 12.567775774027488, 2.208849972544328, -9.114297225534104, -10.349801450987265], y = 6.018563925283852)
 (x = [0.008675943371756816, -0.036966773621600856, 0.022529079048443732, -0.0038934405293482546, 0.021871025747151322, -0.013256613491789405, -0.02120328973257597, 0.012551929236787948, -0.03645433450296797], y = -7.432342083861056)
 (x = [-0.05516335758573229, -0.14918304721456424, 0.10698051638417173, -0.18381715592698752, 0.17171631097429865, -0.18192719831051782, 0.05374941184623288, 0.010520910542585348, -0.16655566005520092], y = -4.295958239999082)
 (x = [0.07782673749469453, -0.30300420904233516, 0.13275575565545128, -0.22775488296847513, -0.08061149993705202, -0.19765964970623545, -0.2709061817023404, -0.14525725447798377, 0.4444855391649469], y = -2.9785534861439507)
 (x = [0.7375000653942412, -0.5493051105510502, -0.15084596628274474, -0.9152112056758807, -0.14290370230616836, 0.06403195603145237, -0.4473970421430034, -0.4359716579339583, 0.6720646788828362], y = -0.9943939354110027)
 (x = [-6.173465286395937, 3.7549887041995085, -0.8412856988803715, 8.092117922641547, -0.7481860188642706, -5.138087535550119, 2.866691810981509, 4.009913831381422, 1.5064548310917045], y = 2.765078039108201)
 (x = [0.14718623738084569, -0.017100825942496782, -0.4397859723367684, -0.051512550484016005, -0.20867454674312388, 0.1519261174466234, 0.08821771038870185, 0.10178427908271903, 0.10166838865991873], y = -3.2124196479905294)

Each element of this matrix is a NamedTuple, as specified in the return statement of the model:

nts[1]
(x = [0.780804018672555, 0.36671669189238987, -0.9675291294669861, -0.6914276386032197, 0.8526355903638551, -0.05189466797704134, 0.01148593322774967, -0.1706878906767787, -0.770108471402175], y = -0.8927144876660456)
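
Since nts is an ordinary Julia array, the tracked quantities can be gathered into plain arrays for further processing. A small sketch:

# Collect the generated quantities from the NamedTuples (a sketch).
ys = [nt.y for nt in vec(nts)]                 # vector of y values, one per sample
xs = reduce(hcat, [nt.x for nt in vec(nts)])   # 9×1000 matrix, one column per sample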

Which to use?

There are some pros and cons of using returned, as opposed to :=.

Firstly, returned is more flexible, as it allows you to track any type of object; := only works with values that can be inserted into an MCMCChains.Chains object. (Notice that x is a vector: in the := example above, the chain stores each element of x separately, so reconstructing the full vector afterwards is rather awkward, as the sketch below illustrates.)
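
For instance, assuming the chain from the := example had been stored in a variable, say coloneq_chain (a name introduced here for illustration), reassembling x for every sample might look like this sketch:

# Reassemble the vector x from its individually stored components x[1], ..., x[9].
x_columns = [vec(coloneq_chain[Symbol("x[$i]")]) for i in 1:9]
x_matrix = reduce(hcat, x_columns)   # 1000×9 matrix; row s holds the vector x at sample s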

A drawback is that naively using returned can lead to unnecessary computation during inference: the return values are still computed at every step of sampling (they are part of the model function), only to be thrown away. If the extra quantities are expensive to compute, this overhead can be significant.

To avoid this, you essentially have to create two different models, one for inference and one for post-inference. The simplest way to do this is to add a boolean argument to the model:

@model function Neal_coloneq_optional(track::Bool)
    # Raw draws
    y_raw ~ Normal(0, 1)
    x_raw ~ arraydist([Normal(0, 1) for i in 1:9])

    if track
        y = 3 * y_raw
        x = exp.(y ./ 2) .* x_raw
        return (x=x, y=y)
    else
        return nothing
    end
end

chain = sample(Neal_coloneq_optional(false), NUTS(), 1000)
Info: Found initial step size
  ϵ = 1.6
Chains MCMC chain (1000×22×1 Array{Float64, 3}):

Iterations        = 501:1:1500
Number of chains  = 1
Samples per chain = 1000
Wall duration     = 1.43 seconds
Compute duration  = 1.43 seconds
parameters        = y_raw, x_raw[1], x_raw[2], x_raw[3], x_raw[4], x_raw[5], x_raw[6], x_raw[7], x_raw[8], x_raw[9]
internals         = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size

Summary Statistics
  parameters      mean       std      mcse    ess_bulk   ess_tail      rhat    ⋯
      Symbol   Float64   Float64   Float64     Float64    Float64   Float64    ⋯

       y_raw   -0.0012    0.9645    0.0307    991.6383   708.8573    0.9990    ⋯
    x_raw[1]   -0.0091    0.9769    0.0283   1190.6282   739.5350    0.9991    ⋯
    x_raw[2]    0.0355    0.9942    0.0276   1304.3098   786.8295    1.0012    ⋯
    x_raw[3]    0.0258    0.9392    0.0275   1185.0749   793.4008    1.0008    ⋯
    x_raw[4]    0.0344    0.9777    0.0279   1238.5146   862.8499    0.9992    ⋯
    x_raw[5]   -0.0239    0.9668    0.0344    767.6990   760.7764    1.0021    ⋯
    x_raw[6]    0.0295    0.9815    0.0278   1250.8573   874.2977    1.0008    ⋯
    x_raw[7]    0.0169    0.9468    0.0314    900.9194   763.7200    0.9992    ⋯
    x_raw[8]   -0.0332    0.9836    0.0291   1125.5328   745.6237    0.9991    ⋯
    x_raw[9]   -0.0048    0.9632    0.0243   1571.4905   541.4275    1.0093    ⋯
                                                                1 column omitted

Quantiles
  parameters      2.5%     25.0%     50.0%     75.0%     97.5%
      Symbol   Float64   Float64   Float64   Float64   Float64

       y_raw   -1.8502   -0.6544    0.0214    0.5912    1.9484
    x_raw[1]   -1.8245   -0.7153    0.0153    0.6439    1.8994
    x_raw[2]   -1.9624   -0.6119    0.0353    0.7032    1.9432
    x_raw[3]   -1.7436   -0.6256    0.0459    0.6534    1.9303
    x_raw[4]   -1.8735   -0.6277    0.0532    0.7051    1.9086
    x_raw[5]   -1.8321   -0.7225   -0.0139    0.6126    1.9221
    x_raw[6]   -1.9152   -0.6083    0.0445    0.6283    1.9949
    x_raw[7]   -1.8542   -0.5990    0.0381    0.6286    1.8823
    x_raw[8]   -1.9140   -0.6835   -0.0648    0.5905    1.9290
    x_raw[9]   -1.8072   -0.6796   -0.0279    0.6692    1.8125

The above ensures that x and y are not calculated during inference, but still allows us to use returned to extract them:

returned(Neal_coloneq_optional(true), chain)
1000×1 Matrix{@NamedTuple{x::Vector{Float64}, y::Float64}}:
 (x = [0.4873720353151997, 0.01342104124284014, -0.18258035992280544, -0.01449463695949217, -0.01712193228403781, -0.2615903458248041, -0.28302510683305376, -0.2602679079921507, 0.02669106030205712], y = -2.422654867137844)
 (x = [2.038371315610481, -2.095614121749677, -0.28349180755706455, -0.8702149681266681, -0.5999102257002429, -0.31949995596520947, -1.5531363783942307, -0.27892467136984817, -0.8334462434938362], y = 0.08038395392369446)
 (x = [-1.0221874388603862, 2.5745407758323133, 0.6703858461075768, 1.006061840206697, 0.9610029888500635, 0.7820455728200704, 2.1215958985346117, 0.6361399627724154, 0.8282277376221668], y = 0.3613632318067681)
 (x = [0.9847764231783616, -1.187755045743322, -0.4192852508609086, -0.4966236692414811, 0.9479777013113188, -0.45116153936746667, 0.01192374681055855, 0.04574094028876272, 0.36137688920286626], y = -0.7375929183182814)
 (x = [-0.22342598530769792, 0.8264293599455222, -3.1918977426654336, 0.526482497696655, -0.6107521710770457, 0.6469065575890981, 0.014373211773820012, -3.4518204548145874, -0.859732277684122], y = 0.4405702083915634)
 (x = [0.3413424662766032, -1.4545754904927992, 0.7465540604176304, 0.025612288229667633, 1.1597549615161238, -1.1155517839805824, -0.9231636693615496, 3.0320777354987727, -0.028729747678146466], y = 0.04908717505342769)
 (x = [0.3965536359867939, -0.09251781065054335, 0.27896448713753186, 0.12973290228613527, 0.5538207060547197, -0.5236280136843662, -0.08275695827694325, -0.12061881523703197, 0.22430829689044982], y = -2.5524264885345556)
 (x = [2.8231865842870008, 1.3933863744652975, -12.179490470823401, 7.315630855536224, -3.875524443601507, 2.966566637710961, -2.105138750076685, 1.3200399356559853, 4.81104757215425], y = 3.2033036659858185)
 (x = [-0.11703612704763816, -0.09667386305828207, -0.39628677887748587, -0.008237286173238698, 0.5856108498342999, 0.05327151500269577, -0.29197587227907373, 0.2861480173549966, 0.3395970259070481], y = -2.050051147665855)
 (x = [-0.36063402555532154, 2.2618542791480425, 2.618723369324851, 1.1988778098844997, -4.525857236740389, -1.6455171941705977, 2.215427541402753, -1.209814828320934, 1.4486794702995367], y = 2.045262449744928)
 ⋮
 (x = [0.21531123117509443, -0.08421704182278261, -0.16840765566401256, 0.059469348827523985, -0.2380270220236725, 0.1283217829646645, -0.22784260707440113, 0.21676354465943914, 0.15487957949844708], y = -4.286446600130725)
 (x = [-8.102795125178073, 7.298626398766161, 10.111760142403138, -3.4474675387156415, 6.859205819351766, -5.243173736300095, 0.598021672704385, -7.631669504170606, -10.3226686322096], y = 4.003934478230173)
 (x = [-0.025631865990387748, 0.06835494083551245, 0.09361236286931102, -0.21338559089242803, 0.17993436328629875, 0.04919990926580928, -0.23515915707910845, 0.17227339049170803, -0.015374057036511036], y = -3.887068251853745)
 (x = [0.5477668836907912, -9.971119367412882, 0.034647951022238256, 8.047823250149522, -7.092965361244821, -2.8031483240398365, -3.8085711686974366, -6.423576121950207, 1.9146807681244589], y = 3.326383249382836)
 (x = [-0.2793550061428327, 0.5625958236897364, -0.14201349850290026, -0.5616710492642616, 0.5809861203194013, 0.6193766132850003, 0.3605021167911534, 0.3536355464117559, -0.536553150505042], y = -1.645370835827549)
 (x = [-0.2760368408696536, 0.37704588615443185, 0.11642492844500747, -0.5518643002546505, 0.9065638301611354, 0.8885687143580647, 0.08351777389253076, 0.38267686643909377, 0.036344428156103954], y = -1.6173382807761207)
 (x = [-0.7760892696420877, 0.3461794242778, -2.247084909643931, -0.9762215255775478, 1.6684638211195044, 1.3054614199037826, 0.1498566382007964, -0.3892422130902891, -0.9776660942707378], y = 0.2693603370326505)
 (x = [0.4368135856977221, 1.6756095830039204, -0.6210610786024364, 0.582716990520637, 0.0987693906942709, 1.0730782994313053, -0.1892225827618058, -0.8926827836685423, 0.45993916748040675], y = -0.15372990024543498)
 (x = [0.08041134095982821, 0.19575388778394054, 0.019489420929866892, 1.3730421492113192, 0.09461521939855624, -0.19144808957383969, -0.38168102416189337, 0.5820392724126038, -0.3179074148229471], y = -1.1395331059145453)

Another equivalent option is to use a submodel:

@model function Neal()
    y_raw ~ Normal(0, 1)
    x_raw ~ arraydist([Normal(0, 1) for i in 1:9])
    return (x_raw=x_raw, y_raw=y_raw)
end

chain = sample(Neal(), NUTS(), 1000)

@model function Neal_with_extras()
    neal ~ to_submodel(Neal(), false)
    y = 3 * neal.y_raw
    x = exp.(y ./ 2) .* neal.x_raw
    return (x=x, y=y)
end

returned(Neal_with_extras(), chain)
Info: Found initial step size
  ϵ = 1.6
1000×1 Matrix{@NamedTuple{x::Vector{Float64}, y::Float64}}:
 (x = [-0.19657487244254823, -0.9474177580258429, -0.770774446897981, 0.7740336503563247, 0.6148563050642828, -0.2503008659875269, 0.2626300042428014, -0.21005070635241005, 1.2970719615955515], y = 0.11708534912418522)
 (x = [0.21524754171725544, 0.2359454843510407, 0.19538124755398587, -0.32881785199917957, -0.5216692361778424, 0.2057852413901789, -0.2848733491736807, 0.20375441891865326, -0.647228028947064], y = -1.2475012778676329)
 (x = [-1.1496664114828097, 0.2039317988682675, 0.8125418358089641, 0.4509536476710983, 1.7212104978919902, -0.3979753838146655, -0.0921247667112899, -1.522901617552012, 1.9576041637089745], y = 0.5967235201819756)
 (x = [-2.538511157862421, 0.5816176658483992, -0.16828525559196053, 3.2436869951927245, -2.859286503649795, 1.26020712609257, 0.86323904878544, -1.3480071324289464, -1.2008926173265728], y = 1.2357063080454471)
 (x = [0.25247039216048794, 0.9415207930641828, -1.6296954730964162, 1.6770701781802912, -3.2704262212862005, -0.7051090885936456, -0.13529062369981595, 0.8054758260730189, 1.583607560575377], y = 1.6204840155181883)
 (x = [0.013964416607607508, -0.04577866161974515, 0.22599017158850876, 0.20589904917868412, 0.09017471394033892, 0.08058051603470524, -0.10068106661552956, -0.07089674117826501, -0.03264692879764548], y = -4.083348566247646)
 (x = [-0.2123971604484253, -0.01953108475903124, 0.141591217493728, 0.3273849379550891, 0.15529382833197306, 0.051051447949263556, -0.1806888877372773, -0.056965328347919154, 0.26066193761429474], y = -3.213902342743335)
 (x = [10.016401226398454, 0.09534479197848815, -8.653984514856093, -2.343833768270758, -6.493546757841682, -2.4976317822952288, 4.993790691523128, 3.6796856405442497, -6.666412758316671], y = 3.8331256500476028)
 (x = [-0.5881818879746895, 3.303393892147707, 0.3114335479718548, 1.2327224334954747, -0.2883684430212568, 0.43399408282995744, -2.3851788841293224, -2.287086982368246, 0.6066408669523751], y = 0.7369952014326346)
 (x = [14.080946425774691, 25.36279128145031, 33.99562022491606, -0.5462392549748315, -25.218245108955195, 36.88791333124625, 14.859241892311164, -36.14388885096578, -20.338803047229494], y = 6.55317778381524)
 ⋮
 (x = [-0.20404398324510575, -0.1724776539797675, -0.23593826038201463, -0.29278073406241767, 0.09956681054567747, 0.2231315645496126, 0.1619316812672117, -0.3113221158306133, 0.1166698758846848], y = -2.529220170615389)
 (x = [0.837920142269007, 2.5763214768880514, 3.9114850170310334, 7.519255539075549, -1.0303832811759572, -2.9317857144674835, -0.17563953049735553, 2.6566184064663503, -0.596105685840523], y = 2.4530681828672845)
 (x = [-1.6684581699307566, 0.31436899799123424, -0.23108075855702262, -0.10193017122317134, 0.39965032905919967, 0.38165731734792674, -0.8586152804600348, -0.37579932063875104, -0.19832711223612226], y = -1.496337987976701)
 (x = [19.66720212396965, 3.6681111185800974, -6.1932245135845445, -7.352489668367346, 1.137285471858576, -8.24838449637217, 4.764205544087361, 0.19396312537735183, -4.816669735419467], y = 4.130905893963288)
 (x = [0.28521243714804073, 0.11824370881987625, -0.3236869227227385, -0.24696277717486537, -0.3476678983612236, 0.19443093587888807, 0.35175121490379613, 0.19085138137807126, 0.22643558128431177], y = -2.5979560013590097)
 (x = [0.046005546940453564, 0.12551086142257104, -0.20835307949208742, -0.19375980547696756, -0.14365249616089656, 0.028226166870575813, 0.10191564053856128, 0.19550621716864106, 0.0890305822269848], y = -3.492593516449147)
 (x = [-0.42257317200600014, -0.4057868997801819, -0.7800992607521872, 0.0629016841276293, 0.06287651902886161, -0.17113809775914415, -0.2882139098475776, 0.8408314003295937, -0.395059679133044], y = -1.330427904635018)
 (x = [-0.040599433066387476, 0.8265980999676865, 1.1583285214324415, -0.5888116859574852, -0.7845014788836039, -0.5120956375636492, -1.2592565045605535, -2.077824943315352, -0.33948072970083265], y = -0.00492468863325074)
 (x = [0.44067455328200617, -0.0014614630711342258, 0.39954796401470843, -0.10279853434813442, 0.12248298559616065, 0.3252892117573872, 0.0746579993485408, -0.6152578356980377, 0.03528504383472096], y = -2.4691331142007122)

Note that for the returned call to work, the Neal_with_extras() model must use the same variable names as those stored in chain. This means the submodel Neal() must not be prefixed, i.e. to_submodel must be passed false as its second argument.
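
For illustration, a prefixed variant (the default behaviour of to_submodel) would look like the sketch below; its variables would be stored under prefixed names such as neal.y_raw and neal.x_raw[1], which do not match the names in chain, so returned would not be able to match them up:

@model function Neal_with_extras_prefixed()
    neal ~ to_submodel(Neal())   # prefixing enabled by default
    y = 3 * neal.y_raw
    x = exp.(y ./ 2) .* neal.x_raw
    return (x=x, y=y)
end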
