Why do manually programmed deviance and DIC-calculated deviance differ in JAGS?


I am fitting a set of 16 models in JAGS. I have a function in the JAGS model that calculates the log of the probability of each value of the outcome variable, and a node that takes -2 times the sum of those log probabilities; i.e., I have a custom formula that calculates the deviance of each model. I wanted to check that this definition of deviance is the same one that JAGS is using.
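As a minimal sketch of what I mean (not one of my actual 16 models), here is a hypothetical normal regression with such a custom deviance node, written for the rjags interface. The logdensity.norm() function requires JAGS 4.x, and N, x, y, and the priors here are placeholders:

    library(rjags)

    # Hypothetical stand-in for one of the 16 models: a simple normal
    # regression with a hand-rolled deviance node.
    model_string <- "
    model {
      for (i in 1:N) {
        mu[i] <- beta0 + beta1 * x[i]
        y[i] ~ dnorm(mu[i], tau)
        # log of the probability (density) of each outcome value;
        # logdensity.norm() requires JAGS >= 4.0
        loglik[i] <- logdensity.norm(y[i], mu[i], tau)
      }
      # custom deviance: -2 * the sum of the log probabilities
      dev.manual <- -2 * sum(loglik[1:N])

      beta0 ~ dnorm(0, 1.0E-4)
      beta1 ~ dnorm(0, 1.0E-4)
      tau   ~ dgamma(0.001, 0.001)
    }
    "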

After running 5000 burn-in iterations and 5000 sampling iterations, I obtained the following results (a table comparing the custom deviance to the DIC deviance for each of the 16 models, originally posted as an image).

Basically, for most models the two deviance values are close but not the same, and for other models (e.g., 7, 13, 16) they are vastly different.

Why is the deviance calculated using my custom formula different from the deviance obtained using the automatic approach based on DIC?

After a bit of messing around, I think the following is the case.

First, the DIC deviance is obtained using a separate set of samples from the ones used for the main parameter estimates. Thus, the two estimates differ between runs due to the inherent random aspects of MCMC estimation.

Assuming the chain length is long (several thousand iterations) and the burn-in is adequate, the differences between the main samples and the DIC samples should be small if the model is converging and sampling reasonably efficiently. Thus, big differences co-occur with model convergence issues.
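One way to check this (a sketch continuing the placeholder model above, assuming rjags): monitor the dic module's built-in deviance node and the custom dev.manual node in the same sampling call, so both are computed from exactly the same draws. If they then agree, the earlier discrepancies were just the two quantities being evaluated on different sets of samples. Here data_list is a placeholder for the actual data (N, x, y):

    load.module("dic")  # exposes a monitorable 'deviance' node
    jm <- jags.model(textConnection(model_string), data = data_list, n.chains = 3)
    update(jm, 5000)    # burn-in, as in the runs above

    # Monitor both deviances over the same 5000 iterations.
    samp <- jags.samples(jm, c("deviance", "dev.manual"), n.iter = 5000)
    mean(samp$deviance)
    mean(samp$dev.manual)

    # By contrast, the automatic DIC approach draws its own, separate
    # set of samples, which is where the run-to-run differences come in:
    # dic.samples(jm, n.iter = 5000, type = "pD")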

This all assumes that the original deviance calculation was programmed correctly.

