Abstract
Main
1. Introduction
2. Results
2.1. The risk as a selection criterion for already fitted models
2.2. Example application: Selecting among disparate parameter sets of a biophysical model
2.3. A conceptual model for replicate variations
2.4. Model discrepancy as a baseline for non-stationary replications
2.5. \(δ^{\mathrm{EMD}}\): Expressing misspecification as a discrepancy between CDFs
2.6. \(\qproc\): A stochastic process on quantile functions
2.7. Calibrating and validating the \(\Bemd{}\)
2.8. Characterizing the behaviour of \(R\)-distributions
2.9. Different criteria embody different notions of robustness
3. Discussion
4. Methods
4.1. Poisson noise model for black body radiation observations
4.2. Neuron model
4.3. Construction of an \(R\)-distribution from an HB process 𝒬
4.4. Calibration experiments
4.5. The hierarchical beta process
Formal
Data availability
Acknowledgments
References
Supplementary
5. Supplementary Methods
6. Supplementary Results
7. Supplementary Discussion
Notebooks
Pedagogical figures
Code for comparing models of blackbody radiation
Code for comparing models of the pyloric circuit
Comparison with other criteria – large sample asymptotics
Comparison with other criteria – inconclusive data
Implementations of common model comparison criteria
MDL (minimum description length) utility functions
Aleatoric vs Sampling vs Replication uncertainty
Effect of \(c\) on the \(R\)-distributions in the Prinz model
Effect of \(c\) on calibration curves
Rejection probability is not predicted by loss distribution
Task definition for testing using the \(Q\) distribution for model comparison