Modern supernova cosmological analysis requires a dizzying collection of programs, scripts, and tools in order to effectively measure details of our Universe from exploding stars. Some of these tools include SNANA, SuperNNova, CosmoSIS, and Pippin. Although each of these tools has been individually tested to ensure it works correctly, it's still important to check that when we combine them all together we still get a sensible answer. The final product of a supernova cosmological analysis is often a Bayesian contour, which tells us how likely different combinations of cosmological parameters are (see Figure 1 for an example of a Bayesian contour). What we really want to know is whether this contour is "correct" — in other words, is our combination of tools over- or underestimating our cosmological likelihoods?
Previous projects, such as Brout, D.; Scolnic, D.; et al. (2018), have validated these contours by running many simulations and showing that "on average" the contour is correct. However, our data is not the average of many different universes; it comes from a single Universe. So we really want to show that our contours are correct for a single dataset, rather than after averaging over many datasets.
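The "on average" validation described above is usually called a coverage test: simulate many datasets from a known truth, and check what fraction of the reported intervals actually contain that truth. The sketch below illustrates the idea with a toy Gaussian likelihood standing in for the full pipeline; the parameter name, scatter, and counts are illustrative assumptions, not values from any real analysis.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy setup (illustrative values): a "true" parameter and a Gaussian
# measurement scatter standing in for a full SN analysis pipeline.
TRUE_W = -1.0       # hypothetical true parameter value
SIGMA = 0.05        # assumed per-dataset measurement scatter
N_UNIVERSES = 2000  # number of simulated "universes"

covered = 0
for _ in range(N_UNIVERSES):
    # Simulate one dataset's best-fit value of the parameter
    w_fit = rng.normal(TRUE_W, SIGMA)
    # Does the reported 68% interval (fit +/- SIGMA) contain the truth?
    if abs(w_fit - TRUE_W) < SIGMA:
        covered += 1

coverage = covered / N_UNIVERSES
print(f"Empirical 68% coverage: {coverage:.3f}")
```

If the pipeline's uncertainties are honest, the printed fraction should land near 0.68; the catch, as noted above, is that this statement only holds averaged over many simulated universes, not for any one dataset.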
In this project, we aim to provide a method for validating cosmological contours of single datasets. To do this, we need a way of calculating the "correct" contour, which we can then compare to our Bayesian contour to see how well they match. The method we use is a frequentist technique called the Neyman construction. Without getting into too much detail, the Neyman construction approximates a contour by performing many simulated experiments at different points in parameter space: a point is included in the contour if the statistic measured from the real data looks typical of the statistics simulated at that point. We want to use this Neyman contour as a way of validating our Bayesian contour.
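To make the Neyman construction concrete, here is a minimal one-dimensional sketch. A Gaussian mean stands in for a cosmological parameter, and the sample mean stands in for the analysis pipeline's summary statistic; all names and numbers are illustrative assumptions, not the project's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "observed" data: n measurements with unknown mean mu. In the real
# analysis, mu would be a cosmological parameter and the statistic would
# come from the full SNANA/CosmoSIS pipeline.
n = 50
true_mu = 0.3
data = rng.normal(true_mu, 1.0, size=n)
observed_stat = data.mean()

# Neyman construction: scan a grid of candidate parameter values. At each
# grid point, simulate many pseudo-experiments assuming that value is true,
# and accept the point if the observed statistic falls inside the central
# 68% of the simulated statistics.
mu_grid = np.linspace(-0.2, 0.8, 101)
accepted = []
for mu in mu_grid:
    sims = rng.normal(mu, 1.0, size=(5000, n)).mean(axis=1)
    lo, hi = np.quantile(sims, [0.16, 0.84])
    if lo <= observed_stat <= hi:
        accepted.append(mu)

print(f"68% Neyman interval for mu: "
      f"[{accepted[0]:.3f}, {accepted[-1]:.3f}]")
```

The accepted grid points form the frequentist confidence interval; in higher dimensions the same scan over a grid yields a contour, which can then be overlaid on the Bayesian contour to check for over- or underestimated likelihoods.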