Data analysis: Estimating the confidence of image processing results directly on the data
One of the most salient problems when using image-processing results to model complex biological systems is that the accuracy and reliability of these results are rarely known. Without knowing the error probability distribution, however, it is impossible to apply model-inference techniques or to perform statistical tests that distinguish between experimental conditions. It is therefore good practice to carefully benchmark image-processing algorithms on real or synthetic data by comparing their output to hand-processed data or known ground truth over a range of signal-to-noise ratios. In this approach, however, there is no guarantee that the data used for benchmarking are representative of the images the algorithm is later applied to. Even worse, benchmarks only provide average accuracies, while in real applications photo-bleaching or algorithm breakdowns cause the accuracy to change over time.
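The conventional benchmarking practice described above can be sketched as follows. This is a minimal illustration with hypothetical stand-ins, not the method of the cited paper: `make_image` synthesizes a test image from a known ground-truth value at a given signal-to-noise ratio, `algorithm` is the image-processing step under test, and the benchmark reports the mean absolute error per SNR level.

```python
def benchmark(algorithm, ground_truth, snr_levels, make_image):
    """Return {snr: mean absolute error} over synthetic test data.

    algorithm, make_image, and ground_truth are hypothetical
    stand-ins: make_image(truth, snr) synthesizes a noisy test
    input; algorithm(img) returns an estimate comparable to truth.
    """
    errors = {}
    for snr in snr_levels:
        errs = [abs(algorithm(make_image(t, snr)) - t)
                for t in ground_truth]
        errors[snr] = sum(errs) / len(errs)
    return errors


# Toy usage: deterministic "noise" of magnitude 10/SNR and an
# identity estimator, so the mean error is exactly 10/SNR.
truths = [10.0, 20.0, 30.0]
make_image = lambda t, snr: t + 10.0 / snr
algorithm = lambda img: img
print(benchmark(algorithm, truths, [2, 10], make_image))
# {2: 5.0, 10: 1.0}
```

Note that this yields only the *average* error per SNR level over the chosen test set, which is exactly the limitation the paragraph above points out: it says nothing about how accuracy evolves frame by frame on real data.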
We have recently extended a specific image-processing framework to provide not only the image-processing results, but also the associated confidence intervals. Since the estimation is done directly on the image data, no prior benchmarking on test images is needed. The new algorithm provides per-frame confidence estimates, enabling statistical evaluation of the results and automatic detection of breakdowns of the image-processing algorithm, which can prompt user intervention. We have demonstrated that the confidence estimates provided by the new algorithm are conservative and accurate to within a few nanometers.
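The breakdown-detection idea can be illustrated with a short sketch. The function name, data layout, and tolerance below are assumptions for illustration only, not the implementation from the cited paper: given the per-frame confidence-interval half-widths reported alongside the tracking result, downstream code can flag frames whose interval is too wide to trust.

```python
def flag_breakdowns(ci_half_widths_nm, tol_nm=5.0):
    """Return the indices of frames whose confidence-interval
    half-width (in nanometers) exceeds tol_nm.

    Wide intervals indicate frames where the image-processing
    result should not be trusted, e.g. after photo-bleaching has
    degraded the signal-to-noise ratio, and can prompt user
    intervention. tol_nm is an arbitrary illustrative threshold.
    """
    return [i for i, w in enumerate(ci_half_widths_nm) if w > tol_nm]


# Example: per-frame half-widths from a hypothetical tracking run;
# frames 3 and 4 would be flagged for inspection.
widths = [1.2, 1.5, 2.0, 8.7, 12.3, 2.1]
print(flag_breakdowns(widths))  # [3, 4]
```

The point of the per-frame estimates is precisely that such a check can run online, frame by frame, rather than relying on an average accuracy established once on benchmark data.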
J. Cardinale, A. Rauch, Y. Barral, G. Székely, and I. F. Sbalzarini. Bayesian image analysis with on-line confidence estimates and its application to microtubule tracking. In Proc. IEEE Intl. Symposium Biomedical Imaging (ISBI), pages 1091–1094, Boston, USA, June 2009.