Denoising x-ray images using neural networks
At every stage of a high-energy-density (HED) experiment, noise enters the resulting x-ray images, complicating inference of the imaged material's properties. Denoising therefore becomes as important as the imaging experiments themselves. Depending on the combination of x-ray source and detector effects, the noise varies in form and amount, so researchers need denoising methods tailored to their data.
Levesque et al. specifically needed a method to remove noise obscuring small-scale fluctuations. By training a neural-network model, they were able to significantly reduce noise in experiments designed to study Rayleigh-Taylor and Richtmyer-Meshkov instabilities.
“We’re able to train our models using a set of natural images broken into smaller patches and corrupted with a simple noise model,” said author Joseph Levesque. “The network architecture combines some existing ideas in the machine learning image reconstruction field, with modifications chosen to better represent the noise contributions of our data.”
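The summary above does not give the team's patch size or noise model, but the training-data construction Levesque describes, natural images split into patches and synthetically corrupted, can be sketched roughly as below. The mixed Poisson-Gaussian noise model and all parameter ranges here are illustrative assumptions, not the authors' choices.

```python
import numpy as np

def extract_patches(image, patch_size=64, stride=64):
    """Split a 2D image into square patches (non-overlapping at this stride)."""
    h, w = image.shape
    patches = []
    for i in range(0, h - patch_size + 1, stride):
        for j in range(0, w - patch_size + 1, stride):
            patches.append(image[i:i + patch_size, j:j + patch_size])
    return np.stack(patches)

def corrupt(patch, rng, gain_range=(0.01, 0.1), sigma_range=(0.005, 0.05)):
    """Corrupt a clean patch with an assumed mixed Poisson-Gaussian noise model.

    The gain and sigma ranges are placeholders; in practice they would be
    chosen to span the noise levels estimated from the experimental data.
    """
    gain = rng.uniform(*gain_range)    # photon-counting (shot) noise scale
    sigma = rng.uniform(*sigma_range)  # detector read-noise level
    shot = rng.poisson(patch / gain) * gain
    return shot + rng.normal(0.0, sigma, size=patch.shape)

# Training pairs: (noisy patch, clean patch), one random noise draw per patch.
rng = np.random.default_rng(0)
clean = rng.uniform(0.0, 1.0, size=(512, 512))  # stand-in for a natural image
pairs = [(corrupt(p, rng), p) for p in extract_patches(clean)]
```

Drawing the noise parameters at random per patch is what lets a single model cover a broad range of noise levels rather than one fixed corruption strength.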
The denoiser is trained to remove a broad range of noise levels estimated from the team's data, and its performance on test images gives confidence in its applicability to experimental images. After training, the model can be applied to any image without additional tuning parameters, and if a system's noise falls outside the range the model was trained on, it is relatively easy to train a new model with the same architecture and methodology.
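Applying the trained model with no per-image tuning amounts to a single forward pass. The tiny residual network below is a placeholder standing in for the trained architecture, which is not described in this summary; only the calling pattern is the point of the sketch.

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Placeholder network; the paper's actual architecture is not given here."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        # Predict the noise residual and subtract it, a common design choice.
        return x - self.net(x)

def denoise(model, image):
    """Apply the trained model to one 2D image; no per-image parameters."""
    model.eval()
    with torch.no_grad():
        x = torch.as_tensor(image, dtype=torch.float32)[None, None]
        return model(x)[0, 0].numpy()

# Usage: once weights are trained (or loaded), any image passes straight through.
model = TinyDenoiser()  # in practice: model.load_state_dict(torch.load(...))
noisy = torch.rand(256, 256).numpy()
restored = denoise(model, noisy)
```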
“In principle, this general model could be coupled with more precise forward noise models to eventually denoise nearly all x-ray-based imaging diagnostics,” said Levesque.
Source: “Neural network denoising of x-ray images from high-energy-density experiments,” by Joseph M. Levesque, Elizabeth C. Merritt, Kirk A. Flippo, Alexander M. Rasmus, and Forrest W. Doss, Review of Scientific Instruments (2024). The article can be accessed at https://doi.org/10.1063/5.0207005.