Deep learning-based methods deliver state-of-the-art performance for solving inverse problems that arise in computational imaging. These methods can be broadly divided into two groups: (1) learning a network that maps measurements to the signal estimate, which is known to be fragile; and (2) learning a prior for the signal that is then used in optimization-based recovery. Despite the impressive results of the latter approach, many such methods also lack robustness to shifts in the data distribution, measurement model, and noise levels. Such domain shifts result in a performance gap and, in some cases, introduce undesired artifacts in the estimated signal. In this paper, we explore the qualitative and quantitative effects of various domain shifts and propose a flexible, parameter-efficient framework that adapts pretrained networks to such shifts. We demonstrate the effectiveness of our method on a number of natural-image, MRI, and CT reconstruction tasks under domain, measurement-model, and noise-level shifts. Our experiments show that our method provides significantly better performance and parameter efficiency than existing domain adaptation techniques.
We performed a number of experiments to analyze the effects of shifts in different parts of the inverse problem. Shifts can occur in the data distribution \( \mathbf{x} \), the forward model \( \mathbf{A} \), and the measurement noise \( \eta \). We start with a fixed base network, which we refer to as Base AR, and learn domain-specific rank-one modulations. Base AR is trained to reconstruct MR images from \( 4\times \) radially sub-sampled Fourier measurements without any measurement noise.
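To make the adaptation setup concrete, the following is a minimal, hypothetical sketch of rank-one modulation applied to one convolutional layer of a frozen base network. It is not the paper's exact construction: the class name RankOneModulatedConv, the elementwise application of the rank-one factor to the frozen kernel, and the identity initialization are illustrative assumptions. Only the per-domain factors u and v are trained, while the pretrained weights stay fixed.

```python
# Sketch (assumptions, not the paper's exact method): the pretrained kernel W is frozen,
# and each new domain learns only a rank-one matrix M = u v^T that scales W elementwise.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RankOneModulatedConv(nn.Module):
    def __init__(self, pretrained_conv: nn.Conv2d):
        super().__init__()
        self.conv = pretrained_conv
        for p in self.conv.parameters():
            p.requires_grad = False  # base network stays fixed
        out_ch, in_ch = self.conv.weight.shape[:2]
        # Domain-specific rank-one factors, initialized so u v^T = 1 (identity modulation).
        self.u = nn.Parameter(torch.ones(out_ch, 1))
        self.v = nn.Parameter(torch.ones(1, in_ch))

    def forward(self, x):
        # Modulate the frozen kernel by u v^T, broadcast over the spatial kernel dims.
        m = (self.u @ self.v).unsqueeze(-1).unsqueeze(-1)  # (out_ch, in_ch, 1, 1)
        w = self.conv.weight * m
        return F.conv2d(x, w, self.conv.bias,
                        stride=self.conv.stride, padding=self.conv.padding)

# Usage: wrap a pretrained layer and train only the rank-one factors on the shifted domain.
layer = RankOneModulatedConv(nn.Conv2d(64, 64, 3, padding=1))
adapt_params = [p for p in layer.parameters() if p.requires_grad]  # only u and v
```

Under these assumptions, each adapted domain adds only \( C_{\text{out}} + C_{\text{in}} \) parameters per layer, which is what makes the per-domain adaptation parameter efficient relative to fine-tuning the full network.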
@misc{yismaw2023domain,
  title         = {Domain Expansion via Network Adaptation for Solving Inverse Problems},
  author        = {Nebiyou Yismaw and Ulugbek S. Kamilov and M. Salman Asif},
  year          = {2023},
  eprint        = {2310.06235},
  archivePrefix = {arXiv},
  primaryClass  = {eess.IV}
}