Background: Generative Adversarial Networks (GANs) can synthesize brain images from image or noise input. So far, the gold standard for assessing the quality of generated images has been human expert ratings. However, human assessment is costly and does not scale, and the human eye is insensitive to subtler statistical relationships, so a more automated approach to evaluating GANs is required.
New method: We investigated to what extent visual quality can be assessed using image quality metrics, and we used group analysis and spatial independent component analysis to verify that the GAN reproduces multivariate statistical relationships found in real data. Reference human data were obtained by recruiting neuroimaging experts to assess real Magnetic Resonance (MR) images and images generated by a Wasserstein GAN. Image quality was manipulated by exporting images at different stages of GAN training.
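As a hedged sketch of the spatial-correlation check described above (not the exact pipeline used in the study), one can compare voxel-to-voxel correlation structure estimated from a set of real images against that of GAN-generated images; all array names and shapes below are illustrative placeholders.

```python
# Minimal sketch: comparing voxelwise spatial correlation structure
# between a set of real and a set of GAN-generated images.
# `real_imgs` and `gen_imgs` are hypothetical arrays of shape (n_samples, n_voxels).
import numpy as np

rng = np.random.default_rng(0)
real_imgs = rng.random((100, 500))   # placeholder for flattened real MR images
gen_imgs = rng.random((100, 500))    # placeholder for flattened generated images

corr_real = np.corrcoef(real_imgs, rowvar=False)  # (n_voxels, n_voxels) correlations in real data
corr_gen = np.corrcoef(gen_imgs, rowvar=False)    # same for generated data

# Agreement between the two correlation structures (upper triangle, excluding the diagonal)
iu = np.triu_indices_from(corr_real, k=1)
agreement = np.corrcoef(corr_real[iu], corr_gen[iu])[0, 1]
print(f"Spatial correlation agreement: {agreement:.3f}")
```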
Results: Experts were sensitive to changes in image quality, as evidenced by ratings and reaction times, and the generated images reproduced group effects (age, gender) and spatial correlations moderately well. We also surveyed a number of image quality metrics, none of which fully reproduced the human ratings. While the metrics Structural Similarity Index Measure (SSIM) and Naturalness Image Quality Evaluator (NIQE) showed good overall agreement with human assessment for lower-quality images (i.e., images from early stages of GAN training), only a Deep Quality Assessment (QA) model trained on human ratings was sensitive to the subtle differences between higher-quality images.
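As a hedged illustration of the distortion-metric comparison mentioned above, the sketch below scores a generated MR slice against a real reference slice with SSIM using scikit-image; the array names and synthetic data are hypothetical placeholders, and NIQE would be computed analogously with a suitable no-reference implementation.

```python
# Minimal sketch: scoring a generated MR slice against a real reference with SSIM.
# `real_slice` and `generated_slice` stand in for 2-D slices of equal shape.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
real_slice = rng.random((181, 217)).astype(np.float32)  # placeholder for a real MR slice
generated_slice = real_slice + 0.05 * rng.standard_normal((181, 217)).astype(np.float32)

score = structural_similarity(
    real_slice,
    generated_slice,
    data_range=float(generated_slice.max() - generated_slice.min()),
)
print(f"SSIM: {score:.3f}")  # higher values indicate closer structural agreement
```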
Conclusions: We recommend a combination of group analyses, spatial correlation analyses, and both distortion metrics (SSIM, NIQE) and perceptual models (Deep QA) for a comprehensive evaluation and comparison of brain images produced by GANs.