Fundus Image Generation using EyeGAN

An improved Generative Adversarial Network model

Authors

  • Preeti Kapoor, Department of Computer Science Engineering, School of Engineering, The NorthCap University, Gurugram, Haryana, India 122017
  • Shaveta Arora, Department of Computer Science Engineering, School of Engineering, The NorthCap University, Gurugram, Haryana, India 122017

DOI:

https://doi.org/10.57159/gadl.jcmm.2.6.230106

Keywords:

Deep Learning, FID, Conditional GAN, StyleGAN

Abstract

Deep learning models are widely used across computer vision tasks, from classification and segmentation to identification, but they are prone to overfitting; diversifying and balancing the training datasets is the primary remedy. Generative Adversarial Networks (GANs) are unsupervised image generators that require no additional annotation, producing realistic images while preserving the fine details of the original data. In this paper, a GAN model is proposed for fundus image generation to overcome the shortage of labelled data faced by researchers in the detection and classification of fundus diseases. The proposed model enriches and balances the studied datasets, thereby improving eye disease detection systems. EyeGAN is a nine-layered architecture based on the conditional GAN; it generates unbiased, good-quality, credible images and outperforms existing GAN models, achieving the lowest Fréchet Inception Distance of 226.3. The public fundus datasets MESSIDOR I and MESSIDOR II are expanded by 1600 and 808 synthetic images, respectively.
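Two technical ingredients named above merit a brief illustration: the conditional GAN on which EyeGAN is built, and the Fréchet Inception Distance (FID) by which it is scored. The sketch below shows a minimal label-conditioned generator in PyTorch; it is a hypothetical stand-in, not the paper's nine-layer EyeGAN, and the latent size, class count, layer widths, and output resolution are all illustrative assumptions.

    import torch
    import torch.nn as nn

    class ConditionalGenerator(nn.Module):
        """Minimal cGAN generator: maps (noise, class label) to an image.
        Hypothetical configuration; EyeGAN's actual layers are not given here."""
        def __init__(self, latent_dim=100, n_classes=5, img_channels=3):
            super().__init__()
            self.label_emb = nn.Embedding(n_classes, latent_dim)
            self.net = nn.Sequential(
                nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),  # 1x1 -> 4x4
                nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),         # 4x4 -> 8x8
                nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),           # 8x8 -> 16x16
                nn.ConvTranspose2d(64, img_channels, 4, 2, 1), nn.Tanh(),                          # 16x16 -> 32x32
            )

        def forward(self, z, labels):
            # Common cGAN conditioning: fuse the class label into the noise via an embedding.
            h = z * self.label_emb(labels)
            return self.net(h.unsqueeze(-1).unsqueeze(-1))

FID compares the Gaussian statistics of Inception-v3 activations for real versus generated images, FID = ||μ_r − μ_g||² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^{1/2}), where lower is better. A standard NumPy/SciPy computation, assuming the activation matrices (e.g. 2048-dimensional Inception pool features) are precomputed:

    import numpy as np
    from scipy.linalg import sqrtm

    def fid(act_real, act_gen):
        """FID between two activation matrices of shape (N, D)."""
        mu_r, mu_g = act_real.mean(axis=0), act_gen.mean(axis=0)
        sigma_r = np.cov(act_real, rowvar=False)
        sigma_g = np.cov(act_gen, rowvar=False)
        covmean = sqrtm(sigma_r @ sigma_g)
        if np.iscomplexobj(covmean):
            covmean = covmean.real  # drop tiny imaginary parts from numerical error
        diff = mu_r - mu_g
        return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))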

Published

31-12-2023

How to Cite

[1] P. Kapoor and S. Arora, “Fundus Image Generation using EyeGAN: An improved Generative Adversarial Network model”, J. Comput. Mech. Manag., vol. 2, no. 6, pp. 9–17, Dec. 2023.

Issue

Vol. 2, No. 6 (2023)

Section

Original Articles

Received 2023-11-14
Accepted 2023-12-04
Published 2023-12-31