Fundus Image Generation using EyeGAN
An improved Generative Adversarial Network model
DOI: https://doi.org/10.57159/gadl.jcmm.2.6.230106
Keywords: Deep Learning, FID, Conditional GAN, StyleGAN
Abstract
Deep learning models are widely used across computer vision tasks ranging from classification and segmentation to identification, but they are prone to overfitting. Diversifying and balancing the training datasets is a primary remedy for this problem. Generative Adversarial Networks (GANs) are unsupervised image generators that require no additional annotation; they produce realistic images while preserving the fine details of the original data. In this paper, a GAN model is proposed for fundus image generation to overcome the shortage of labelled data faced by researchers in the detection and classification of fundus diseases. The proposed model enriches and balances the studied datasets, improving eye-disease detection systems. EyeGAN is a nine-layered architecture based on the conditional GAN; it generates unbiased, good-quality, credible images and outperforms existing GAN models, achieving the lowest Fréchet Inception Distance of 226.3. The public fundus datasets MESSIDOR I and MESSIDOR II are expanded by 1600 and 808 synthetic images, respectively.
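Since the abstract describes EyeGAN as a conditional-GAN variant, a minimal sketch of the conditioning mechanism may help readers unfamiliar with the technique. The PyTorch generator below is illustrative only: the latent size, layer widths, class count, and output resolution are assumptions, not the published nine-layer EyeGAN configuration.

import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Illustrative conditional-GAN generator (not the EyeGAN architecture).

    The class label (e.g., a disease grade) is embedded and concatenated
    with the noise vector, so each generated fundus image is conditioned
    on its label; this conditioning is what lets a conditional GAN
    synthesize images for under-represented classes and balance a dataset.
    """
    def __init__(self, z_dim=100, n_classes=5, img_channels=3):
        super().__init__()
        self.label_emb = nn.Embedding(n_classes, n_classes)
        self.net = nn.Sequential(
            # Project noise + label to a 4x4 feature map, then upsample to 32x32.
            nn.ConvTranspose2d(z_dim + n_classes, 512, 4, 1, 0),
            nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1),
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1),
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, img_channels, 4, 2, 1),
            nn.Tanh(),  # outputs in [-1, 1] for images normalized the same way
        )

    def forward(self, z, labels):
        x = torch.cat([z, self.label_emb(labels)], dim=1)
        return self.net(x.unsqueeze(-1).unsqueeze(-1))

# Usage: generate 8 images for randomly chosen class labels.
# g = ConditionalGenerator()
# fake = g(torch.randn(8, 100), torch.randint(0, 5, (8,)))  # (8, 3, 32, 32)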
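The Fréchet Inception Distance used to score the generated images compares Gaussians fitted to Inception-v3 activations of the real and synthetic sets: FID = ‖μ_r − μ_g‖² + Tr(Σ_r + Σ_g − 2(Σ_rΣ_g)^{1/2}), where lower is better. The following NumPy/SciPy sketch implements this standard formula (it is not the authors' evaluation code) and assumes the activation statistics have already been computed.

import numpy as np
from scipy import linalg

def frechet_inception_distance(mu_r, sigma_r, mu_g, sigma_g):
    """FID between two Gaussians fitted to Inception activations.

    mu_*: mean activation vectors; sigma_*: covariance matrices,
    estimated from the real (r) and generated (g) image sets.
    """
    diff = mu_r - mu_g
    # Matrix square root of the covariance product; numerical error can
    # introduce a tiny imaginary component, so keep only the real part.
    covmean, _ = linalg.sqrtm(sigma_r @ sigma_g, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))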
License
Copyright (c) 2023 Journal of Computers, Mechanical and Management
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
The Journal of Computers, Mechanical and Management applies the Creative Commons Attribution-NonCommercial 4.0 International License to its published articles. While the journal retains copyright ownership of the content, it permits downloading, reusing, reprinting, modifying, distributing, and copying of the articles, provided the original authors and source are properly cited.
Accepted: 2023-12-04
Published: 2023-12-31