GD-StarGAN: Multi-domain image-to-image translation in garment design

PLoS One. 2020 Apr 21;15(4):e0231719. doi: 10.1371/journal.pone.0231719. eCollection 2020.

Abstract

In fashion design, generating a garment image from a texture amounts to reshaping the texture image into the garment's form, a task that image-to-image translation based on Generative Adversarial Networks (GANs) handles well and that can save designers considerable time and effort. GAN-based image-to-image translation has advanced rapidly in recent years. One such model, StarGAN, performs multi-domain image-to-image translation using only a single generator and a single discriminator. This paper applies StarGAN to garment design: the user supplies an image and a garment-type label, and the model generates garment images bearing the texture of the input image. However, the quality of the images generated by the original StarGAN proved unsatisfactory. We therefore modify the structure of the StarGAN generator and its loss function, obtaining a model better suited to garment design, which we call GD-StarGAN. Using a dataset of seven garment categories, we demonstrate that GD-StarGAN substantially outperforms StarGAN for garment design, especially in texture quality.
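To make the multi-domain setup concrete: StarGAN conditions its single generator on the target domain by replicating the one-hot domain label spatially and concatenating it to the input image's channels. The sketch below illustrates only that conditioning step (not the full generator); the function name and array shapes are illustrative, and a channels-first layout with seven garment categories, as in the paper's dataset, is assumed.

```python
import numpy as np

def condition_input(image, label, num_domains):
    """Concatenate a spatially replicated one-hot domain label to the
    image channels, yielding a (C + num_domains, H, W) generator input
    in the style of StarGAN's label conditioning."""
    c, h, w = image.shape
    # One plane per domain; the target domain's plane is all ones.
    onehot = np.zeros((num_domains, h, w), dtype=image.dtype)
    onehot[label] = 1.0
    return np.concatenate([image, onehot], axis=0)

# Example: a 3-channel 4x4 texture patch, target garment type 2 of 7.
img = np.random.rand(3, 4, 4).astype(np.float32)
x = condition_input(img, label=2, num_domains=7)
# x has 3 + 7 = 10 channels; only the plane for domain 2 is set to 1.
```

Because one generator sees every (image, label) pair, a single network learns all seven garment-type translations instead of requiring a separate model per domain pair.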

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Clothing*
  • Humans
  • Image Processing, Computer-Assisted / methods*
  • Neural Networks, Computer

Grants and funding

This work was supported in part by the Social Sciences and Humanities of the Ministry of Education of China under Grant 18YJC88002, in part by the Guangdong Provincial Key Platform and Major Scientific Research Projects Featured Innovation Projects under Grant 2017GXJK136, and in part by the Guangzhou Innovation and Entrepreneurship Education Project under Grant 201709P14.