IconDM: Text-Guided Icon Set Expansion Using Diffusion Models

  • Jiawei Lin
  • Zhaoyun Jiang
  • Jiaqi Guo
  • Shizhao Sun
  • Ting Liu
  • Zijiang Yang
  • Jian-Guang Lou
  • Dongmei Zhang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Scopus citations

Abstract

Icons are ubiquitous visual elements in graphic design, yet their creation is often complex and time-consuming. To address this problem, we draw inspiration from the booming text-to-image field and propose Text-Guided Icon Set Expansion, a novel task that helps users design high-quality icons from textual descriptions. In addition, users can control the style consistency of the generated icons by providing a few hand-crafted icons as style references. Despite its practicality, the task poses two unique challenges. (i) Abstract Concept Visualization. Abstract concepts such as technology and health are frequently encountered in icon creation, but their visualization is not straightforward and requires a grounding process that translates them into physical, easy-to-depict objects. (ii) Fine-Grained Style Transfer. Unlike ordinary images, icons exhibit richer fine-grained stylistic elements, including tones, line widths, shapes, and shadow effects, which places higher demands on capturing and preserving detailed styles during icon generation. To address these challenges, we propose IconDM, a method based on pre-trained text-to-image (T2I) diffusion models. Our approach comprises a one-time domain adaptation process and an online style transfer process. In domain adaptation, we enhance the existing T2I model's ability to understand abstract concepts by fine-tuning it on high-quality icon-text pairs. To this end, we construct IconBank, a large-scale dataset of 2.3 million icon samples, and leverage a state-of-the-art vision-language model to generate a textual description for each icon. In style transfer, we introduce a Style Enhancement Module into the T2I model. It explicitly extracts fine-grained style features from the given reference icons and is jointly optimized with the T2I model during DreamBooth tuning. To assess IconDM, we present IconBench, a structured evaluation suite with 30 icon sets and 100 concepts (including 50 abstract concepts).
Quantitative results, qualitative analysis, and extensive ablation studies demonstrate the effectiveness of IconDM.
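The abstract does not give IconDM's internals, but the method rests on the standard denoising-diffusion mechanism of the underlying T2I model: a forward process that mixes a clean sample with Gaussian noise according to a schedule, and a learned reverse process that undoes it. The sketch below is a minimal, pure-Python illustration of that general mechanism (using the common cosine schedule of Nichol & Dhariwal as an assumed choice); the function names are illustrative and are not taken from the paper.

```python
import math

def cosine_alpha_bar(t: float) -> float:
    """Cumulative signal-retention schedule alpha_bar(t) for t in [0, 1].

    This is the cosine noise schedule, a common choice in diffusion
    models; alpha_bar(0) is close to 1 (almost no noise) and it
    decreases toward 0 as t approaches 1 (almost pure noise).
    """
    return math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2

def forward_diffuse(x0: float, t: float, eps: float) -> float:
    """Sample x_t ~ q(x_t | x_0) for one scalar dimension.

    The clean signal is scaled by sqrt(alpha_bar) and Gaussian noise
    eps is added, scaled by sqrt(1 - alpha_bar).
    """
    ab = cosine_alpha_bar(t)
    return math.sqrt(ab) * x0 + math.sqrt(1.0 - ab) * eps

def predict_x0(xt: float, t: float, eps_pred: float) -> float:
    """Invert the forward process given a noise prediction.

    In a trained model eps_pred comes from the denoising network;
    here an oracle value makes the inversion exact:
    x_0 = (x_t - sqrt(1 - alpha_bar) * eps) / sqrt(alpha_bar).
    """
    ab = cosine_alpha_bar(t)
    return (xt - math.sqrt(1.0 - ab) * eps_pred) / math.sqrt(ab)
```

In a T2I diffusion model, fine-tuning procedures such as DreamBooth optimize the noise-prediction network on the target data (here, reference icons) so that `predict_x0`-style reconstructions stay faithful to the desired concept and style.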

Original language: English
Title of host publication: MM 2024 - Proceedings of the 32nd ACM International Conference on Multimedia
Publisher: Association for Computing Machinery, Inc
Pages: 156-165
Number of pages: 10
ISBN (Electronic): 9798400706868
DOIs
State: Published - 28 Oct 2024
Event: 32nd ACM International Conference on Multimedia, MM 2024 - Melbourne, Australia
Duration: 28 Oct 2024 → 1 Nov 2024

Publication series

Name: MM 2024 - Proceedings of the 32nd ACM International Conference on Multimedia

Conference

Conference: 32nd ACM International Conference on Multimedia, MM 2024
Country/Territory: Australia
City: Melbourne
Period: 28/10/24 → 1/11/24

Keywords

  • denoising diffusion models
  • icon generation
  • style transfer
  • text-to-image

