TY - GEN
T1 - IconDM
T2 - 32nd ACM International Conference on Multimedia, MM 2024
AU - Lin, Jiawei
AU - Jiang, Zhaoyun
AU - Guo, Jiaqi
AU - Sun, Shizhao
AU - Liu, Ting
AU - Yang, Zijiang
AU - Lou, Jian Guang
AU - Zhang, Dongmei
N1 - Publisher Copyright:
© 2024 ACM.
PY - 2024/10/28
Y1 - 2024/10/28
N2 - Icons are ubiquitous visual elements in graphic design, yet their creation is often complex and time-consuming. To resolve this problem, we draw inspiration from the booming text-to-image field and propose Text-Guided Icon Set Expansion, a novel task that helps users design high-quality icons using textual descriptions. Besides, users can control the style consistency of the created icons by inputting a few hand-crafted icons as style reference. Despite its practicality, the task poses two unique challenges. (i) Abstract Concept Visualization. Abstract concepts like technology and health are frequently encountered in icon creation, but their visualization is not straightforward and requires a grounding process that translates them into physical, easy-to-depict objects. (ii) Fine-grained Style Transfer. Unlike ordinary images, icons exhibit richer fine-grained stylistic elements, including tones, line widths, shapes, shadow effects, etc., which puts higher demands on capturing and preserving detailed styles during icon generation. To address the challenges, we propose IconDM, a method based on pre-trained text-to-image (T2I) diffusion models. Our approach incorporates a one-time domain adaptation process and an online style transfer process. In domain adaptation, we enhance the existing T2I model's capability to understand abstract concepts by fine-tuning it on high-quality icon-text pairs. To achieve this, we construct a large-scale dataset IconBank containing 2.3 million icon samples, and leverage a state-of-the-art vision-language model to generate textual descriptions for each icon. In style transfer, we introduce a Style Enhancement Module into the T2I model. It explicitly extracts the fine-grained style features from the given reference icons and is jointly optimized with the T2I model during DreamBooth tuning. To assess IconDM, we present IconBench, a structured evaluation suite with 30 icon sets and 100 concepts (including 50 abstract concepts). Quantitative results, qualitative analysis, and extensive ablation studies demonstrate the effectiveness of IconDM.
KW - denoising diffusion models
KW - icon generation
KW - style transfer
KW - text-to-image
UR - https://www.scopus.com/pages/publications/85204413666
U2 - 10.1145/3664647.3681057
DO - 10.1145/3664647.3681057
M3 - Conference contribution
AN - SCOPUS:85204413666
T3 - MM 2024 - Proceedings of the 32nd ACM International Conference on Multimedia
SP - 156
EP - 165
BT - MM 2024 - Proceedings of the 32nd ACM International Conference on Multimedia
PB - Association for Computing Machinery, Inc
Y2 - 28 October 2024 through 1 November 2024
ER -