Poster
IGD: Instructional Graphic Design with Multimodal Layer Generation
Yadong Qu · Shancheng Fang · Yuxin Wang · Xiaorui Wang · Zhineng Chen · Hongtao Xie · Yongdong Zhang
Graphic design visually conveys information by creating and combining text, images, and graphics. Two-stage methods that rely primarily on layout generation lack creativity and intelligence, so graphic design remains labor-intensive. Existing diffusion-based methods generate non-editable design files at the image level, with poorly legible rendered text, which prevents them from achieving satisfactory, practical automated graphic design. In this paper, we propose Instructional Graphic Designer (IGD), which swiftly generates editable multimodal layers from natural language instructions alone. IGD adopts a new paradigm that combines parametric rendering with image asset generation. First, we develop a design platform and establish a standardized format for multi-scenario design files, laying the foundation for scaling up data. Second, IGD leverages the multimodal understanding and reasoning capabilities of an MLLM to predict layer attributes, ordering, and layout, and employs a diffusion model to generate image content for the assets. By enabling end-to-end training, IGD architecturally supports scalability and extensibility for complex graphic design tasks. Notably, IGD is the first method to combine creativity with the ability to generate editable multimodal layers. Superior experimental results demonstrate that IGD offers a new solution for graphic design.
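To make the layer-based paradigm concrete, here is a minimal Python sketch of what an editable multimodal design file might look like under this approach: an MLLM would populate the layer attributes (type, position, stacking order), a diffusion model would fill image layers from their content prompts, and a parametric renderer composes the result. All class and field names are hypothetical illustrations, not IGD's actual format or API.

```python
from dataclasses import dataclass, field
from typing import Literal, Optional

# Hypothetical layer schema: a design file is an ordered list of layers,
# mirroring the editable multimodal-layer output described in the abstract.
# Field names are illustrative only.

@dataclass
class Layer:
    kind: Literal["text", "image", "graphic"]
    x: int
    y: int
    width: int
    height: int
    z_order: int                    # stacking order, as predicted by the MLLM
    content: Optional[str] = None   # text string, or a prompt for the diffusion model
    font: Optional[str] = None
    color: str = "#000000"

@dataclass
class DesignFile:
    canvas_w: int
    canvas_h: int
    layers: list[Layer] = field(default_factory=list)

def render(design: DesignFile) -> list[str]:
    """Parametric rendering stub: emit one draw command per layer in z-order.

    A real renderer would rasterize text with the chosen font and paste
    diffusion-generated image assets; here we only serialize draw commands
    to show how editability follows from keeping layers parametric.
    """
    cmds = []
    for layer in sorted(design.layers, key=lambda l: l.z_order):
        cmds.append(
            f"{layer.kind}@({layer.x},{layer.y},{layer.width}x{layer.height}) "
            f"content={layer.content!r} color={layer.color}"
        )
    return cmds

if __name__ == "__main__":
    poster = DesignFile(
        canvas_w=1080,
        canvas_h=1920,
        layers=[
            Layer("image", 0, 0, 1080, 1920, z_order=0,
                  content="watercolor background, soft blue"),  # diffusion prompt
            Layer("text", 120, 200, 840, 160, z_order=1,
                  content="Summer Sale", font="Inter-Bold", color="#FFFFFF"),
        ],
    )
    for cmd in render(poster):
        print(cmd)
```

Because the output stays parametric rather than rasterized, any layer can be repositioned, restyled, or regenerated after the fact, which is the editability the abstract emphasizes over image-level diffusion methods.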