🎨LineArt:
A Knowledge-guided Training-free High-quality Appearance Transfer for Design Drawing with Diffusion Model

CVPR 2025

Xi Wang1,   Hongzhen Li1,   Heng Fang2,   Yichen Peng3,   Haoran Xie4,   Xi Yang*1,   Chuntao Li1

1Jilin University     2KTH Royal Institute of Technology     3Institute of Science Tokyo     4Japan Advanced Institute of Science and Technology (JAIST)
Teaser Image

Our work uses design line drawings and reference appearance images to generate corresponding results.

Abstract

Image rendering from line drawings is vital in design, and image generation technologies can reduce its cost; however, professional line drawings demand that complex details be preserved. Text prompts struggle with accuracy, and image-to-image translation struggles with consistency and fine-grained control. We present LineArt, a framework that transfers complex appearance onto detailed design drawings, facilitating design and artistic creation. It generates high-fidelity appearance while preserving structural accuracy by simulating hierarchical visual cognition and integrating human artistic experience to guide the diffusion process. LineArt overcomes the limitations of current methods, namely the difficulty of fine-grained control and style degradation on design drawings. It requires no precise 3D modeling, physical property specifications, or network training, making it convenient for design tasks. LineArt consists of two stages: a multi-frequency line fusion module that supplements the input design drawing with detailed structural information, and a two-part painting process of Base Layer Shaping and Surface Layer Coloring. We also present ProLines, a new design drawing dataset for evaluation. Experiments show that LineArt outperforms state-of-the-art methods in accuracy, realism, and material precision.

Method


Method Overview

Our workflow: The process begins with a design drawing L_original and an appearance image I_appearance. A depth-based ControlNet estimates depth and generates soft edges to guide the synthesis. (a) The Multi-frequency Line Fusion module employs assertion-guided techniques to enhance structural detail control. (b) Base Layer Shaping decomposes the illumination of the appearance image with a multi-scale Retinex approach, generating Retinex illumination layers L_retinex to balance brightness. (c) Surface Layer Coloring refines the output by using layout and style blocks in a U-Net with cross-attention for accurate material embedding.
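As a rough illustration of the multi-scale Retinex idea used in Base Layer Shaping, the sketch below decomposes a single-channel image into an illumination estimate. This is not the paper's implementation; the scales and the uniform averaging are conventional multi-scale Retinex choices, assumed here for demonstration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(image, sigmas=(15, 80, 250)):
    """Multi-scale Retinex decomposition of a grayscale image.

    For each scale, the reflectance component is estimated as
    log(image) - log(Gaussian-blurred image); the blurred image
    serves as the illumination estimate at that scale. The
    per-scale results are averaged uniformly.
    """
    img = np.asarray(image, dtype=np.float64) + 1.0  # avoid log(0)
    result = np.zeros_like(img)
    for sigma in sigmas:
        illumination = gaussian_filter(img, sigma=sigma)
        result += np.log(img) - np.log(illumination)
    return result / len(sigmas)
```

A uniformly lit image yields an all-zero response, since each Gaussian-blurred illumination estimate equals the image itself; spatial variation in lighting shows up as nonzero structure in the output.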



ProLines Dataset



Construction of the ProLines dataset: (b) shows an overview of the data after initial screening based on image complexity and manual removal of noisy data. (c) shows preprocessing of the selected data, including automatic mask generation and three rounds of manual verification. After steps (b) and (c), we obtained 5,101 high-quality line drawings.
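The paper's exact complexity metric for the initial screening is not described here; one plausible proxy for line drawings, sketched below with hypothetical thresholds, is stroke (ink) density — the fraction of dark pixels on a white background. Drawings that are nearly blank or almost fully inked would be filtered out.

```python
import numpy as np

def stroke_density(line_drawing, ink_threshold=128):
    """Fraction of pixels darker than the threshold in a grayscale
    line drawing (white background, dark strokes)."""
    img = np.asarray(line_drawing)
    return float((img < ink_threshold).mean())

def is_complex_enough(line_drawing, min_density=0.02, max_density=0.5):
    """Hypothetical screening rule: keep drawings whose stroke
    density falls within a plausible range."""
    density = stroke_density(line_drawing)
    return min_density <= density <= max_density
```

Density bounds like these would typically be tuned per dataset; the values here are placeholders, not the paper's settings.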


Result Gallery


Given design line drawings and reference appearance images, LineArt generates the results shown below alongside their reference appearance images.

Paper


LineArt: A Knowledge-guided Training-free High-quality Appearance Transfer for Design Drawing with Diffusion Model

Xi Wang, Hongzhen Li, Heng Fang, Yichen Peng, Haoran Xie, Xi Yang and Chuntao Li

@misc{wang2024lineart,
  title={LineArt: A Knowledge-guided Training-free High-quality Appearance Transfer for Design Drawing with Diffusion Model},
  author={Xi Wang and Hongzhen Li and Heng Fang and Yichen Peng and Haoran Xie and Xi Yang and Chuntao Li},
  year={2024},
  eprint={2412.11519},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2412.11519},
}