
What is ControlNet?

A diffusion model add-on that conditions generation on a structural input — pose, depth map, edge map, or layout — alongside the text prompt. Enables precise control over AI output.

In detail

ControlNet attaches a trainable copy of the diffusion model's encoder, giving the model a second conditioning input. Alongside the text prompt, it receives a structural reference (pose skeleton, depth map, Canny edges, segmentation mask, scribble) that constrains the spatial layout of the output. Common ControlNet types: Canny (edge-following), Depth (3D structure), OpenPose (human pose), Tile (preserve fine detail), Scribble (rough sketch guidance). Textile applications include using ControlNet Tile to preserve user-uploaded sketch structure while applying AI style, ControlNet Canny to keep motif outlines stable across colorway variations, and ControlNet Depth for 3D fabric drape rendering.
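
A minimal sketch of how a structural reference is typically prepared, here a Canny edge map extracted from a scanned sketch. The file names and thresholds are illustrative assumptions, not part of any specific product workflow; it assumes OpenCV, NumPy, and Pillow are installed.

import cv2
import numpy as np
from PIL import Image

# Load the scanned sketch in grayscale (hypothetical file name).
sketch = cv2.imread("scanned_sketch.png", cv2.IMREAD_GRAYSCALE)

# Extract edges; these constrain the spatial layout during generation.
edges = cv2.Canny(sketch, 100, 200)

# ControlNet conditioning images are usually 3-channel, so replicate the edge map.
edges = np.stack([edges] * 3, axis=-1)
control_image = Image.fromarray(edges)
control_image.save("control_canny.png")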

Example

A designer sketches a rough floral on paper, scans it, and runs it through ControlNet Canny + SDXL with the prompt 'watercolor peony, dusty pink and sage'. The output preserves the exact composition of the sketch (via the Canny edges) but renders it in detailed watercolor — much more controllable than text-only generation. A hedged code sketch of this workflow follows.
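
One way this example could be run with Hugging Face diffusers, assuming the Canny control image from the earlier sketch. The model IDs, conditioning scale, and step count are illustrative assumptions, not prescribed settings.

import torch
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# Load a Canny-conditioned ControlNet and pair it with an SDXL base pipeline.
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Edge map derived from the scanned sketch (see the earlier preparation step).
control_image = load_image("control_canny.png")

result = pipe(
    prompt="watercolor peony, dusty pink and sage",
    image=control_image,                  # structural reference
    controlnet_conditioning_scale=0.8,    # how strongly the edges constrain layout
    num_inference_steps=30,
).images[0]
result.save("watercolor_peony.png")

Lowering controlnet_conditioning_scale loosens the structural constraint, letting the prompt reinterpret the sketch more freely; raising it keeps the composition closer to the original edges.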
