What is a diffusion model?

A class of generative AI models that produce images by iteratively denoising random Gaussian noise into coherent imagery. As of 2026, the dominant architecture for AI image generation, including textile pattern AI.

In detail

Diffusion models work by learning to reverse a noise-addition process. During training, the model sees pairs of a clean image and the same image with noise added, and learns to predict the added noise so it can be subtracted. At inference time, the model starts from pure random noise and iteratively subtracts predicted noise over 20-50 steps, gradually converging on a coherent image. Text prompts condition this process via cross-attention layers that connect text embeddings to image features.

Modern diffusion models include Stable Diffusion XL (open-source, 2.6B parameters), FLUX (Black Forest Labs, 12B parameters, 2024), Imagen (Google), and DALL-E 3 (OpenAI, accessed via API only).

Diffusion models do not natively produce seamless tiles; that constraint requires post-processing (offset-and-inpaint) or specialized tile-aware sampling (MultiDiffusion).
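The training objective and the denoising loop can be made concrete in a few lines. The sketch below assumes a trained noise-prediction network `eps_model(x_t, t, text_emb)` and an illustrative linear noise schedule; it is a minimal DDIM-style sampler, not SDXL's actual code, and all names are placeholders.

```python
import torch

# Illustrative linear beta schedule over T training timesteps
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

def training_loss(eps_model, x0, text_emb):
    """Training: noise a clean image, then train the model to predict the added noise."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    ab = alpha_bars[t].view(-1, 1, 1, 1)
    x_t = torch.sqrt(ab) * x0 + torch.sqrt(1 - ab) * eps       # forward noising step
    return torch.mean((eps_model(x_t, t, text_emb) - eps) ** 2)

@torch.no_grad()
def sample(eps_model, text_emb, shape=(1, 4, 128, 128), num_steps=30):
    """Inference: start from pure noise and iteratively remove predicted noise (DDIM-style)."""
    x = torch.randn(shape)                                     # pure Gaussian noise
    ts = torch.linspace(T - 1, 0, num_steps + 1).long()        # subsequence of timesteps
    for t, t_prev in zip(ts[:-1], ts[1:]):
        # The text embedding conditions the prediction via cross-attention inside the model
        eps = eps_model(x, t, text_emb)
        ab_t, ab_prev = alpha_bars[t], alpha_bars[t_prev]
        x0 = (x - torch.sqrt(1 - ab_t) * eps) / torch.sqrt(ab_t)       # estimate of the clean image
        x = torch.sqrt(ab_prev) * x0 + torch.sqrt(1 - ab_prev) * eps   # step to a less-noisy timestep
    return x
```

Running `sample` with 30 steps mirrors the 20-50 step range described above: each iteration refines the clean-image estimate, so structure emerges early and fine detail late.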

Example

Texloom's AI Pattern Generator uses Stable Diffusion XL with a textile-tuned prompt. A user types 'small-scale watercolor floral, soft pink and sage'; SDXL starts from random noise and converges over 30 denoising steps to a coherent floral pattern matching the prompt. The output is then healed with offset-and-inpaint to make it seamlessly tileable.
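A sketch of that offset-and-inpaint healing step: roll the tile by half its width and height so the edges meet in a cross-shaped seam at the center, inpaint a band around the seam, then roll back. The `heal_seams` helper, the band width, and the inpainting checkpoint are illustrative assumptions, not Texloom's actual pipeline; `AutoPipelineForInpainting` is the Hugging Face diffusers entry point.

```python
import numpy as np
from PIL import Image
from diffusers import AutoPipelineForInpainting

def heal_seams(tile: Image.Image, prompt: str, band: int = 32) -> Image.Image:
    """Offset-and-inpaint: wrap the tile edges to the center and repaint the seam."""
    arr = np.array(tile)                         # assumes a 1024x1024 RGB tile
    h, w = arr.shape[:2]
    shifted = np.roll(arr, shift=(h // 2, w // 2), axis=(0, 1))  # seams now cross at the center

    # Cross-shaped mask (white = regenerate) covering a band around each seam
    mask = np.zeros((h, w), dtype=np.uint8)
    mask[h // 2 - band : h // 2 + band, :] = 255
    mask[:, w // 2 - band : w // 2 + band] = 255

    pipe = AutoPipelineForInpainting.from_pretrained(
        "diffusers/stable-diffusion-xl-1.0-inpainting-0.1"   # assumed checkpoint; substitute your own
    )
    healed = pipe(prompt=prompt,
                  image=Image.fromarray(shifted),
                  mask_image=Image.fromarray(mask)).images[0]

    # Undo the offset; the former border pixels are now mutually consistent
    return Image.fromarray(np.roll(np.array(healed), shift=(-(h // 2), -(w // 2)), axis=(0, 1)))
```

Because the pixels that end up on the tile borders are never repainted, opposite edges stay identical and the healed tile wraps cleanly in both directions.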
