What is MultiDiffusion?
Also known as: Tiled Diffusion
A diffusion-model technique that constrains sampling so the output is natively, seamlessly tileable, with no offset-and-inpaint post-processing. It is the architectural answer to generating seamless patterns directly rather than repairing seams afterward.
In detail
MultiDiffusion (Bar-Tal et al., ICML 2023) modifies the diffusion process so the model generates with awareness of the seamless constraint. At each denoising step, the U-Net's predictions are computed over multiple overlapping tile windows, and the predictions in the overlap regions are averaged, which forces continuity across window boundaries; for tileable output the windows also wrap around the image borders, so pixels on one edge are always denoised together with pixels on the opposite edge. The result: every pixel in the final tile was sampled with knowledge that the edges must match. Compared to standard offset-and-inpaint (healing the seam after generation), MultiDiffusion produces more coherent results because the model never makes a non-seamless tile in the first place. Trade-offs: it requires self-hosting (no SaaS API as of 2026), runs 2-4× slower than standard generation, and supports only a limited set of model architectures.
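A minimal sketch of that fusion step, assuming a stand-in denoise_fn in place of the real U-Net; the function name, window, and stride values are illustrative, not the paper's API. The point it shows is the averaging of overlapping, wrap-around window predictions.

import numpy as np

def multidiffusion_step(latent, denoise_fn, window=64, stride=32):
    """One fused denoising step over overlapping, wrap-around windows.

    latent     : (C, H, W) array of the current noisy latents.
    denoise_fn : stand-in for the U-Net's per-window prediction (hypothetical;
                 a real pipeline would call the actual model here).
    window, stride : illustrative values; window must not exceed H or W.
    """
    C, H, W = latent.shape
    fused = np.zeros_like(latent)
    counts = np.zeros((1, H, W))

    for top in range(0, H, stride):
        for left in range(0, W, stride):
            rows = np.arange(top, top + window) % H    # wrap vertically
            cols = np.arange(left, left + window) % W  # wrap horizontally
            crop = latent[:, rows[:, None], cols[None, :]]
            pred = denoise_fn(crop)                    # per-window prediction
            fused[:, rows[:, None], cols[None, :]] += pred
            counts[:, rows[:, None], cols[None, :]] += 1.0

    # Averaging the overlapping predictions is the fusion step: every pixel,
    # including edge pixels, is reconciled with its neighbors on both sides.
    return fused / counts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    latent = rng.standard_normal((4, 128, 128))
    fake_unet = lambda x: 0.9 * x      # placeholder for the real noise predictor
    latent = multidiffusion_step(latent, fake_unet)

In a real pipeline this fused prediction would feed the scheduler's update at every step, so the constraint is enforced throughout sampling, not patched on at the end.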
Example
A standard SDXL generation produces a beautiful floral pattern with random non-matching edges; the offset-and-inpaint post-process heals them, but the heal band is sometimes visible. The same prompt via MultiDiffusion produces a tile where the entire floral pattern was generated with seamless awareness: every flower's stem, every leaf, and the background gradient all match across the edges with no post-processing.
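To make the "edges match" claim concrete, a rough, hypothetical check (not part of MultiDiffusion or any specific tool) is to roll the saved tile by half its size so the original borders meet in the middle, then measure the pixel jump across that new seam.

import numpy as np
from PIL import Image

def seam_discontinuity(path):
    """Mean absolute pixel jump across the relocated seams of a rolled tile.

    Hypothetical helper: rolling by half the image size moves the original
    edges to the center, where any mismatch shows up as a sharp jump.
    """
    img = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    rolled = np.roll(img, (img.shape[0] // 2, img.shape[1] // 2), axis=(0, 1))
    h, w = rolled.shape[:2]
    horiz = np.abs(rolled[h // 2] - rolled[h // 2 - 1]).mean()      # across horizontal seam
    vert = np.abs(rolled[:, w // 2] - rolled[:, w // 2 - 1]).mean() # across vertical seam
    return horiz, vert  # values near zero suggest the tile wraps cleanly

A MultiDiffusion tile should score close to the image's ordinary neighbor-pixel difference, while a standard generation typically shows a visibly larger jump at the relocated edges.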