What is LoRA?
Also known as: Low-Rank Adaptation
Low-Rank Adaptation — a fine-tuning technique that adds small trainable matrices to a frozen base diffusion model, letting users customize Stable Diffusion or FLUX for a specific aesthetic with minimal compute.
In detail
LoRA is the dominant fine-tuning approach for diffusion models in 2026. Instead of retraining the full 2.6B-12B parameter base model, LoRA freezes the base weights and learns a pair of small low-rank matrices whose product approximates the weight update for selected layers, so the adapter file is typically only 1-100 MB. Users can train a LoRA on 20-50 example images of a target aesthetic (Toile, Liberty florals, Marimekko geometrics, batik, ikat) and then apply it at inference time alongside the base model; multiple LoRAs can be stacked by summing their weighted updates. The textile AI ecosystem has dozens of style-specific LoRAs available; Texloom uses curated LoRAs for specific textile traditions when users select corresponding style presets.
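The mechanics above can be sketched in a few lines of numpy. This is a toy illustration, not any library's actual implementation: the matrix sizes, rank, and scaling convention (alpha / r, as in the original LoRA paper) are illustrative, and a real diffusion model applies such adapters to many attention and MLP weight matrices at once.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight of one projection layer (toy size; real diffusion
# models apply LoRA across many attention/MLP weight matrices).
d_out, d_in = 64, 64
W = rng.normal(size=(d_out, d_in))

# One LoRA adapter: two low-rank factors with rank r << min(d_out, d_in).
# Only A and B are trained; W stays frozen.
r, alpha = 4, 8
A = rng.normal(size=(r, d_in)) * 0.01   # down-projection
B = np.zeros((d_out, r))                # up-projection, zero-init so the update starts at 0

def lora_forward(x, adapters, weights):
    """Apply the frozen weight plus a weighted sum of stacked LoRA updates."""
    delta = sum(w * (alpha / r) * (b @ a) for (a, b), w in zip(adapters, weights))
    return x @ (W + delta).T

x = rng.normal(size=(1, d_in))

# With B zero-initialized, an untrained adapter is a no-op:
assert np.allclose(lora_forward(x, [(A, B)], [1.0]), x @ W.T)

# The adapter stores r*(d_in + d_out) values instead of d_in*d_out,
# which is why LoRA files are small relative to the base model.
print(W.size, A.size + B.size)  # 4096 vs. 512 at rank 4
```

Stacking several LoRAs at inference time is just the same sum over more (A, B) pairs, each with its own blend weight.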
Example
A Toile-de-Jouy LoRA trained on 47 images of vintage French toile fabrics. Applied to SDXL with the prompt 'pastoral scene with shepherds and trees, monochrome blue, fine line engraving', the LoRA biases SDXL output toward authentic toile aesthetics — fine line work, monochrome ink-print quality, period-appropriate motifs. Without the LoRA, the same prompt produces generic illustrations.