From 12fa9509f644ca1bb84c5869fe9f54ed1447cafc Mon Sep 17 00:00:00 2001
From: Xavier
Date: Tue, 20 Sep 2022 21:55:16 -0700
Subject: Update README.md

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index 7e8f936..0b6a842 100644
--- a/README.md
+++ b/README.md
@@ -10,6 +10,8 @@ The implementation makes minimum changes over the official codebase of Textual I
 ## Usage
 
 ### Preparation
+First set up the ```ldm``` environment following the instructions from the textual inversion repo, or the original Stable Diffusion repo.
+
 To fine-tune a stable diffusion model, you need to obtain the pre-trained stable diffusion models following their [instructions](https://github.com/CompVis/stable-diffusion#stable-diffusion-v1). Weights can be downloaded on [HuggingFace](https://huggingface.co/CompVis). You can decide which version of checkpoint to use, but I use ```sd-v1-4-full-ema.ckpt```.
 
 We also need to create a set of images for regularization, as the fine-tuning algorithm of Dreambooth requires it. Details of the algorithm can be found in the paper. Note that in the original paper, the regularization images seem to be generated on-the-fly. However, here I generated a set of regularization images before the training. The text prompt for generating regularization images can be ```photo of a <class>```, where ```<class>``` is a word that describes the class of your object, such as ```dog```. The command is
--
cgit v1.2.3
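
The preparation steps this commit describes can be sketched as shell commands. This is a minimal sketch under assumptions: ```environment.yaml``` is the env file name shipped with the textual inversion / Stable Diffusion repos, the checkpoint filename is the one named in the README, and ```dog``` stands in for whatever class word fits your object. The environment creation and the HuggingFace download are left as comments, since both require external resources.

```shell
# 1. Create and activate the conda environment (env file name assumed from
#    the textual inversion / Stable Diffusion repos):
# conda env create -f environment.yaml
# conda activate ldm

# 2. The pre-trained checkpoint is downloaded manually from HuggingFace,
#    per the README; this is the filename the author uses:
CKPT="sd-v1-4-full-ema.ckpt"

# 3. Build the text prompt for generating the regularization image set,
#    where the class word describes your object:
CLASS="dog"                    # hypothetical class word for illustration
PROMPT="photo of a ${CLASS}"
echo "${PROMPT}"
```

The regularization prompt deliberately names only the class, not the specific subject, so the generated images capture the class prior that Dreambooth's fine-tuning regularizes against.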