<aside> 😟 Full disclosure! I am not an expert in training and fine-tuning models at all. I've trained fewer than 10 LoRAs in my spare time, and done a handful of fine-tunes. Training is 100% something you need to get a feel for yourself. To really understand how it works, you need to experiment, experiment, experiment. Each concept/subject is different and responds differently to settings, and it's difficult to know in advance which settings play significant roles during training. What I'm trying to say is: don't take my word for it, do your own research too!

</aside>


Now that we have an amazing dataset and know exactly what we are training (a style, a subject, etc.), we can move on to the final step: actually training a model.

This is done using a process called Dreambooth, introduced by researchers at Google, which has the advantage of being both versatile and efficient.

<aside> 💡 Whether it's a LoRA or a full fine-tune, the process is the same: you're still using Dreambooth and the settings are identical, but the output is different! A fine-tune spits out an entire new model with adjusted weights, whereas a LoRA spits out a set of small additional matrices that you apply on top of an existing model (see the sketch just below this aside).

</aside>
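To make the LoRA half of that concrete, here is a minimal PyTorch sketch of what "additional matrices" means. The class name `LoRALinear` and the `rank` and `alpha` values are illustrative assumptions, not any particular library's API:

```python
import torch

class LoRALinear(torch.nn.Module):
    """Wraps a frozen pre-trained Linear layer with a trainable low-rank update."""

    def __init__(self, base: torch.nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # the original weights never change
            p.requires_grad_(False)
        # Two small matrices whose product has the same shape as base.weight.
        # B starts at zero, so the wrapped layer initially behaves exactly like base.
        self.A = torch.nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = torch.nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Original output plus the low-rank correction: W x + scale * (B A) x
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```

A full fine-tune would instead rewrite `base.weight` itself across the whole model, which is why it produces a complete new checkpoint, while a LoRA file stays tiny: only `A` and `B` are saved.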

What does it mean to train a model?

Training happens over time

Training is called that because of what actually happens during the process. Essentially, you give the model a dataset, and it is tasked with learning your concept. It does so by trying to draw images of your concept over and over again, comparing each attempt to your dataset and adjusting its weights a little each time. It never stops until you, the user, decide the model has gotten good enough.
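To make that loop concrete, here is a hand-wavy Python sketch of one training run. Everything in it is a placeholder: `model`, `dataset`, and `add_noise` don't refer to any real library, and the step count and learning rate are arbitrary. It only shows the shape of the try-compare-adjust cycle:

```python
import torch
import torch.nn.functional as F

def train(model, dataset, num_steps=2000, lr=1e-4):
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.AdamW(params, lr=lr)
    for step in range(num_steps):
        image, caption = dataset[step % len(dataset)]
        noisy, noise, t = add_noise(image)   # corrupt a training image
        pred = model(noisy, t, caption)      # the model tries to undo the damage
        loss = F.mse_loss(pred, noise)       # how far off was this attempt?
        loss.backward()                      # nudge the weights toward better
        opt.step()
        opt.zero_grad()
    # "Good enough" is your call: you stop based on sample images, not the loss.
```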

The more concepts you’re training at once, the longer it’ll take. The more difficult your concept is, the longer it’ll take.

<aside> 🎂 It's very much like a cake in the oven. The bigger the cake, the longer it takes and the warmer the oven has to be. Leave it in too long and it'll taste bad; take it out too early and it'll be undercooked.

</aside>