ControlNet
[[File:CannyExample.png|center|thumb|450x450px|Canny Example]]
Revision as of 17:20, 21 August 2023
ControlNet is best described by the scientists who developed it: "We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions. The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k)." https://arxiv.org/abs/2302.05543
In other words, in addition to text prompts and other numeric parameters, the user can introduce an additional model to further specify the desired output. There are many ways to do this. The key point is that whereas users previously worked with a single model (a) and manually adjusted its parameters, with ControlNet they introduce an additional small model (b) that has far greater capability to influence the outputs.
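In the paper linked above, the small control model (b) is a trainable copy of part of the base model (a), connected to the frozen base through zero-initialized "zero convolution" layers, so that at the start of training the control branch changes nothing. A minimal NumPy sketch of that property, with a per-channel scale standing in for a 1x1 zero convolution (all names here are illustrative, not the paper's actual code):

```python
import numpy as np

def zero_conv(channels):
    # Zero-initialized connection: a 1x1 "zero convolution" starts with
    # all-zero weights, simplified here to a per-channel scale vector.
    return np.zeros(channels)

def controlled_features(base_features, control_features, zero_weights):
    # The control branch's features are added to the frozen base model's
    # features through the zero-initialized connection. With zero weights,
    # the output equals the base model's output exactly.
    return base_features + zero_weights[None, :] * control_features

# At initialization the control branch has no effect on the base model.
base = np.ones((2, 4))
control = np.random.randn(2, 4)
out = controlled_features(base, control, zero_conv(4))
```

As training updates the zero-convolution weights away from zero, the control branch gradually gains influence over the base model's features without disrupting its pretrained behavior at the start.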
== Control Types ==
== Canny ==
Canny detects the edges of objects in an image, producing a layout for the output to follow. It works well with single objects or images with very simple backgrounds.
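The idea behind edge-based conditioning can be illustrated with a simplified edge detector. The sketch below uses plain finite-difference gradients and a threshold; the real Canny algorithm (e.g. OpenCV's cv2.Canny, commonly used to preprocess ControlNet inputs) additionally applies Gaussian smoothing, non-maximum suppression, and hysteresis thresholding:

```python
import numpy as np

def simple_edge_map(img, threshold=0.5):
    # Horizontal and vertical finite differences as a simplified
    # stand-in for Canny's smoothed image gradients.
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:] = np.diff(img.astype(float), axis=1)
    gy[1:, :] = np.diff(img.astype(float), axis=0)
    # Mark pixels where the gradient magnitude exceeds the threshold.
    magnitude = np.hypot(gx, gy)
    return (magnitude > threshold).astype(np.uint8)

# A white square on a black background: edges appear only at the border,
# which is the kind of "layout" map ControlNet uses as a condition.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
edges = simple_edge_map(img)
```

The resulting binary edge map preserves the object's outline while discarding color and texture, which is why this control type works best when the scene is simple enough for edges alone to describe the layout.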