<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://stablediffusionwiki.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=StableTiger3</id>
	<title>Stable Diffusion Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="http://stablediffusionwiki.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=StableTiger3"/>
	<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php/Special:Contributions/StableTiger3"/>
	<updated>2026-05-04T19:55:55Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.4</generator>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Main_Page&amp;diff=282</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Main_Page&amp;diff=282"/>
		<updated>2024-06-16T01:18:18Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Welcome to &#039;&#039;&#039;Stable Diffusion Wiki&#039;&#039;&#039;! ==&lt;br /&gt;
[[File:SD3 Example.png|alt=Photo generated via SD protovisionXL model by user kashyyyk|frame|Example photo generated via the SD3 engine. Published as an example by Stability AI in their announcement article, 06/12/2024]]&lt;br /&gt;
Hello and welcome to &#039;&#039;&#039;&amp;lt;u&amp;gt;Stable Diffusion Wiki&amp;lt;/u&amp;gt;&#039;&#039;&#039;! We are excited to share information, collaborate, and build a knowledge base on Stable Diffusion and image generation. Whether you&#039;re a seasoned creator in the world of machine learning and text-to-image generators, or newly discovering it all for the first time, we&#039;re here to serve!&amp;lt;div style=&amp;quot;float:left; width:15%;&amp;quot;&amp;gt;&lt;br /&gt;
== Join Us ==&lt;br /&gt;
[http://stablediffusionwiki.com/index.php?title=Special:CreateAccount&amp;amp;returnto=Special%3ACreateAccoun Join the Stable Diffusion community! Register now to explore cutting-edge information, engage with experts, and contribute to the ever-growing knowledge base on this innovative software.]&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;float:right; width:80%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==What is Stable Diffusion?==&lt;br /&gt;
[[Stable Diffusion]] is a pioneering text-to-image model developed by [[Stability AI]] that converts textual descriptions into corresponding visual imagery. In other words, you tell it what you want, and it will create an image or a group of images that fit your description. One of the biggest distinguishing features of Stable Diffusion is that it is completely open source, unlike many other models, and it can be hosted entirely on your PC, offline. This offers confidentiality, customization, and agency. It also has a thriving community that is constantly modifying and iterating on different models.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Breaking News: 12 June 2024‎ ==&lt;br /&gt;
- Stability AI has launched &#039;&#039;&#039;[[Stable Diffusion 3 Medium]]&#039;&#039;&#039;, their most sophisticated text-to-image model to date. This model, optimized for consumer and enterprise GPUs, excels in photorealism, prompt understanding, and typography, addressing common issues in previous models. Released under an open non-commercial license and a low-cost Creator License, it encourages use by artists, designers, and developers, with an option for large-scale commercial licensing. &lt;br /&gt;
&lt;br /&gt;
Stability AI has collaborated with NVIDIA and AMD to enhance performance, resulting in significant efficiency improvements. The model is available for trial via the Stability Platform, Stable Assistant, and Discord&#039;s Stable Artisan. Safety measures have been implemented to prevent misuse, and continuous improvements based on user feedback are planned.&lt;br /&gt;
&lt;br /&gt;
== Breaking News: 4 January 2024‎ ==&lt;br /&gt;
- Revolutionary [[Stable Diffusion Video]] Model Ushers in New Era of Text-to-Video Generation&lt;br /&gt;
&lt;br /&gt;
- [[Video-to-video]] is another cutting edge feature of Stable Diffusion&lt;br /&gt;
&lt;br /&gt;
- To try it out, go to their website: https://stability.ai/stable-video&amp;lt;br /&amp;gt;&lt;br /&gt;
==What is This Page About?==&lt;br /&gt;
[[File:Automatic1111 gui.png|thumb|Automatic1111 gui]]&lt;br /&gt;
[[File:Cake by emb.jpg|left|thumb|An image generated with Stable Diffusion. Image generated by user emb on Civitai.]]&lt;br /&gt;
As a fellow hobbyist, I found the overall subject to have a lot of terms and processes that were very foreign to me. I found myself doing a lot of research to figure out what to do: going to various sources, collecting articles and images, and taking notes on tips and tricks. Since Stable Diffusion art is still new to the world, the process and technology have not been perfected yet. As my notes and materials grew, I realized there must be others out there like myself. It made more sense to store this information online so that, hopefully, it can help others who are experiencing the same thing.&lt;br /&gt;
&lt;br /&gt;
This page serves as a space for us to gather and organize information about Stable Diffusion. If you are asking yourself, what&#039;s a [[LoRA]]? What is [[ControlNet]]? We&#039;re here to help with that. Whether it&#039;s a hobby, a professional field, a community project, or anything else, this is the place to explore, learn, and contribute your knowledge. The goal for this site is to create a [http://stablediffusionwiki.com/index.php?title=Special:CreateAccount&amp;amp;returnto=Contributing community of users] who can share information related to image generation via Stable Diffusion. The wiki will walk users through every aspect of creating and editing images and provide the related technical information. There are a lot of individual pieces to this entire process, and each component has its own complex manuals and procedures. As a user of the tool, it can be overwhelming at first to try to piece everything together.&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Getting Started==&lt;br /&gt;
[[File:robotknight.jpeg|left|thumb|An image generated with Stable Diffusion]]&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;&#039;What is [[Stable Diffusion]]?&#039;&#039;&#039; ===&lt;br /&gt;
First you will need to build an understanding of what Stable Diffusion is and what [[Text-to-Image AI]] means. For beginners, you don&#039;t need to worry too much about the details of how it works. However, that knowledge will help you later as you learn to improve your images.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;&#039;[[Required Software for Stable Diffusion|Required Software]]&#039;&#039;&#039;===&lt;br /&gt;
Although Stable Diffusion is completely free, it will require some effort up front to assemble everything. It helps if you understand a little bit of Python, but it isn&#039;t an absolute requirement. You will need to acquire Python, Git, and some sort of GUI; many people prefer using Automatic1111.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;&#039;[[Text-to-image|Text-to-Image]]&#039;&#039;&#039;===&lt;br /&gt;
Once you have your Stable Diffusion software set up, you will want to know how to use it and begin generating images. Experimenting with different prompts is a major part of the image generation process. The most basic approach is to begin with txt2img; however, you will quickly see that this is just scratching the surface and will want to venture into other techniques such as img2img.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;&#039;[[Models]]&#039;&#039;&#039;===&lt;br /&gt;
Although Stability AI released the base model, many pruned and fine-tuned models have been released in recent months, along with other model types such as [[LoRA|LoRAs]] and embeddings.&amp;lt;br /&amp;gt;&lt;br /&gt;
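&lt;br /&gt;
For illustration, here is a minimal sketch (not an official recipe) of how a community checkpoint, a LoRA, and a textual-inversion embedding can be loaded with the Hugging Face diffusers library; the model identifier and file paths below are placeholders.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Sketch only: identifiers and paths are placeholders, not recommendations.&lt;br /&gt;
import torch&lt;br /&gt;
from diffusers import StableDiffusionPipeline&lt;br /&gt;
&lt;br /&gt;
# Load a base (or community fine-tuned) checkpoint.&lt;br /&gt;
pipe = StableDiffusionPipeline.from_pretrained(&lt;br /&gt;
    &amp;quot;runwayml/stable-diffusion-v1-5&amp;quot;,  # any SD 1.5-compatible checkpoint&lt;br /&gt;
    torch_dtype=torch.float16,&lt;br /&gt;
).to(&amp;quot;cuda&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Layer a LoRA (small task-specific weights) on top of the base model.&lt;br /&gt;
pipe.load_lora_weights(&amp;quot;path/to/your_lora&amp;quot;)  # hypothetical path&lt;br /&gt;
&lt;br /&gt;
# Add a textual-inversion embedding, usable via its trigger token.&lt;br /&gt;
pipe.load_textual_inversion(&amp;quot;path/to/your_embedding.pt&amp;quot;)  # hypothetical path&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;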
&lt;br /&gt;
===Creating an Image===&lt;br /&gt;
To create an image using Stable Diffusion, you&#039;ll typically follow a process involving setting up the necessary software environment, obtaining the model, and then using a specific prompt to generate your image. Here&#039;s a more detailed breakdown:[[File:EnvironmnetSetup.png|left|thumb|194x194px]]&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;1. Environment Setup:&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Hardware Requirements&#039;&#039;&#039;: A capable GPU is highly recommended due to the computational demands of the model.&lt;br /&gt;
*&#039;&#039;&#039;Software Requirements&#039;&#039;&#039;: You&#039;ll need Python installed on your system, along with package managers like pip to install necessary libraries.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;2. Install Dependencies:&#039;&#039;&#039;==== &lt;br /&gt;
&lt;br /&gt;
*Install necessary Python libraries, typically including torch (a deep learning framework), transformers, and other dependencies specified in the Stable Diffusion documentation.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;3. Obtain the Model:&#039;&#039;&#039; ==== &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Download Stable Diffusion&#039;&#039;&#039;: Access the model from a reputable source or platform offering the pre-trained Stable Diffusion model.&lt;br /&gt;
*&#039;&#039;&#039;Load the Model&#039;&#039;&#039;: Use coding scripts or tools to load the model into your environment.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;4. Prepare Your Prompt:&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
*Decide on a text prompt that describes the image you want to generate. Be as descriptive and specific as possible to guide the model toward your desired output.&lt;br /&gt;
&lt;br /&gt;
[[File:PromptEngineering.png|alt=Prompt Engineering|center|thumb|PromptEngineering]]&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;5. Image Generation:&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
*Use a script or tool interface to input your prompt to the model. The model will then process the prompt and generate an image based on the learned patterns and correlations in its training data.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;6. Output and Refinement:&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
*Once the image is generated, you can view and save it. If it&#039;s not quite what you wanted, you might adjust your prompt or use different settings and try again.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;7. Consider Legal and Ethical Implications:&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
*Be mindful of copyright and ethical considerations, especially when generating images for public use or commercial purposes.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tools and Platforms:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
There are various platforms and interfaces available that make using Stable Diffusion easier, including web interfaces where you can simply enter your prompt and receive an image, or more hands-on approaches where you control every aspect via scripting.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example Using a Platform or Tool:&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
#&#039;&#039;&#039;Find a Platform&#039;&#039;&#039;: Websites and applications exist that offer user-friendly interfaces for Stable Diffusion.&lt;br /&gt;
#&#039;&#039;&#039;Enter Your Prompt&#039;&#039;&#039;: Simply type in what you want the image to depict.&lt;br /&gt;
#&#039;&#039;&#039;Generate and Download&#039;&#039;&#039;: Click to generate the image, then view and download the result.&lt;br /&gt;
&lt;br /&gt;
In summary, making a Stable Diffusion image involves setting up the right environment, obtaining and loading the model, crafting a descriptive text prompt, and then using that prompt to generate an image. The exact steps can vary based on your technical background and the tools you choose to use.&lt;br /&gt;
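&lt;br /&gt;
To make the walkthrough above concrete, here is one possible scripted path using the Hugging Face diffusers library; it is a sketch that assumes an NVIDIA GPU and an SD 1.5-class checkpoint, and the prompt and file names are only examples.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Steps 1-2: environment and dependencies (run once in a shell):&lt;br /&gt;
#   pip install diffusers transformers accelerate torch&lt;br /&gt;
import torch&lt;br /&gt;
from diffusers import StableDiffusionPipeline&lt;br /&gt;
&lt;br /&gt;
# Step 3: obtain and load a pre-trained model.&lt;br /&gt;
pipe = StableDiffusionPipeline.from_pretrained(&lt;br /&gt;
    &amp;quot;runwayml/stable-diffusion-v1-5&amp;quot;, torch_dtype=torch.float16&lt;br /&gt;
).to(&amp;quot;cuda&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# Step 4: prepare a descriptive prompt (example only).&lt;br /&gt;
prompt = &amp;quot;a medieval castle at sunset, detailed oil painting, dramatic lighting&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Steps 5-6: generate, then view/save; rerun with a new seed or prompt to refine.&lt;br /&gt;
generator = torch.Generator(&amp;quot;cuda&amp;quot;).manual_seed(42)&lt;br /&gt;
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5, generator=generator).images[0]&lt;br /&gt;
image.save(&amp;quot;castle.png&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;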
&lt;br /&gt;
=[[Contributing|Contribute]]=&lt;br /&gt;
Stable Diffusion is a very complex topic, and it took many people to develop. Likewise, this website cannot be built by one person alone. Gathering and recording all relevant information on such a rapidly growing subject matter requires many people to do it properly. [[Contributing|Click here]] to learn how you can contribute to the growth of this website!&amp;lt;br /&amp;gt;&lt;br /&gt;
[[File:SDW- Stable Diffusion Wiki.png|center|Stable Diffusion Wiki]]&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Main_Page&amp;diff=281</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Main_Page&amp;diff=281"/>
		<updated>2024-06-16T01:17:05Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Welcome to &#039;&#039;&#039;Stable Diffusion Wiki&#039;&#039;&#039;! ==&lt;br /&gt;
[[File:SD3 Example.png|alt=Photo generated via SD protovisionXL model by user kashyyyk|frame|Example photo generated via the SD3 engine. Published as an example by Stability AI in their announcement article, 06/12/2024]]&lt;br /&gt;
Hello and welcome to &#039;&#039;&#039;&amp;lt;u&amp;gt;Stable Diffusion Wiki&amp;lt;/u&amp;gt;&#039;&#039;&#039;! We are excited to share information, collaborate, and build a knowledge base on Stable Diffusion and image generation. Whether you&#039;re a seasoned creator in the world of machine learning and text-to-image generators, or newly discovering it all for the first time, we&#039;re here to serve!&amp;lt;div style=&amp;quot;float:left; width:15%;&amp;quot;&amp;gt;&lt;br /&gt;
== Join Us ==&lt;br /&gt;
[http://stablediffusionwiki.com/index.php?title=Special:CreateAccount&amp;amp;returnto=Special%3ACreateAccoun Join the Stable Diffusion community! Register now to explore cutting-edge information, engage with experts, and contribute to the ever-growing knowledge base on this innovative software.]&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;float:right; width:80%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==What is Stable Diffusion?==&lt;br /&gt;
[[Stable Diffusion]] is a pioneering text-to-image model developed by [[Stability AI]] that converts textual descriptions into corresponding visual imagery. In other words, you tell it what you want, and it will create an image or a group of images that fit your description. One of the biggest distinguishing features of Stable Diffusion is that it is completely open source, unlike many other models, and it can be hosted entirely on your PC, offline. This offers confidentiality, customization, and agency. It also has a thriving community that is constantly modifying and iterating on different models.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Breaking News: 12 June 2024‎ ==&lt;br /&gt;
- Stability AI has launched &#039;&#039;&#039;[[Stable Diffusion 3 Medium]]&#039;&#039;&#039;, their most sophisticated text-to-image model to date. This model, optimized for consumer and enterprise GPUs, excels in photorealism, prompt understanding, and typography, addressing common issues in previous models. Released under an open non-commercial license and a low-cost Creator License, it encourages use by artists, designers, and developers, with an option for large-scale commercial licensing. &lt;br /&gt;
&lt;br /&gt;
Stability AI has collaborated with NVIDIA and AMD to enhance performance, resulting in significant efficiency improvements. The model is available for trial via the Stability Platform, Stable Assistant, and Discord&#039;s Stable Artisan. Safety measures have been implemented to prevent misuse, and continuous improvements based on user feedback are planned.&lt;br /&gt;
&lt;br /&gt;
== Breaking News: 4 January 2024‎ ==&lt;br /&gt;
- Revolutionary [[Stable Diffusion Video]] Model Ushers in New Era of Text-to-Video Generation&lt;br /&gt;
&lt;br /&gt;
- [[Video-to-video]] is another cutting edge feature of Stable Diffusion&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
==What is This Page About?==&lt;br /&gt;
[[File:Automatic1111 gui.png|thumb|Automatic1111 gui]]&lt;br /&gt;
[[File:Cake by emb.jpg|left|thumb|An image generated with Stable Diffusion. Image generated by user emb on Civitai.]]&lt;br /&gt;
As a fellow hobbyist, I found the overall subject to have a lot of terms and processes that were very foreign to me. I found myself doing a lot of research to figure out what to do: going to various sources, collecting articles and images, and taking notes on tips and tricks. Since Stable Diffusion art is still new to the world, the process and technology have not been perfected yet. As my notes and materials grew, I realized there must be others out there like myself. It made more sense to store this information online so that, hopefully, it can help others who are experiencing the same thing.&lt;br /&gt;
&lt;br /&gt;
This page serves as a space for us to gather and organize information about Stable Diffusion. If you are asking yourself, what&#039;s a [[LoRA]]? What is [[ControlNet]]? We&#039;re here to help with that. Whether it&#039;s a hobby, a professional field, a community project, or anything else, this is the place to explore, learn, and contribute your knowledge. The goal for this site is to create a [http://stablediffusionwiki.com/index.php?title=Special:CreateAccount&amp;amp;returnto=Contributing community of users] who can share information related to image generation via Stable Diffusion. The wiki will walk users through every aspect of creating and editing images and provide the related technical information. There are a lot of individual pieces to this entire process, and each component has its own complex manuals and procedures. As a user of the tool, it can be overwhelming at first to try to piece everything together.&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Getting Started==&lt;br /&gt;
[[File:robotknight.jpeg|left|thumb|An image generated with Stable Diffusion]]&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;&#039;What is [[Stable Diffusion]]?&#039;&#039;&#039; ===&lt;br /&gt;
First you will need to build an understanding of what Stable Diffusion is and what [[Text-to-Image AI]] means. For beginners, you don&#039;t need to worry too much about the details of how it works. However, that knowledge will help you later as you learn to improve your images.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;&#039;[[Required Software for Stable Diffusion|Required Software]]&#039;&#039;&#039;===&lt;br /&gt;
Although Stable Diffusion is completely free, it will require some effort up front to assemble everything. It helps if you understand a little bit of Python, but it isn&#039;t an absolute requirement. You will need to acquire Python, Git, and some sort of GUI; many people prefer using Automatic1111.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;&#039;[[Text-to-image|Text-to-Image]]&#039;&#039;&#039;===&lt;br /&gt;
Once you have your Stable Diffusion software set up, you will want to know how to use it and begin generating images. Experimenting with different prompts is a major part of the image generation process. The most basic approach is to begin with txt2img; however, you will quickly see that this is just scratching the surface and will want to venture into other techniques such as img2img.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;&#039;[[Models]]&#039;&#039;&#039;===&lt;br /&gt;
Although Stability AI released the base model, many pruned and fine-tuned models have been released in recent months, along with other model types such as [[LoRA|LoRAs]] and embeddings.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Creating an Image===&lt;br /&gt;
To create an image using Stable Diffusion, you&#039;ll typically follow a process involving setting up the necessary software environment, obtaining the model, and then using a specific prompt to generate your image. Here&#039;s a more detailed breakdown:[[File:EnvironmnetSetup.png|left|thumb|194x194px]]&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;1. Environment Setup:&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Hardware Requirements&#039;&#039;&#039;: A capable GPU is highly recommended due to the computational demands of the model.&lt;br /&gt;
*&#039;&#039;&#039;Software Requirements&#039;&#039;&#039;: You&#039;ll need Python installed on your system, along with package managers like pip to install necessary libraries.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;2. Install Dependencies:&#039;&#039;&#039;==== &lt;br /&gt;
&lt;br /&gt;
*Install necessary Python libraries, typically including torch (a deep learning framework), transformers, and other dependencies specified in the Stable Diffusion documentation.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;3. Obtain the Model:&#039;&#039;&#039; ==== &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Download Stable Diffusion&#039;&#039;&#039;: Access the model from a reputable source or platform offering the pre-trained Stable Diffusion model.&lt;br /&gt;
*&#039;&#039;&#039;Load the Model&#039;&#039;&#039;: Use coding scripts or tools to load the model into your environment.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;4. Prepare Your Prompt:&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
*Decide on a text prompt that describes the image you want to generate. Be as descriptive and specific as possible to guide the model toward your desired output.&lt;br /&gt;
&lt;br /&gt;
[[File:PromptEngineering.png|alt=Prompt Engineering|center|thumb|PromptEngineering]]&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;5. Image Generation:&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
*Use a script or tool interface to input your prompt to the model. The model will then process the prompt and generate an image based on the learned patterns and correlations in its training data.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;6. Output and Refinement:&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
*Once the image is generated, you can view and save it. If it&#039;s not quite what you wanted, you might adjust your prompt or use different settings and try again.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;7. Consider Legal and Ethical Implications:&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
*Be mindful of copyright and ethical considerations, especially when generating images for public use or commercial purposes.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tools and Platforms:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
There are various platforms and interfaces available that make using Stable Diffusion easier, including web interfaces where you can simply enter your prompt and receive an image, or more hands-on approaches where you control every aspect via scripting.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example Using a Platform or Tool:&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
#&#039;&#039;&#039;Find a Platform&#039;&#039;&#039;: Websites and applications exist that offer user-friendly interfaces for Stable Diffusion.&lt;br /&gt;
#&#039;&#039;&#039;Enter Your Prompt&#039;&#039;&#039;: Simply type in what you want the image to depict.&lt;br /&gt;
#&#039;&#039;&#039;Generate and Download&#039;&#039;&#039;: Click to generate the image, then view and download the result.&lt;br /&gt;
&lt;br /&gt;
In summary, making a Stable Diffusion image involves setting up the right environment, obtaining and loading the model, crafting a descriptive text prompt, and then using that prompt to generate an image. The exact steps can vary based on your technical background and the tools you choose to use.&lt;br /&gt;
&lt;br /&gt;
=[[Contributing|Contribute]]=&lt;br /&gt;
Stable Diffusion is a very complex topic, and it took many people to develop. Likewise, this website cannot be built by one person alone. Gathering and recording all relevant information on such a rapidly growing subject matter requires many people to do it properly. [[Contributing|Click here]] to learn how you can contribute to the growth of this website!&amp;lt;br /&amp;gt;&lt;br /&gt;
[[File:SDW- Stable Diffusion Wiki.png|center|Stable Diffusion Wiki]]&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Stable_Diffusion_3_Medium&amp;diff=280</id>
		<title>Stable Diffusion 3 Medium</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Stable_Diffusion_3_Medium&amp;diff=280"/>
		<updated>2024-06-16T00:28:19Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Announcing the Release of Stable Diffusion 3 Medium: Stability AI’s Advanced Image Generation Model ==&lt;br /&gt;
&lt;br /&gt;
Stability AI has introduced Stable Diffusion 3 Medium, the latest and most advanced model in their text-to-image generation series. This release marks a significant milestone in the evolution of generative AI, offering exceptional quality and versatility for both consumer and enterprise applications.&lt;br /&gt;
[[File:SD3 Example Images.jpg|center|frame|Stable Diffusion 3 Example Images]]&lt;br /&gt;
&lt;br /&gt;
=== Key Features ===&lt;br /&gt;
&lt;br /&gt;
Stable Diffusion 3 Medium is a 2 billion parameter model known for its:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Overall Quality and Photorealism&#039;&#039;&#039;: Produces images with remarkable detail, color, and lighting, addressing common pitfalls like realism in hands and faces.&lt;br /&gt;
* &#039;&#039;&#039;Prompt Understanding&#039;&#039;&#039;: Capable of comprehending long and complex prompts involving spatial reasoning and compositional elements.&lt;br /&gt;
* &#039;&#039;&#039;Typography&#039;&#039;&#039;: Delivers high-quality text outputs with fewer errors in spelling, kerning, and spacing.&lt;br /&gt;
* &#039;&#039;&#039;Resource Efficiency&#039;&#039;&#039;: Operates effectively on standard consumer GPUs due to its low VRAM footprint.&lt;br /&gt;
* &#039;&#039;&#039;Fine-Tuning&#039;&#039;&#039;: Easily customizable with small datasets for specific use cases.&lt;br /&gt;
&lt;br /&gt;
=== Licensing and Accessibility ===&lt;br /&gt;
&lt;br /&gt;
Stable Diffusion 3 Medium is available under the Stability Non-Commercial Research Community License and a low-cost Creator License, encouraging use by professional artists, designers, and developers. For large-scale commercial use, an Enterprise License can be obtained by contacting Stability AI directly.&lt;br /&gt;
&lt;br /&gt;
=== Collaboration and Optimization ===&lt;br /&gt;
&lt;br /&gt;
In collaboration with NVIDIA and AMD, Stability AI has optimized the model for various GPUs, resulting in significant performance enhancements. NVIDIA® RTX™ GPUs and TensorRT™ provide a 50% increase in performance, while AMD’s latest devices ensure optimized inference.&lt;br /&gt;
&lt;br /&gt;
=== Availability and Trials ===&lt;br /&gt;
&lt;br /&gt;
The model can be accessed through the Stability Platform API, with a free three-day trial available on Stable Assistant and Discord via Stable Artisan. This allows users to explore the capabilities of Stable Diffusion 3 Medium before committing to a license.&lt;br /&gt;
&lt;br /&gt;
=== Commitment to Safety ===&lt;br /&gt;
&lt;br /&gt;
Stability AI emphasizes safe and responsible AI practices. The model has undergone extensive testing to prevent misuse, with safeguards implemented throughout its development and deployment. Continuous collaboration with researchers and the community ensures ongoing improvements and innovation with integrity.&lt;br /&gt;
&lt;br /&gt;
=== Future Plans ===&lt;br /&gt;
&lt;br /&gt;
Stability AI plans to enhance Stable Diffusion 3 Medium based on user feedback, aiming to set new standards in AI-generated art. The community&#039;s input will be crucial in shaping future updates and expanding the model&#039;s features.&lt;br /&gt;
&lt;br /&gt;
For more details, visit Stability AI’s website or join their community on Twitter, Instagram, LinkedIn, and Discord.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
&lt;br /&gt;
* Stable Diffusion 3 Medium Official Release&lt;br /&gt;
* Stability Platform API&lt;br /&gt;
* Stable Assistant&lt;br /&gt;
* Stable Artisan on Discord&lt;br /&gt;
* Stability AI Safety Page&lt;br /&gt;
&lt;br /&gt;
[[Category:Intelligence]]&lt;br /&gt;
[[Category:Generation]]&lt;br /&gt;
[[Category:Releases]]&lt;br /&gt;
[[Category:AI]]&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=File:SD3_Example_Images.jpg&amp;diff=279</id>
		<title>File:SD3 Example Images.jpg</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=File:SD3_Example_Images.jpg&amp;diff=279"/>
		<updated>2024-06-16T00:22:26Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Three images showing the power and capabilities of the SD3 model developed by Stability.ai&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Main_Page&amp;diff=278</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Main_Page&amp;diff=278"/>
		<updated>2024-06-16T00:20:29Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Welcome to &#039;&#039;&#039;Stable Diffusion Wiki&#039;&#039;&#039;! ==&lt;br /&gt;
[[File:SD3 Example.png|alt=Photo generated via SD protovisionXL model by user kashyyyk|frame|Example photo generated via the SD3 engine. Published as an example by Stability AI in their announcement article, 06/12/2024]]&lt;br /&gt;
Hello and welcome to &#039;&#039;&#039;&amp;lt;u&amp;gt;Stable Diffusion Wiki&amp;lt;/u&amp;gt;&#039;&#039;&#039;! We are excited to share information, collaborate, and build a knowledge base on Stable Diffusion and image generation. Whether you&#039;re a seasoned creator in the world of machine learning and text-to-image generators, or newly discovering it all for the first time, we&#039;re here to serve!&amp;lt;div style=&amp;quot;float:left; width:15%;&amp;quot;&amp;gt;&lt;br /&gt;
== Join Us ==&lt;br /&gt;
[http://stablediffusionwiki.com/index.php?title=Special:CreateAccount&amp;amp;returnto=Special%3ACreateAccoun Join the Stable Diffusion community! Register now to explore cutting-edge information, engage with experts, and contribute to the ever-growing knowledge base on this innovative software.]&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;float:right; width:80%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==What is Stable Diffusion?==&lt;br /&gt;
[[Stable Diffusion]] is a pioneering text-to-image model developed by [[Stability AI]] that converts textual descriptions into corresponding visual imagery. In other words, you tell it what you want, and it will create an image or a group of images that fit your description. One of the biggest distinguishing features of Stable Diffusion is that it is completely open source, unlike many other models, and it can be hosted entirely on your PC, offline. This offers confidentiality, customization, and agency. It also has a thriving community that is constantly modifying and iterating on different models.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Breaking News: 12 June 2024‎ ==&lt;br /&gt;
- Stability AI has launched &#039;&#039;&#039;[[Stable Diffusion 3 Medium]]&#039;&#039;&#039;, their most sophisticated text-to-image model to date. This model, optimized for consumer and enterprise GPUs, excels in photorealism, prompt understanding, and typography, addressing common issues in previous models. Released under an open non-commercial license and a low-cost Creator License, it encourages use by artists, designers, and developers, with an option for large-scale commercial licensing. Stability AI has collaborated with NVIDIA and AMD to enhance performance, resulting in significant efficiency improvements. The model is available for trial via the Stability Platform, Stable Assistant, and Discord&#039;s Stable Artisan. Safety measures have been implemented to prevent misuse, and continuous improvements based on user feedback are planned.&lt;br /&gt;
&lt;br /&gt;
== Breaking News: 4 January 2024‎ ==&lt;br /&gt;
- Revolutionary [[Stable Diffusion Video]] Model Ushers in New Era of Text-to-Video Generation&lt;br /&gt;
&lt;br /&gt;
- [[Video-to-video]] is another cutting edge feature of Stable Diffusion&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
==What is This Page About?==&lt;br /&gt;
[[File:Automatic1111 gui.png|thumb|Automatic1111 gui]]&lt;br /&gt;
[[File:Cake by emb.jpg|left|thumb|An image generated with Stable Diffusion. Image generated by user emb on Civitai.]]&lt;br /&gt;
As a fellow hobbyist, I found the overall subject to have a lot of terms and processes that were very foreign to me. I found myself doing a lot of research to figure out what to do: going to various sources, collecting articles and images, and taking notes on tips and tricks. Since Stable Diffusion art is still new to the world, the process and technology have not been perfected yet. As my notes and materials grew, I realized there must be others out there like myself. It made more sense to store this information online so that, hopefully, it can help others who are experiencing the same thing.&lt;br /&gt;
&lt;br /&gt;
This page serves as a space for us to gather and organize information about Stable Diffusion. If you are asking yourself, what&#039;s a [[LoRA]]? What is [[ControlNet]]? We&#039;re here to help with that. Whether it&#039;s a hobby, a professional field, a community project, or anything else, this is the place to explore, learn, and contribute your knowledge. The goal for this site is to create a [http://stablediffusionwiki.com/index.php?title=Special:CreateAccount&amp;amp;returnto=Contributing community of users] who can share information related to image generation via Stable Diffusion. The wiki will walk users through every aspect of creating and editing images and provide the related technical information. There are a lot of individual pieces to this entire process, and each component has its own complex manuals and procedures. As a user of the tool, it can be overwhelming at first to try to piece everything together.&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Getting Started==&lt;br /&gt;
[[File:robotknight.jpeg|left|thumb|An image generated with Stable Diffusion]]&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;&#039;What is [[Stable Diffusion]]?&#039;&#039;&#039; ===&lt;br /&gt;
First you will need to build an understanding of what Stable Diffusion is and what [[Text-to-Image AI]] means. For beginners, you don&#039;t need to worry too much about the details of how it works. However, that knowledge will help you later as you learn to improve your images.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;&#039;[[Required Software for Stable Diffusion|Required Software]]&#039;&#039;&#039;===&lt;br /&gt;
Although Stable Diffusion is completely free, it will require some effort up front to assemble everything. It helps if you understand a little bit of Python, but it isn&#039;t an absolute requirement. You will need to acquire Python, Git, and some sort of GUI; many people prefer using Automatic1111.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;&#039;[[Text-to-image|Text-to-Image]]&#039;&#039;&#039;===&lt;br /&gt;
Once you have your Stable Diffusion software set up, you will want to know how to use it and begin generating images. Experimenting with different prompts is a major part of the image generation process. The most basic approach is to begin with txt2img; however, you will quickly see that this is just scratching the surface and will want to venture into other techniques such as img2img.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;&#039;[[Models]]&#039;&#039;&#039;===&lt;br /&gt;
Although Stability AI released the base model, many pruned and fine-tuned models have been released in recent months, along with other model types such as [[LoRA|LoRAs]] and embeddings.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Creating an Image===&lt;br /&gt;
To create an image using Stable Diffusion, you&#039;ll typically follow a process involving setting up the necessary software environment, obtaining the model, and then using a specific prompt to generate your image. Here&#039;s a more detailed breakdown:[[File:EnvironmnetSetup.png|left|thumb|194x194px]]&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;1. Environment Setup:&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Hardware Requirements&#039;&#039;&#039;: A capable GPU is highly recommended due to the computational demands of the model.&lt;br /&gt;
*&#039;&#039;&#039;Software Requirements&#039;&#039;&#039;: You&#039;ll need Python installed on your system, along with package managers like pip to install necessary libraries.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;2. Install Dependencies:&#039;&#039;&#039;==== &lt;br /&gt;
&lt;br /&gt;
*Install necessary Python libraries, typically including torch (a deep learning framework), transformers, and other dependencies specified in the Stable Diffusion documentation.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;3. Obtain the Model:&#039;&#039;&#039; ==== &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Download Stable Diffusion&#039;&#039;&#039;: Access the model from a reputable source or platform offering the pre-trained Stable Diffusion model.&lt;br /&gt;
*&#039;&#039;&#039;Load the Model&#039;&#039;&#039;: Use coding scripts or tools to load the model into your environment.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;4. Prepare Your Prompt:&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
*Decide on a text prompt that describes the image you want to generate. Be as descriptive and specific as possible to guide the model toward your desired output.&lt;br /&gt;
&lt;br /&gt;
[[File:PromptEngineering.png|alt=Prompt Engineering|center|thumb|PromptEngineering]]&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;5. Image Generation:&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
*Use a script or tool interface to input your prompt to the model. The model will then process the prompt and generate an image based on the learned patterns and correlations in its training data.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;6. Output and Refinement:&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
*Once the image is generated, you can view and save it. If it&#039;s not quite what you wanted, you might adjust your prompt or use different settings and try again.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;7. Consider Legal and Ethical Implications:&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
*Be mindful of copyright and ethical considerations, especially when generating images for public use or commercial purposes.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tools and Platforms:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
There are various platforms and interfaces available that make using Stable Diffusion easier, including web interfaces where you can simply enter your prompt and receive an image, or more hands-on approaches where you control every aspect via scripting.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example Using a Platform or Tool:&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
#&#039;&#039;&#039;Find a Platform&#039;&#039;&#039;: Websites and applications exist that offer user-friendly interfaces for Stable Diffusion.&lt;br /&gt;
#&#039;&#039;&#039;Enter Your Prompt&#039;&#039;&#039;: Simply type in what you want the image to depict.&lt;br /&gt;
#&#039;&#039;&#039;Generate and Download&#039;&#039;&#039;: Click to generate the image, then view and download the result.&lt;br /&gt;
&lt;br /&gt;
In summary, making a Stable Diffusion image involves setting up the right environment, obtaining and loading the model, crafting a descriptive text prompt, and then using that prompt to generate an image. The exact steps can vary based on your technical background and the tools you choose to use.&lt;br /&gt;
&lt;br /&gt;
=[[Contributing|Contribute]]=&lt;br /&gt;
Stable Diffusion is a very complex topic, and it took many people to develop. Likewise, this website cannot be built by one person alone. Gathering and recording all relevant information on such a rapidly growing subject matter requires many people to do it properly. [[Contributing|Click here]] to learn how you can contribute to the growth of this website!&amp;lt;br /&amp;gt;&lt;br /&gt;
[[File:SDW- Stable Diffusion Wiki.png|center|Stable Diffusion Wiki]]&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Stable_Diffusion_3_Medium&amp;diff=277</id>
		<title>Stable Diffusion 3 Medium</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Stable_Diffusion_3_Medium&amp;diff=277"/>
		<updated>2024-06-16T00:14:25Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: Created page with &amp;quot;== Announcing the Release of Stable Diffusion 3 Medium: Stability AI’s Advanced Image Generation Model ==  Stability AI has introduced Stable Diffusion 3 Medium, the latest and most advanced model in their text-to-image generation series. This release marks a significant milestone in the evolution of generative AI, offering exceptional quality and versatility for both consumer and enterprise applications.  === Key Features ===  Stable Diffusion 3 Medium is a 2 billion...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Announcing the Release of Stable Diffusion 3 Medium: Stability AI’s Advanced Image Generation Model ==&lt;br /&gt;
&lt;br /&gt;
Stability AI has introduced Stable Diffusion 3 Medium, the latest and most advanced model in their text-to-image generation series. This release marks a significant milestone in the evolution of generative AI, offering exceptional quality and versatility for both consumer and enterprise applications.&lt;br /&gt;
&lt;br /&gt;
=== Key Features ===&lt;br /&gt;
&lt;br /&gt;
Stable Diffusion 3 Medium is a 2 billion parameter model known for its:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Overall Quality and Photorealism&#039;&#039;&#039;: Produces images with remarkable detail, color, and lighting, addressing common pitfalls like realism in hands and faces.&lt;br /&gt;
* &#039;&#039;&#039;Prompt Understanding&#039;&#039;&#039;: Capable of comprehending long and complex prompts involving spatial reasoning and compositional elements.&lt;br /&gt;
* &#039;&#039;&#039;Typography&#039;&#039;&#039;: Delivers high-quality text outputs with fewer errors in spelling, kerning, and spacing.&lt;br /&gt;
* &#039;&#039;&#039;Resource Efficiency&#039;&#039;&#039;: Operates effectively on standard consumer GPUs due to its low VRAM footprint.&lt;br /&gt;
* &#039;&#039;&#039;Fine-Tuning&#039;&#039;&#039;: Easily customizable with small datasets for specific use cases.&lt;br /&gt;
&lt;br /&gt;
=== Licensing and Accessibility ===&lt;br /&gt;
&lt;br /&gt;
Stable Diffusion 3 Medium is available under the Stability Non-Commercial Research Community License and a low-cost Creator License, encouraging use by professional artists, designers, and developers. For large-scale commercial use, an Enterprise License can be obtained by contacting Stability AI directly.&lt;br /&gt;
&lt;br /&gt;
=== Collaboration and Optimization ===&lt;br /&gt;
&lt;br /&gt;
In collaboration with NVIDIA and AMD, Stability AI has optimized the model for various GPUs, resulting in significant performance enhancements. NVIDIA® RTX™ GPUs and TensorRT™ provide a 50% increase in performance, while AMD’s latest devices ensure optimized inference.&lt;br /&gt;
&lt;br /&gt;
=== Availability and Trials ===&lt;br /&gt;
&lt;br /&gt;
The model can be accessed through the Stability Platform API, with a free three-day trial available on Stable Assistant and Discord via Stable Artisan. This allows users to explore the capabilities of Stable Diffusion 3 Medium before committing to a license.&lt;br /&gt;
&lt;br /&gt;
=== Commitment to Safety ===&lt;br /&gt;
&lt;br /&gt;
Stability AI emphasizes safe and responsible AI practices. The model has undergone extensive testing to prevent misuse, with safeguards implemented throughout its development and deployment. Continuous collaboration with researchers and the community ensures ongoing improvements and innovation with integrity.&lt;br /&gt;
&lt;br /&gt;
=== Future Plans ===&lt;br /&gt;
&lt;br /&gt;
Stability AI plans to enhance Stable Diffusion 3 Medium based on user feedback, aiming to set new standards in AI-generated art. The community&#039;s input will be crucial in shaping future updates and expanding the model&#039;s features.&lt;br /&gt;
&lt;br /&gt;
For more details, visit Stability AI’s website or join their community on Twitter, Instagram, LinkedIn, and Discord.&lt;br /&gt;
&lt;br /&gt;
== External Links ==&lt;br /&gt;
&lt;br /&gt;
* Stable Diffusion 3 Medium Official Release&lt;br /&gt;
* Stability Platform API&lt;br /&gt;
* Stable Assistant&lt;br /&gt;
* Stable Artisan on Discord&lt;br /&gt;
* Stability AI Safety Page&lt;br /&gt;
&lt;br /&gt;
[[Category:Intelligence]]&lt;br /&gt;
[[Category:Generation]]&lt;br /&gt;
[[Category:Releases]]&lt;br /&gt;
[[Category:AI]]&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=File:SD3_Example.png&amp;diff=276</id>
		<title>File:SD3 Example.png</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=File:SD3_Example.png&amp;diff=276"/>
		<updated>2024-06-16T00:09:17Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Prompt: photo of three antique dragon glass magic potions in an old abandoned apothecary shop: the first one is blue with the label &amp;quot;1.5&amp;quot;, the second one is red with the label &amp;quot;SDXL&amp;quot;, the third one is green with the label &amp;quot;SD3&amp;quot;.&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Denoising_strength&amp;diff=275</id>
		<title>Denoising strength</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Denoising_strength&amp;diff=275"/>
		<updated>2024-01-19T04:44:10Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: Created page with &amp;quot;== What is denoising strength in stable diffusion? == There is only so far that you can go with Prompt Engineering.  Stable diffusion is a powerful technique for generating realistic and diverse images from text Prompts or input images. It works by gradually transforming a noisy image into a clear one, guided by a Neural network that learns from a large dataset of images. However, stable diffusion also allows users to control how much noise they want to add o...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is denoising strength in stable diffusion? ==&lt;br /&gt;
There is only so far that you can go with [[Prompt Engineering]]. Stable Diffusion is a powerful technique for generating realistic and diverse images from text [[Prompts|prompts]] or input images. It works by gradually transforming a noisy image into a clear one, guided by a [[Neural network|neural network]] that learns from a large dataset of images. However, Stable Diffusion also allows users to control how much noise they want to add to or remove from their input images, using a parameter called denoising strength.&lt;br /&gt;
&lt;br /&gt;
Denoising strength is a value between 0 and 1 that determines how much the output image will be influenced by the input image. A low denoising strength (close to 0) means that the output image will look very similar to the input image, with only minor modifications. A high denoising strength (close to 1) means that the output image will look very different from the input image, with major modifications.&lt;br /&gt;
&lt;br /&gt;
Why would you want to change the denoising strength? Depending on your goal, you might want to use different levels of denoising strength to achieve different effects. For example, if you want to use stable diffusion for inpainting, which is filling in missing or corrupted parts of an image, you might want to use a low denoising strength to preserve the original image as much as possible. On the other hand, if you want to use stable diffusion for image-to-image translation, which is transforming an image from one domain to another (such as turning a photo into a painting), you might want to use a high denoising strength to create more variation and creativity.&lt;br /&gt;
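&lt;br /&gt;
As an illustration, here is a minimal img2img sketch using the Hugging Face diffusers library, where the strength argument plays the role of the denoising strength described above; the input file name and prompt are placeholders.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Sketch only: paths and prompt are placeholders.&lt;br /&gt;
import torch&lt;br /&gt;
from PIL import Image&lt;br /&gt;
from diffusers import StableDiffusionImg2ImgPipeline&lt;br /&gt;
&lt;br /&gt;
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(&lt;br /&gt;
    &amp;quot;runwayml/stable-diffusion-v1-5&amp;quot;, torch_dtype=torch.float16&lt;br /&gt;
).to(&amp;quot;cuda&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
init_image = Image.open(&amp;quot;photo.png&amp;quot;).convert(&amp;quot;RGB&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
# strength near 0.2: stay close to the input (light, inpainting-style edits).&lt;br /&gt;
# strength near 0.8: allow major changes (photo-to-painting style transfer).&lt;br /&gt;
result = pipe(&amp;quot;an oil painting of the same scene&amp;quot;, image=init_image, strength=0.8).images[0]&lt;br /&gt;
result.save(&amp;quot;painting.png&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;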
&lt;br /&gt;
&lt;br /&gt;
[[Text-to-image]]&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Main_Page&amp;diff=274</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Main_Page&amp;diff=274"/>
		<updated>2024-01-05T03:51:08Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Welcome to &#039;&#039;&#039;Stable Diffusion Wiki&#039;&#039;&#039;! ==&lt;br /&gt;
[[File:ProtovisionXLHighFidelity3D beta0411Bakedvae RAW analog shot of a 1girl floating in a multi-colored liquid neural network organism, more details, hyper detailed, t.jpeg|right|frameless|503x503px|Photo generated via SD protovisionXL model by user kashyyyk]]Hello and welcome to &#039;&#039;&#039;&amp;lt;u&amp;gt;Stable Diffusion Wiki&amp;lt;/u&amp;gt;&#039;&#039;&#039;! We are excited to share information, collaborate, and build a knowledge base on Stable Diffusion and image generation. Whether you&#039;re a seasoned creator in the world of machine learning and text-to-image generators, or newly discovering it all for the first time, we&#039;re here to serve!&amp;lt;div style=&amp;quot;float:left; width:15%;&amp;quot;&amp;gt;&lt;br /&gt;
== Join Us ==&lt;br /&gt;
[http://stablediffusionwiki.com/index.php?title=Special:CreateAccount&amp;amp;returnto=Special%3ACreateAccoun Join the Stable Diffusion community! Register now to explore cutting-edge information, engage with experts, and contribute to the ever-growing knowledge base on this innovative software.]&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;float:right; width:80%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==What is Stable Diffusion?==&lt;br /&gt;
[[Stable Diffusion]] is a pioneering text-to-image model developed by [[Stability AI]] that converts textual descriptions into corresponding visual imagery. In other words, you tell it what you want, and it will create an image or a group of images that fit your description. One of the biggest distinguishing features of Stable Diffusion is that it is completely open source, unlike many other models, and it can be hosted entirely on your PC, offline. This offers confidentiality, customization, and agency. It also has a thriving community that is constantly modifying and iterating on different models.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Breaking News: ==&lt;br /&gt;
- Revolutionary [[Stable Diffusion Video]] Model Ushers in New Era of Text-to-Video Generation&lt;br /&gt;
&lt;br /&gt;
- [[Video-to-video]] is another cutting edge feature of Stable Diffusion&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
==What is This Page About?==&lt;br /&gt;
[[File:Automatic1111 gui.png|thumb|Automatic1111 gui]]&lt;br /&gt;
[[File:Cake by emb.jpg|left|thumb|An image generated with Stable Diffusion. Image generated by user emb on Civitai.]]&lt;br /&gt;
As a fellow hobbyist, I found the overall subject to have a lot of terms and processes that were very foreign to me. I found myself doing a lot of research to figure out what to do: going to various sources, collecting articles and images, and taking notes on tips and tricks. Since Stable Diffusion art is still new to the world, the process and technology have not been perfected yet. As my notes and materials grew, I realized there must be others out there like myself. It made more sense to store this information online so that, hopefully, it can help others who are experiencing the same thing.&lt;br /&gt;
&lt;br /&gt;
This page serves as a space for us to gather and organize information about Stable Diffusion. If you are asking yourself, what&#039;s a [[LoRA]]? What is [[ControlNet]]? We&#039;re here to help with that. Whether it&#039;s a hobby, a professional field, a community project, or anything else, this is the place to explore, learn, and contribute your knowledge. The goal for this site is to create a [http://stablediffusionwiki.com/index.php?title=Special:CreateAccount&amp;amp;returnto=Contributing community of users] who can share information related to image generation via Stable Diffusion. The wiki will walk users through every aspect of creating and editing images and provide the related technical information. There are a lot of individual pieces to this entire process, and each component has its own complex manuals and procedures. As a user of the tool, it can be overwhelming at first to try to piece everything together.&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Getting Started==&lt;br /&gt;
[[File:robotknight.jpeg|left|thumb|An image generated with Stable Diffusion]]&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;&#039;What is [[Stable Diffusion]]?&#039;&#039;&#039; ===&lt;br /&gt;
First you will need to build an understanding of what Stable Diffusion is and what [[Text-to-Image AI]] means. For beginners, you don&#039;t need to worry too much about the details of how it works. However, that knowledge will help you later as you learn to improve your images.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;&#039;[[Required Software for Stable Diffusion|Required Software]]&#039;&#039;&#039;===&lt;br /&gt;
Although Stable Diffusion is completely free, it will require some effort up front to assemble everything. It helps if you understand a little bit of Python, but it isn&#039;t an absolute requirement. You will need to acquire Python, Git, and some sort of GUI; many people prefer using Automatic1111.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;&#039;[[Text-to-image|Text-to-Image]]&#039;&#039;&#039;===&lt;br /&gt;
Once you have your Stable Diffusion software set up, you will want to know how to use it and begin generating images. Experimenting with different prompts is a major part of the image generation process. The most basic approach is to begin with txt2img; however, you will quickly see that this is just scratching the surface and will want to venture into other techniques such as img2img.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;&#039;[[Models]]&#039;&#039;&#039;===&lt;br /&gt;
Although Stability AI released the base model, many pruned and fine-tuned models have been released in recent months, along with other model types such as [[LoRA|LoRAs]] and embeddings.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Creating an Image===&lt;br /&gt;
To create an image using Stable Diffusion, you&#039;ll typically follow a process involving setting up the necessary software environment, obtaining the model, and then using a specific prompt to generate your image. Here&#039;s a more detailed breakdown:[[File:EnvironmnetSetup.png|left|thumb|194x194px]]&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;1. Environment Setup:&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Hardware Requirements&#039;&#039;&#039;: A capable GPU is highly recommended due to the computational demands of the model.&lt;br /&gt;
*&#039;&#039;&#039;Software Requirements&#039;&#039;&#039;: You&#039;ll need Python installed on your system, along with package managers like pip to install necessary libraries.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;2. Install Dependencies:&#039;&#039;&#039;==== &lt;br /&gt;
&lt;br /&gt;
*Install necessary Python libraries, typically including torch (a deep learning framework), transformers, and other dependencies specified in the Stable Diffusion documentation.&lt;br /&gt;
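&lt;br /&gt;
As a minimal, hedged example, assuming you take the Hugging Face diffusers route (GUIs such as Automatic1111 install their own dependencies for you), the core libraries can be installed from a terminal and then sanity-checked from Python:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Typically run once from a terminal or virtual environment:&lt;br /&gt;
#   pip install torch diffusers transformers accelerate safetensors&lt;br /&gt;
import torch&lt;br /&gt;
import diffusers&lt;br /&gt;
import transformers&lt;br /&gt;
&lt;br /&gt;
# A quick sanity check that the libraries import and the GPU is visible to PyTorch.&lt;br /&gt;
print(&#039;torch&#039;, torch.__version__, &#039;| diffusers&#039;, diffusers.__version__)&lt;br /&gt;
print(&#039;transformers&#039;, transformers.__version__)&lt;br /&gt;
print(&#039;CUDA available:&#039;, torch.cuda.is_available())&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;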
&lt;br /&gt;
====&#039;&#039;&#039;3. Obtain the Model:&#039;&#039;&#039; ==== &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Download Stable Diffusion&#039;&#039;&#039;: Access the model from a reputable source or platform offering the pre-trained Stable Diffusion model.&lt;br /&gt;
*&#039;&#039;&#039;Load the Model&#039;&#039;&#039;: Use scripts or tools to load the model into your environment (a hedged example follows below).&lt;br /&gt;
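&lt;br /&gt;
For example, with the diffusers library, downloading and loading a pre-trained checkpoint looks roughly like the sketch below; the repository id is only an illustration and other toolchains do this differently.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
from diffusers import StableDiffusionPipeline&lt;br /&gt;
import torch&lt;br /&gt;
&lt;br /&gt;
# from_pretrained() downloads the checkpoint from the Hugging Face Hub on first use&lt;br /&gt;
# and caches it locally; later runs load straight from the cache.&lt;br /&gt;
model_id = &#039;stabilityai/stable-diffusion-2-1&#039;  # example id, use the model you chose&lt;br /&gt;
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)&lt;br /&gt;
pipe = pipe.to(&#039;cuda&#039;)  # move the model to the GPU&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;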
&lt;br /&gt;
====&#039;&#039;&#039;4. Prepare Your Prompt:&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
*Decide on a text prompt that describes the image you want to generate. Be as descriptive and specific as possible to guide the model toward your desired output.&lt;br /&gt;
&lt;br /&gt;
[[File:PromptEngineering.png|alt=Prompt Engineering|center|thumb|Prompt Engineering]]&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;5. Image Generation:&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
*Use a script or tool interface to input your prompt to the model. The model will then process the prompt and generate an image based on the learned patterns and correlations in its training data (a self-contained sketch follows below).&lt;br /&gt;
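&lt;br /&gt;
Putting steps 3 through 5 together, a self-contained txt2img sketch with diffusers might look like the following; the prompt, checkpoint id, and file name are illustrative only.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
from diffusers import StableDiffusionPipeline&lt;br /&gt;
import torch&lt;br /&gt;
&lt;br /&gt;
pipe = StableDiffusionPipeline.from_pretrained(&lt;br /&gt;
    &#039;stabilityai/stable-diffusion-2-1&#039;, torch_dtype=torch.float16&lt;br /&gt;
).to(&#039;cuda&#039;)&lt;br /&gt;
&lt;br /&gt;
prompt = &#039;a medieval knight made of polished chrome, dramatic lighting, highly detailed&#039;&lt;br /&gt;
image = pipe(prompt).images[0]  # the pipeline returns PIL images&lt;br /&gt;
image.save(&#039;robot_knight.png&#039;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;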
&lt;br /&gt;
====&#039;&#039;&#039;6. Output and Refinement:&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
*Once the image is generated, you can view and save it. If it&#039;s not quite what you wanted, you might adjust your prompt or use different settings and try again.&lt;br /&gt;
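&lt;br /&gt;
Trying again usually means changing the prompt or the sampling settings. The sketch below illustrates a few common knobs in diffusers; the specific values are arbitrary starting points, not recommendations.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import torch&lt;br /&gt;
from diffusers import StableDiffusionPipeline&lt;br /&gt;
&lt;br /&gt;
pipe = StableDiffusionPipeline.from_pretrained(&lt;br /&gt;
    &#039;stabilityai/stable-diffusion-2-1&#039;, torch_dtype=torch.float16&lt;br /&gt;
).to(&#039;cuda&#039;)&lt;br /&gt;
&lt;br /&gt;
# Fixing the seed makes runs repeatable, so you can see what each setting changes.&lt;br /&gt;
generator = torch.Generator(&#039;cuda&#039;).manual_seed(42)&lt;br /&gt;
&lt;br /&gt;
image = pipe(&lt;br /&gt;
    &#039;a medieval knight made of polished chrome, dramatic lighting&#039;,&lt;br /&gt;
    negative_prompt=&#039;blurry, low quality&#039;,  # things to steer away from&lt;br /&gt;
    num_inference_steps=30,                 # more steps: slower, often cleaner&lt;br /&gt;
    guidance_scale=7.5,                     # how strongly to follow the prompt&lt;br /&gt;
    generator=generator,&lt;br /&gt;
).images[0]&lt;br /&gt;
image.save(&#039;robot_knight_v2.png&#039;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;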
&lt;br /&gt;
====&#039;&#039;&#039;7. Consider Legal and Ethical Implications:&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
*Be mindful of copyright and ethical considerations, especially when generating images for public use or commercial purposes.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tools and Platforms:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
There are various platforms and interfaces available that make using Stable Diffusion easier, including web interfaces where you can simply enter your prompt and receive an image, or more hands-on approaches where you control every aspect via scripting.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example Using a Platform or Tool:&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
#&#039;&#039;&#039;Find a Platform&#039;&#039;&#039;: Websites and applications exist that offer user-friendly interfaces for Stable Diffusion.&lt;br /&gt;
#&#039;&#039;&#039;Enter Your Prompt&#039;&#039;&#039;: Simply type in what you want the image to depict.&lt;br /&gt;
#&#039;&#039;&#039;Generate and Download&#039;&#039;&#039;: Click to generate the image, then view and download the result.&lt;br /&gt;
&lt;br /&gt;
In summary, making a Stable Diffusion image involves setting up the right environment, obtaining and loading the model, crafting a descriptive text prompt, and then using that prompt to generate an image. The exact steps can vary based on your technical background and the tools you choose to use.&lt;br /&gt;
&lt;br /&gt;
=[[Contributing|Contribute]]=&lt;br /&gt;
Stable Diffusion is a very complex topic that took many people to develop, and likewise this website cannot be built by one person alone.  Gathering and recording all relevant information on such a rapidly growing subject matter requires many contributors to do it properly.  [[Contributing|Click here]] to learn how you can contribute to the growth of this website!&amp;lt;br /&amp;gt;&lt;br /&gt;
[[File:SDW- Stable Diffusion Wiki.png|center|Stable Diffusion Wiki]]&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Main_Page&amp;diff=269</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Main_Page&amp;diff=269"/>
		<updated>2024-01-05T03:46:44Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Welcome to &#039;&#039;&#039;Stable Diffusion Wiki&#039;&#039;&#039;! ==&lt;br /&gt;
[[File:ProtovisionXLHighFidelity3D beta0411Bakedvae RAW analog shot of a 1girl floating in a multi-colored liquid neural network organism, more details, hyper detailed, t.jpeg|right|frameless|503x503px|Photo generated via SD protovisionXL modele by user kashyyyk]]Hello and welcome to &#039;&#039;&#039;&amp;lt;u&amp;gt;Stable Diffusion Wiki&amp;lt;/u&amp;gt;&#039;&#039;&#039;! We are excited to share information, collaborate, and build a knowledge base on Stable Diffusion and image generation. Whether you&#039;re a seasoned creator in the world of machine learning and text-to-image generators, or newly discovering it all for the first time, we&#039;re here to serve!&amp;lt;div style=&amp;quot;float:left; width:15%;&amp;quot;&amp;gt;&lt;br /&gt;
== Join Us ==&lt;br /&gt;
[http://stablediffusionwiki.com/index.php?title=Special:CreateAccount&amp;amp;returnto=Special%3ACreateAccoun Join the Stable Diffusion community! Register now to explore cutting-edge information, engage with experts, and contribute to the ever-growing knowledge base on this innovative software.]&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;float:right; width:80%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==What is Stable Diffusion?==&lt;br /&gt;
[[Stable Diffusion]] is a pioneering text-to-image model developed by [[Stability AI]], allowing the conversion of textual descriptions into corresponding visual imagery. In other words, you tell it what you want, and it will create an image or a group of images that fit your description.  One of the biggest distinguishing features about Stable Diffusion is that it is completely open source, unlike many other models.  It can be hosted completely on your PC offline.  This offers some confidentiality, customization, and agency, It also has a thriving community, that are constantly modifying and iterating different models.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Breaking News: ==&lt;br /&gt;
- Revolutionary [[Stable Diffusion Video]] Model Ushers in New Era of Text-to-Video Generation&lt;br /&gt;
&lt;br /&gt;
- [[Video-to-video]] is another cutting edge feature of Stable Diffusion&lt;br /&gt;
&lt;br /&gt;
==What is This Page About?==&lt;br /&gt;
[[File:Automatic1111 gui.png|thumb|Automatic1111 gui]]&lt;br /&gt;
[[File:Cake by emb.jpg|left|thumb|An image generated with Stable Diffusion. Image generated by user emb on Civitai.]]&lt;br /&gt;
As a fellow hobbyist myself, I found the overall subject to have a lot of terms and processes that were very foreign to me.  I found myself doing a lot of research to figure out what to do. I was going to various sources and collecting articles, images and just taking notes on tips, and tricks.  Since stable diffusion art is still new to the world, the process and technology has not been perfected yet. As my notes and materials grew, I realized there must be others out there like myself.  It just made more sense to store this information online so, hopefully it can help others who are experiencing the same thing.  &lt;br /&gt;
&lt;br /&gt;
This page serves as a space for us to gather and organize information about Stable Diffusion. If you are asking yourself, what&#039;s a [[LoRA]]? What is [[ControlNet]]? We&#039;re here to help with that.  Whether it&#039;s a hobby, a professional field, a community project, or anything else, this is the place to explore, learn, and contribute your knowledge.  The goal for this site is to create a [http://stablediffusionwiki.com/index.php?title=Special:CreateAccount&amp;amp;returnto=Contributing community of users] that can share their information all related to image generation via Stable Diffusion.  This will walk users through all aspects of the process of creating and editing images, and provide technical information related to it.  There are a lot of individual pieces to this entire process and each component has their own complex manuals, and processes.  As a user of the tool, it can be overwhelming at first to try to piece everything together. &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Getting Started==&lt;br /&gt;
[[File:robotknight.jpeg|left|thumb|An image generated with Stable Diffusion]]&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;&#039;What is [[Stable Diffusion]]?&#039;&#039;&#039; ===&lt;br /&gt;
First you will need to build an understanding of what Stable Diffusion is and what [[Text-to-Image AI]].  For beginners, you don&#039;t need to worry too much about the details of the how.  However, it will help you as you learn more about it for the sake of improving your images.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;&#039;[[Required Software for Stable Diffusion|Required Software]]&#039;&#039;&#039;===&lt;br /&gt;
Although Stable Diffusion is completely free, it will require some effort on the front end to assemble everything together. It helps if you understand a little bit of Python but it isn&#039;t an absolute requirement. This will include acquiring Python, Git, and some sort of GUI.  Many people prefer using Automatic1111. &lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;&#039;[[Text-to-image|Text-to-Image]]&#039;&#039;&#039;===&lt;br /&gt;
If you have your stable diffusion software all set up, you will want to know how to use stable diffusion and begin generating your image.  Experimenting with different prompts is a major part of the image generation process.  The most basic approach initially will be to begin with txt2img, however you will quickly see that that is just scratching the surface and want to venture into other techniques such as img2img.&lt;br /&gt;
&lt;br /&gt;
===&#039;&#039;&#039;[[Models]]&#039;&#039;&#039;===&lt;br /&gt;
Although stable diffusion released the base model, there have been many more pruned models released in recent months, and other models such as a [[lora]] and embeddings&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Creating an Image===&lt;br /&gt;
To create an image using Stable Diffusion, you&#039;ll typically follow a process involving setting up the necessary software environment, obtaining the model, and then using a specific prompt to generate your image. Here&#039;s a more detailed breakdown:[[File:EnvironmnetSetup.png|left|thumb|194x194px]]&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;1. Environment Setup:&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Hardware Requirements&#039;&#039;&#039;: A capable GPU is highly recommended due to the computational demands of the model.&lt;br /&gt;
*&#039;&#039;&#039;Software Requirements&#039;&#039;&#039;: You&#039;ll need Python installed on your system, along with package managers like pip to install necessary libraries.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;2. Install Dependencies:&#039;&#039;&#039;==== &lt;br /&gt;
&lt;br /&gt;
*Install necessary Python libraries, typically including torch (a deep learning framework), transformers, and other dependencies specified in the Stable Diffusion documentation.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;3. Obtain the Model:&#039;&#039;&#039; ==== &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Download Stable Diffusion&#039;&#039;&#039;: Access the model from a reputable source or platform offering the   pre-trained Stable Diffusion model.&lt;br /&gt;
*&#039;&#039;&#039;Load the Model&#039;&#039;&#039;: Use coding scripts or tools to load the model into your environment.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;4. Prepare Your Prompt:&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
*Decide on a text prompt that describes the image you want to generate. Be as descriptive and specific as possible to guide the model toward your     desired output.&lt;br /&gt;
&lt;br /&gt;
[[File:PromptEngineering.png|alt=Prompt Engineering|center|thumb|PromptEngineering]]&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;5. Image Generation:&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
*Use a script or  tool interface to input your prompt to the model. The model will then     process the prompt and generate an image based on the learned patterns and     correlations in its training data.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;6. Output and Refinement:&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
*Once the image is generated, you can view and save it. If it&#039;s not quite what you wanted, you might adjust your prompt or use different settings and try again.&lt;br /&gt;
&lt;br /&gt;
====&#039;&#039;&#039;7. Consider Legal and Ethical Implications:&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
*Be mindful of copyright and ethical considerations, especially when generating images for public use or commercial purposes.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tools and Platforms:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
There are various platforms and interfaces available that make using Stable Diffusion easier, including web interfaces where you can simply enter your prompt and receive an image, or more hands-on approaches where you control every aspect via scripting.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example Using a Platform or Tool:&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
#&#039;&#039;&#039;Find a Platform&#039;&#039;&#039;: Websites and applications exist that offer user-friendly interfaces for Stable Diffusion.&lt;br /&gt;
#&#039;&#039;&#039;Enter Your Prompt&#039;&#039;&#039;: Simply type in what you want the image to depict.&lt;br /&gt;
#&#039;&#039;&#039;Generate and Download&#039;&#039;&#039;: Click to generate the image, then view and download the result.&lt;br /&gt;
&lt;br /&gt;
In summary, making a Stable Diffusion image involves setting up the right environment, obtaining and loading the model, crafting a descriptive text prompt, and then using that prompt to generate an image. The exact steps can vary based on your technical background and the tools you choose to use.&lt;br /&gt;
&lt;br /&gt;
=[[Contributing|Contribute]]=&lt;br /&gt;
Stable Diffusion is a very complex topic, and it took many people to develop.  Likewise, this website cannot be built by one person alone.  Gathering and recording all the relevant information on such a rapidly growing subject matter requires many people to do it properly.  [[Contributing|Click here]] to learn how you can contribute to the growth of this website!&amp;lt;br /&amp;gt;&lt;br /&gt;
[[File:SDW- Stable Diffusion Wiki.png|center|Stable Diffusion Wiki]]&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Stable_Diffusion_Video&amp;diff=268</id>
		<title>Stable Diffusion Video</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Stable_Diffusion_Video&amp;diff=268"/>
		<updated>2024-01-05T03:44:42Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: Created page with &amp;quot;Date: November 21 2023  In a groundbreaking development, a new latent video diffusion model known as &amp;quot;Stable Video Diffusion&amp;quot; has been introduced, setting a new benchmark in high-resolution text-to-video and image-to-video generation. This innovative model marks a significant leap in the realm of video synthesis, leveraging the strengths of latent diffusion models previously used for 2D image creation.  The Stable Video Diffusion model represents a pivotal advancement, a...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Date: November 21 2023&lt;br /&gt;
&lt;br /&gt;
In a groundbreaking development, a new latent video diffusion model known as &amp;quot;Stable Video Diffusion&amp;quot; has been introduced, setting a new benchmark in high-resolution text-to-video and image-to-video generation. This innovative model marks a significant leap in the realm of video synthesis, leveraging the strengths of latent diffusion models previously used for 2D image creation.&lt;br /&gt;
&lt;br /&gt;
The Stable Video Diffusion model represents a pivotal advancement, as it integrates temporal layers into existing models, fine-tuned on select high-quality video datasets. This approach addresses the challenges faced by the industry, where a variety of training methods have resulted in a lack of consensus on a standardized strategy for video data curation.&lt;br /&gt;
&lt;br /&gt;
[[File:StableVideoDiffusion.gif]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The paper detailing this breakthrough highlights three crucial stages for the successful training of video Latent Diffusion Models (LDMs): text-to-image pretraining, video pretraining, and high-quality video finetuning. These stages collectively enhance the model&#039;s ability to generate more accurate and detailed videos from textual or image inputs.&lt;br /&gt;
&lt;br /&gt;
The introduction of Stable Video Diffusion promises a transformative impact on video content creation, offering unparalleled capabilities in generating high-quality videos from simple text or image inputs. This development is not just a step but a giant leap forward in the field of video synthesis and artificial intelligence.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The full details of this innovative model can be found in the recently published paper, which delves into the intricate mechanics and training methodologies of Stable Video Diffusion.&lt;br /&gt;
&lt;br /&gt;
Stay tuned for further updates on this revolutionary technology that is set to redefine the boundaries of video generation.&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Video-to-video&amp;diff=267</id>
		<title>Video-to-video</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Video-to-video&amp;diff=267"/>
		<updated>2024-01-05T03:41:32Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Video-to-video (V2V) synthesis with stable diffusion, also known as movie-to-movie (m2m), refers to a process where an AI model takes an input video and generates a corresponding output video that transforms the original content in a coherent and stable manner. It maintains temporal coherence, meaning the changes from frame to frame in the output are smooth and consistent, avoiding abrupt or unrealistic transitions. This process is often powered by a technology called &amp;quot;stable diffusion.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Here&#039;s a breakdown of the terms:&lt;br /&gt;
&lt;br /&gt;
*Video-to-video synthesis: This is the process of transforming one video into another. The transformation can be in terms of style, content, or structure. For example, converting a summer scene into a winter scene, or altering the style to resemble a painting.&lt;br /&gt;
&lt;br /&gt;
*Stable diffusion: Stable diffusion is a concept often used in machine learning and computer vision, particularly in the context of image and video generation or editing. It ensures that the changes or transformations applied to consecutive frames are stable and consistent, preventing jittery or erratic behavior. This is crucial for video because sudden changes from frame to frame can be visually disturbing or unrealistic.&lt;br /&gt;
&lt;br /&gt;
When combined, video-to-video synthesis with stable diffusion aims to produce a new video that is a coherent and visually pleasing transformation of the original video. This technology has a wide range of applications, including in the fields of movie production, video games, virtual reality, and more. It can be used for tasks such as altering the weather or time of day in a video scene, changing the appearance or actions of characters, or even creating entirely new content based on a given input video.&lt;br /&gt;
&lt;br /&gt;
To create a video-to-video (v2v) synthesis using the VideoControlNet framework, you can follow these steps:&lt;br /&gt;
&lt;br /&gt;
1. &#039;&#039;&#039;Source Video&#039;&#039;&#039;: Obtain a video you wish to transform. This can be any video, as long as it&#039;s legally acquired and suitable for your project.&lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;&#039;Preparation of Folders&#039;&#039;&#039;: Create two folders on your computer: one named &#039;Input&#039; and another named &#039;Output&#039;. These will be used to store the original frames and the transformed frames, respectively.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;&#039;Convert Video to Frames&#039;&#039;&#039;: Use a video editing tool or a converter to split your video into individual frames. Save these frames in JPEG format in the &#039;Input&#039; folder. Tools like Adobe Media Encoder or any other video-to-image sequence converter will work.&lt;br /&gt;
&lt;br /&gt;
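If you prefer to script this step instead of using a dedicated converter, here is a rough sketch using Python and the OpenCV library (an assumption of this example, not a requirement of the workflow). It writes numbered JPEG frames into the &#039;Input&#039; folder; the file name &#039;source.mp4&#039; is just a placeholder.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import cv2  # pip install opencv-python&lt;br /&gt;
&lt;br /&gt;
os.makedirs(&#039;Input&#039;, exist_ok=True)&lt;br /&gt;
cap = cv2.VideoCapture(&#039;source.mp4&#039;)  # placeholder source video&lt;br /&gt;
index = 0&lt;br /&gt;
while True:&lt;br /&gt;
    ok, frame = cap.read()&lt;br /&gt;
    if not ok:&lt;br /&gt;
        break  # end of video&lt;br /&gt;
    cv2.imwrite(os.path.join(&#039;Input&#039;, f&#039;frame_{index:05d}.jpg&#039;), frame)&lt;br /&gt;
    index += 1&lt;br /&gt;
cap.release()&lt;br /&gt;
print(index, &#039;frames written&#039;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;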
4. &#039;&#039;&#039;Apply ControlNet&#039;&#039;&#039;: Now, use the ControlNet settings to guide the transformation process:&lt;br /&gt;
   - First Unit: Apply the Tile/Blur with settings as needed, focusing on the importance of ControlNet in the transformation.&lt;br /&gt;
   - Second Unit: Use TemporalNet with similar settings, emphasizing the role of ControlNet.&lt;br /&gt;
   - You might also experiment with additional styles like Softedge or LineArt if desired.&lt;br /&gt;
&lt;br /&gt;
5. &#039;&#039;&#039;Set Parameters&#039;&#039;&#039;: Configure the sampling method (typically Euler a), set the sampling steps (around 20 is common), choose the CFG Scale (usually between 3-4), and set the Denoising strength to 1. These parameters control the details and quality of the transformation.&lt;br /&gt;
&lt;br /&gt;
6. &#039;&#039;&#039;Batch Processing&#039;&#039;&#039;: Use an img2img batch process to apply the transformation. Specify the &#039;Input&#039; directory with the frames, the &#039;Output&#039; directory for the transformed frames, and initiate the generation. It&#039;s wise to test a few frames first to ensure that the ControlNet is working as expected.&lt;br /&gt;
&lt;br /&gt;
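Most people run this batch step from the img2img tab of a web UI, but as a rough illustration of the underlying idea, here is a minimal &#039;&#039;diffusers&#039;&#039; loop in Python. It omits the ControlNet units described above and uses an example checkpoint and placeholder style prompt, so treat it as a sketch rather than a drop-in replacement for the full workflow.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import torch&lt;br /&gt;
from PIL import Image&lt;br /&gt;
from diffusers import StableDiffusionImg2ImgPipeline&lt;br /&gt;
&lt;br /&gt;
# example checkpoint; any img2img-capable Stable Diffusion model works similarly&lt;br /&gt;
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(&lt;br /&gt;
    &#039;runwayml/stable-diffusion-v1-5&#039;, torch_dtype=torch.float16&lt;br /&gt;
).to(&#039;cuda&#039;)&lt;br /&gt;
&lt;br /&gt;
prompt = &#039;clean watercolor illustration style&#039;  # placeholder style prompt&lt;br /&gt;
os.makedirs(&#039;Output&#039;, exist_ok=True)&lt;br /&gt;
&lt;br /&gt;
for name in sorted(os.listdir(&#039;Input&#039;)):&lt;br /&gt;
    frame = Image.open(os.path.join(&#039;Input&#039;, name)).convert(&#039;RGB&#039;)&lt;br /&gt;
    w, h = frame.size&lt;br /&gt;
    frame = frame.resize((w - w % 8, h - h % 8))  # dimensions must be divisible by 8&lt;br /&gt;
    # without ControlNet, a lower strength keeps more of the source frame&lt;br /&gt;
    result = pipe(prompt=prompt, image=frame, strength=0.5, guidance_scale=4.0)&lt;br /&gt;
    result.images[0].save(os.path.join(&#039;Output&#039;, name))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;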
7. &#039;&#039;&#039;Recompile Frames into Video&#039;&#039;&#039;: Once all frames are transformed and saved in the &#039;Output&#039; folder, use a tool like Adobe Media Encoder to convert them back into a single video file, typically in H.264 format for good compatibility and quality.&lt;br /&gt;
&lt;br /&gt;
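Again, if you would rather script the reassembly, a rough OpenCV sketch might look like the following. The mp4v codec is used here only because it is widely available without extra setup; an editor such as Adobe Media Encoder can still give you H.264.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
import cv2&lt;br /&gt;
&lt;br /&gt;
frames = sorted(os.listdir(&#039;Output&#039;))&lt;br /&gt;
first = cv2.imread(os.path.join(&#039;Output&#039;, frames[0]))&lt;br /&gt;
height, width = first.shape[:2]&lt;br /&gt;
&lt;br /&gt;
# 30 fps is an assumption; match the frame rate of your source video&lt;br /&gt;
writer = cv2.VideoWriter(&#039;result.mp4&#039;, cv2.VideoWriter_fourcc(*&#039;mp4v&#039;), 30, (width, height))&lt;br /&gt;
for name in frames:&lt;br /&gt;
    writer.write(cv2.imread(os.path.join(&#039;Output&#039;, name)))&lt;br /&gt;
writer.release()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;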
8. &#039;&#039;&#039;Enhance Frame Rate&#039;&#039;&#039;: If the resulting video is lower in frames per second (fps) than desired, consider using software like Flowframes to interpolate and increase the fps to a smoother rate, such as 60 fps.&lt;br /&gt;
&lt;br /&gt;
9. &#039;&#039;&#039;Optional Detailing&#039;&#039;&#039;: For enhanced details, especially in facial features, you can use a tool like ADetailer. Note that while this will increase the visual quality, it may also substantially increase the processing time.&lt;br /&gt;
&lt;br /&gt;
By following these steps, you can transform an existing video into a new one with different styles or content, utilizing the video-to-video synthesis capabilities of the VideoControlNet framework. Always check and adjust the settings to fit the specific needs of your project for optimal results.&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Video-to-video&amp;diff=266</id>
		<title>Video-to-video</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Video-to-video&amp;diff=266"/>
		<updated>2024-01-05T03:41:03Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Video-to-video synthesis with stable diffusion refers to a process where an AI model takes an input video and generates a corresponding output video that transforms the original content in a coherent and stable manner. It maintains temporal coherence, meaning the changes from frame to frame in the output are smooth and consistent, avoiding abrupt or unrealistic transitions. This process is often powered by a technology called &amp;quot;stable diffusion.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Here&#039;s a breakdown of the terms:&lt;br /&gt;
&lt;br /&gt;
*Video-to-video synthesis: This is the process of transforming one video into another. The transformation can be in terms of style, content, or structure. For example, converting a summer scene into a winter scene, or altering the style to resemble a painting.&lt;br /&gt;
&lt;br /&gt;
*Stable diffusion: Stable diffusion is a concept often used in machine learning and computer vision, particularly in the context of image and video generation or editing. It ensures that the changes or transformations applied to consecutive frames are stable and consistent, preventing jittery or erratic behavior. This is crucial for video because sudden changes from frame to frame can be visually disturbing or unrealistic.&lt;br /&gt;
&lt;br /&gt;
When combined, video-to-video synthesis with stable diffusion aims to produce a new video that is a coherent and visually pleasing transformation of the original video. This technology has a wide range of applications, including in the fields of movie production, video games, virtual reality, and more. It can be used for tasks such as altering the weather or time of day in a video scene, changing the appearance or actions of characters, or even creating entirely new content based on a given input video.&lt;br /&gt;
&lt;br /&gt;
To create a video-to-video (v2v) synthesis using the VideoControlNet framework, you can follow these steps:&lt;br /&gt;
&lt;br /&gt;
1. &#039;&#039;&#039;Source Video&#039;&#039;&#039;: Obtain a video you wish to transform. This can be any video, as long as it&#039;s legally acquired and suitable for your project.&lt;br /&gt;
&lt;br /&gt;
2. Preparation of Folders: Create two folders on your computer: one named &#039;Input&#039; and another named &#039;Output&#039;. These will be used to store the original frames and the transformed frames, respectively.&lt;br /&gt;
&lt;br /&gt;
3. Convert Video to Frames: Use a video editing tool or a converter to split your video into individual frames. Save these frames in JPEG format in the &#039;Input&#039; folder. Tools like Adobe Media Encoder or any other video-to-image sequence converter will work.&lt;br /&gt;
&lt;br /&gt;
4. &#039;&#039;&#039;Apply ControlNet&#039;&#039;&#039;: Now, use the ControlNet settings to guide the transformation process:&lt;br /&gt;
   - First Unit: Apply the Tile/Blur with settings as needed, focusing on the importance of ControlNet in the transformation.&lt;br /&gt;
   - Second Unit: Use TemporalNet with similar settings, emphasizing the role of ControlNet.&lt;br /&gt;
   - You might also experiment with additional styles like Softedge or LineArt if desired.&lt;br /&gt;
&lt;br /&gt;
5. &#039;&#039;&#039;Set Parameters&#039;&#039;&#039;: Configure the sampling method (typically Euler a), set the sampling steps (around 20 is common), choose the CFG Scale (usually between 3-4), and set the Denoising strength to 1. These parameters control the details and quality of the transformation.&lt;br /&gt;
&lt;br /&gt;
6. Batch Processing: Use an img2img batch process to apply the transformation. Specify the &#039;Input&#039; directory with the frames, the &#039;Output&#039; directory for the transformed frames, and initiate the generation. It&#039;s wise to test a few frames first to ensure that the ControlNet is working as expected.&lt;br /&gt;
&lt;br /&gt;
7. Recompile Frames into Video: Once all frames are transformed and saved in the &#039;Output&#039; folder, use a tool like Adobe Media Encoder to convert them back into a single video file, typically in H.264 format for good compatibility and quality.&lt;br /&gt;
&lt;br /&gt;
8. Enhance Frame Rate: If the resulting video is lower in frames per second (fps) than desired, consider using software like Flowframes to interpolate and increase the fps to a smoother rate, such as 60 fps.&lt;br /&gt;
&lt;br /&gt;
9. Optional Detailing: For enhanced details, especially in facial features, you can use a tool like ADetailer. Note that while this will increase the visual quality, it may also substantially increase the processing time.&lt;br /&gt;
&lt;br /&gt;
By following these steps, you can transform an existing video into a new one with different styles or content, utilizing the video-to-video synthesis capabilities of the VideoControlNet framework. Always ensure to check and adjust the settings as per the specific needs of your project for optimal results.&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Video-to-video&amp;diff=265</id>
		<title>Video-to-video</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Video-to-video&amp;diff=265"/>
		<updated>2024-01-05T03:32:50Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: Created page with &amp;quot;Video-to-video synthesis with stable diffusion refers to a process where an AI model takes an input video and generates a corresponding output video that transforms the original content in a coherent and stable manner. It maintains temporal coherence, meaning the changes from frame to frame in the output are smooth and consistent, avoiding abrupt or unrealistic transitions. This process is often powered by a technology called &amp;quot;stable diffusion.&amp;quot;  Here&amp;#039;s a breakdown of th...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Video-to-video synthesis with stable diffusion refers to a process where an AI model takes an input video and generates a corresponding output video that transforms the original content in a coherent and stable manner. It maintains temporal coherence, meaning the changes from frame to frame in the output are smooth and consistent, avoiding abrupt or unrealistic transitions. This process is often powered by a technology called &amp;quot;stable diffusion.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Here&#039;s a breakdown of the terms:&lt;br /&gt;
&lt;br /&gt;
*Video-to-video synthesis: This is the process of transforming one video into another. The transformation can be in terms of style, content, or structure. For example, converting a summer scene into a winter scene, or altering the style to resemble a painting.&lt;br /&gt;
&lt;br /&gt;
*Stable diffusion: Stable diffusion is a concept often used in machine learning and computer vision, particularly in the context of image and video generation or editing. It ensures that the changes or transformations applied to consecutive frames are stable and consistent, preventing jittery or erratic behavior. This is crucial for video because sudden changes from frame to frame can be visually disturbing or unrealistic.&lt;br /&gt;
&lt;br /&gt;
When combined, video-to-video synthesis with stable diffusion aims to produce a new video that is a coherent and visually pleasing transformation of the original video. This technology has a wide range of applications, including in the fields of movie production, video games, virtual reality, and more. It can be used for tasks such as altering the weather or time of day in a video scene, changing the appearance or actions of characters, or even creating entirely new content based on a given input video.&lt;br /&gt;
&lt;br /&gt;
To create a video-to-video (v2v) synthesis using the VideoControlNet framework, you can follow these steps:&lt;br /&gt;
&lt;br /&gt;
1. &#039;&#039;&#039;Source Video&#039;&#039;&#039;: Obtain a video you wish to transform. This can be any video, as long as it&#039;s legally acquired and suitable for your project.&lt;br /&gt;
&lt;br /&gt;
2. Preparation of Folders: Create two folders on your computer: one named &#039;Input&#039; and another named &#039;Output&#039;. These will be used to store the original frames and the transformed frames, respectively.&lt;br /&gt;
&lt;br /&gt;
3. Convert Video to Frames: Use a video editing tool or a converter to split your video into individual frames. Save these frames in JPEG format in the &#039;Input&#039; folder. Tools like Adobe Media Encoder or any other video-to-image sequence converter will work.&lt;br /&gt;
&lt;br /&gt;
4. &#039;&#039;&#039;Apply ControlNet&#039;&#039;&#039;: Now, use the ControlNet settings to guide the transformation process:&lt;br /&gt;
   - &#039;&#039;&#039;First Unit&#039;&#039;&#039;: Apply the Tile/Blur with settings as needed, focusing on the importance of ControlNet in the transformation.&lt;br /&gt;
   - &#039;&#039;&#039;Second Unit&#039;&#039;&#039;: Use TemporalNet with similar settings, emphasizing the role of ControlNet.&lt;br /&gt;
   - You might also experiment with additional styles like Softedge or LineArt if desired.&lt;br /&gt;
&lt;br /&gt;
5. &#039;&#039;&#039;Set Parameters&#039;&#039;&#039;: Configure the sampling method (typically Euler a), set the sampling steps (around 20 is common), choose the CFG Scale (usually between 3-4), and set the Denoising strength to 1. These parameters control the details and quality of the transformation.&lt;br /&gt;
&lt;br /&gt;
6. Batch Processing: Use an img2img batch process to apply the transformation. Specify the &#039;Input&#039; directory with the frames, the &#039;Output&#039; directory for the transformed frames, and initiate the generation. It&#039;s wise to test a few frames first to ensure that the ControlNet is working as expected.&lt;br /&gt;
&lt;br /&gt;
7. Recompile Frames into Video: Once all frames are transformed and saved in the &#039;Output&#039; folder, use a tool like Adobe Media Encoder to convert them back into a single video file, typically in H.264 format for good compatibility and quality.&lt;br /&gt;
&lt;br /&gt;
8. Enhance Frame Rate: If the resulting video is lower in frames per second (fps) than desired, consider using software like Flowframes to interpolate and increase the fps to a smoother rate, such as 60 fps.&lt;br /&gt;
&lt;br /&gt;
9. Optional Detailing: For enhanced details, especially in facial features, you can use a tool like ADetailer. Note that while this will increase the visual quality, it may also substantially increase the processing time.&lt;br /&gt;
&lt;br /&gt;
By following these steps, you can transform an existing video into a new one with different styles or content, utilizing the video-to-video synthesis capabilities of the VideoControlNet framework. Always ensure to check and adjust the settings as per the specific needs of your project for optimal results.&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Main_Page&amp;diff=264</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Main_Page&amp;diff=264"/>
		<updated>2024-01-03T19:36:28Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Welcome to &#039;&#039;&#039;Stable Diffusion Wiki&#039;&#039;&#039;! ==&lt;br /&gt;
[[File:ProtovisionXLHighFidelity3D beta0411Bakedvae RAW analog shot of a 1girl floating in a multi-colored liquid neural network organism, more details, hyper detailed, t.jpeg|right|frameless|503x503px|Photo generated via SD protovisionXL modele by user kashyyyk]]Hello and welcome to &#039;&#039;&#039;&amp;lt;u&amp;gt;Stable Diffusion Wiki&amp;lt;/u&amp;gt;&#039;&#039;&#039;! We are excited to share information, collaborate, and build a knowledge base on Stable Diffusion and image generation. Whether you&#039;re a seasoned creator in the world of machine learning and text-to-image generators, or newly discovering it all for the first time, we&#039;re here to serve!&amp;lt;div style=&amp;quot;float:left; width:15%;&amp;quot;&amp;gt;&lt;br /&gt;
== Join Us ==&lt;br /&gt;
[http://stablediffusionwiki.com/index.php?title=Special:CreateAccount&amp;amp;returnto=Special%3ACreateAccoun Join the Stable Diffusion community! Register now to explore cutting-edge information, engage with experts, and contribute to the ever-growing knowledge base on this innovative software.]&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;float:right; width:80%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==What is Stable Diffusion?==&lt;br /&gt;
[[Stable Diffusion]] is a pioneering text-to-image model developed by [[Stability AI]], allowing the conversion of textual descriptions into corresponding visual imagery. In other words, you tell it what you want, and it will create an image or a group of images that fit your description.  One of the biggest distinguishing features of Stable Diffusion is that it is completely open source, unlike many other models.  It can be hosted completely on your PC offline.  This offers some confidentiality, customization, and agency.  It also has a thriving community that is constantly modifying and iterating on different models.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Breaking News: Revolutionary &amp;quot;Stable Video Diffusion&amp;quot; Model Ushers in New Era of Text-to-Video Generation ==&lt;br /&gt;
Date: November 21 2023&lt;br /&gt;
&lt;br /&gt;
In a groundbreaking development, a new latent video diffusion model known as &amp;quot;Stable Video Diffusion&amp;quot; has been introduced, setting a new benchmark in high-resolution text-to-video and image-to-video generation. This innovative model marks a significant leap in the realm of video synthesis, leveraging the strengths of latent diffusion models previously used for 2D image creation.&lt;br /&gt;
&lt;br /&gt;
The Stable Video Diffusion model represents a pivotal advancement, as it integrates temporal layers into existing models, fine-tuned on select high-quality video datasets. This approach addresses the challenges faced by the industry, where a variety of training methods have resulted in a lack of consensus on a standardized strategy for video data curation.&lt;br /&gt;
&lt;br /&gt;
[[File:StableVideoDiffusion.gif]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The paper detailing this breakthrough highlights three crucial stages for the successful training of video Latent Diffusion Models (LDMs): text-to-image pretraining, video pretraining, and high-quality video finetuning. These stages collectively enhance the model&#039;s ability to generate more accurate and detailed videos from textual or image inputs.&lt;br /&gt;
&lt;br /&gt;
The introduction of Stable Video Diffusion promises a transformative impact on video content creation, offering unparalleled capabilities in generating high-quality videos from simple text or image inputs. This development is not just a step but a giant leap forward in the field of video synthesis and artificial intelligence.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The full details of this innovative model can be found in the recently published paper, which delves into the intricate mechanics and training methodologies of Stable Video Diffusion.&lt;br /&gt;
&lt;br /&gt;
Stay tuned for further updates on this revolutionary technology that is set to redefine the boundaries of video generation.&lt;br /&gt;
&lt;br /&gt;
== What is This Page About? ==&lt;br /&gt;
[[File:Automatic1111 gui.png|thumb|Automatic1111 gui]]&lt;br /&gt;
[[File:Cake by emb.jpg|left|thumb|An image generated with Stable Diffusion. Image generated by user emb on Civitai.]]&lt;br /&gt;
As a hobbyist myself, I found the overall subject to have a lot of terms and processes that were very foreign to me.  I found myself doing a lot of research to figure out what to do, going to various sources, collecting articles and images, and taking notes on tips and tricks.  Since Stable Diffusion art is still new to the world, the process and technology have not been perfected yet. As my notes and materials grew, I realized there must be others out there like myself.  It just made more sense to store this information online so that, hopefully, it can help others who are experiencing the same thing.  &lt;br /&gt;
&lt;br /&gt;
This page serves as a space for us to gather and organize information about Stable Diffusion. If you are asking yourself, what&#039;s a [[LoRA]]? What is [[ControlNet]]? We&#039;re here to help with that.  Whether it&#039;s a hobby, a professional field, a community project, or anything else, this is the place to explore, learn, and contribute your knowledge.  The goal for this site is to create a [http://stablediffusionwiki.com/index.php?title=Special:CreateAccount&amp;amp;returnto=Contributing community of users] that can share information related to image generation via Stable Diffusion.  This will walk users through all aspects of the process of creating and editing images, and provide technical information related to it.  There are a lot of individual pieces to this entire process, and each component has its own complex manuals and processes.  As a user of the tool, it can be overwhelming at first to try to piece everything together. &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Getting Started ==&lt;br /&gt;
[[File:robotknight.jpeg|left|thumb|An image generated with Stable Diffusion]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;What is [[Stable Diffusion]]?&#039;&#039;&#039; ===&lt;br /&gt;
First you will need to build an understanding of what Stable Diffusion is and what [[Text-to-Image AI]] means.  As a beginner, you don&#039;t need to worry too much about the details of how it all works, but building that understanding will help later on when you want to improve your images.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;[[Required Software for Stable Diffusion|Required Software]]&#039;&#039;&#039; ===&lt;br /&gt;
Although Stable Diffusion is completely free, it takes some effort up front to assemble all the pieces. Understanding a little Python helps, but it isn&#039;t an absolute requirement. Setup involves installing Python, Git, and some sort of GUI.  Many people prefer using Automatic1111. &lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;[[Text-to-image|Text-to-Image]]&#039;&#039;&#039; ===&lt;br /&gt;
Once you have your Stable Diffusion software set up, you will want to know how to use it and begin generating images.  Experimenting with different prompts is a major part of the image generation process.  The most basic approach is to begin with txt2img; you will quickly see that this only scratches the surface and want to venture into other techniques such as img2img.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;[[Models]]&#039;&#039;&#039; ===&lt;br /&gt;
Although Stability AI released the base model, many fine-tuned and pruned models have been released in recent months, along with other resources such as [[lora|LoRAs]] and embeddings.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Creating an Image ===&lt;br /&gt;
To create an image using Stable Diffusion, you&#039;ll typically follow a process involving setting up the necessary software environment, obtaining the model, and then using a specific prompt to generate your image. Here&#039;s a more detailed breakdown:[[File:EnvironmnetSetup.png|left|thumb|194x194px]]&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;1. Environment Setup:&#039;&#039;&#039; ====&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Hardware Requirements&#039;&#039;&#039;: A capable GPU is highly recommended due to the computational demands of the model.&lt;br /&gt;
* &#039;&#039;&#039;Software Requirements&#039;&#039;&#039;: You&#039;ll need Python installed on your system, along with package managers like pip to install necessary libraries.&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;2. Install Dependencies:&#039;&#039;&#039; ====&lt;br /&gt;
&lt;br /&gt;
* Install necessary Python libraries, typically including torch (a deep learning framework), transformers, and other dependencies specified in the Stable Diffusion documentation.&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;3. Obtain the Model:&#039;&#039;&#039; ====&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Download Stable Diffusion&#039;&#039;&#039;: Access the model from a reputable source or platform offering the pre-trained Stable Diffusion model.&lt;br /&gt;
* &#039;&#039;&#039;Load the Model&#039;&#039;&#039;: Use coding scripts or tools to load the model into your environment.&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;4. Prepare Your Prompt:&#039;&#039;&#039; ====&lt;br /&gt;
&lt;br /&gt;
* Decide on a text prompt that describes the image you want to generate. Be as descriptive and specific as possible to guide the model toward your desired output.&lt;br /&gt;
&lt;br /&gt;
[[File:PromptEngineering.png|alt=Prompt Engineering|center|thumb|PromptEngineering]]&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;5. Image Generation:&#039;&#039;&#039; ====&lt;br /&gt;
&lt;br /&gt;
* Use a script or tool interface to input your prompt to the model. The model will then process the prompt and generate an image based on the learned patterns and correlations in its training data.&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;6. Output and Refinement:&#039;&#039;&#039; ====&lt;br /&gt;
&lt;br /&gt;
* Once the image is generated, you can view and save it. If it&#039;s not quite what you wanted, you might adjust your prompt or use different settings and try again.&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;7. Consider Legal and Ethical Implications:&#039;&#039;&#039; ====&lt;br /&gt;
&lt;br /&gt;
* Be mindful of copyright and ethical considerations, especially when generating images for public use or commercial purposes.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tools and Platforms:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
There are various platforms and interfaces available that make using Stable Diffusion easier, including web interfaces where you can simply enter your prompt and receive an image, or more hands-on approaches where you control every aspect via scripting.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example Using a Platform or Tool:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
# &#039;&#039;&#039;Find a Platform&#039;&#039;&#039;: Websites and applications exist that offer user-friendly interfaces for Stable Diffusion.&lt;br /&gt;
# &#039;&#039;&#039;Enter Your Prompt&#039;&#039;&#039;: Simply type in what you want the image to depict.&lt;br /&gt;
# &#039;&#039;&#039;Generate and Download&#039;&#039;&#039;: Click to generate the image, then view and download the result.&lt;br /&gt;
&lt;br /&gt;
In summary, making a Stable Diffusion image involves setting up the right environment, obtaining and loading the model, crafting a descriptive text prompt, and then using that prompt to generate an image. The exact steps can vary based on your technical background and the tools you choose to use.&lt;br /&gt;
&lt;br /&gt;
= [[Contributing|Contribute]] =&lt;br /&gt;
Stable Diffusion is a very complex topic, and it took many people to develop.  Likewise, this website cannot be built by one person alone.  Gathering and recording all the relevant information on such a rapidly growing subject matter requires many people to do it properly.  [[Contributing|Click here]] to learn how you can contribute to the growth of this website!&amp;lt;br /&amp;gt;&lt;br /&gt;
[[File:SDW- Stable Diffusion Wiki.png|center|Stable Diffusion Wiki]]&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Prompt_Engineering&amp;diff=263</id>
		<title>Prompt Engineering</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Prompt_Engineering&amp;diff=263"/>
		<updated>2023-12-28T05:40:12Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:P engineering.png|alt=finding inspiration|right|frameless|546x546px]]&lt;br /&gt;
Prompt Engineering refers to the art and science of crafting effective prompts to guide AI models, particularly in natural language processing (NLP) or image generation tasks, to produce the desired output. It&#039;s a critical skill in working with models like GPT-3, BERT, or image generators like Stable Diffusion. &lt;br /&gt;
&lt;br /&gt;
Understanding prompt engineering is important for several reasons, especially as AI and machine learning models become more integral in various fields, from creative arts to technical problem-solving. Here are the key reasons why knowing about prompt engineering is beneficial:&lt;br /&gt;
&lt;br /&gt;
=== 1. Effective Communication with AI: ===&lt;br /&gt;
&lt;br /&gt;
* Precision in Output: Being skilled in prompt engineering allows you to more accurately guide AI to produce the desired results, reducing time and resources spent on trial and error.&lt;br /&gt;
* Understanding AI Responses: It helps you understand why an AI might respond in a certain way and how to adjust your prompts to correct or improve the outputs.&lt;br /&gt;
&lt;br /&gt;
=== 2. Enhancing Creativity and Productivity: ===&lt;br /&gt;
&lt;br /&gt;
* Creative Exploration: In fields like digital art, marketing, or design, prompt engineering can unlock new levels of creativity and novelty in outputs.&lt;br /&gt;
* Efficiency: It can significantly speed up content creation, idea generation, and problem-solving processes.&lt;br /&gt;
&lt;br /&gt;
=== 3. Quality Control and Reliability: ===&lt;br /&gt;
&lt;br /&gt;
* Consistency: Knowing how to craft prompts means you can achieve more consistent results from the AI, important for professional and commercial applications.&lt;br /&gt;
* Avoiding Errors: It helps prevent misunderstandings or unintended consequences that might arise from poorly constructed prompts, ensuring more reliable and ethical outputs.&lt;br /&gt;
&lt;br /&gt;
=== 4. Tailoring Solutions: ===&lt;br /&gt;
&lt;br /&gt;
* Customization: Different scenarios and tasks require different types of AI interactions. Prompt engineering allows you to customize how you use AI to fit specific needs or contexts.&lt;br /&gt;
* Targeted Results: Whether you&#039;re looking for a specific style in AI-generated art or a particular tone in written content, prompt engineering helps you target those results more precisely.&lt;br /&gt;
&lt;br /&gt;
=== 5. Understanding and Mitigating Bias: ===&lt;br /&gt;
&lt;br /&gt;
* Bias Awareness: Prompt engineering can expose the biases inherent in AI models, helping users understand and possibly mitigate these biases in the outputs.&lt;br /&gt;
* Ethical Considerations: It&#039;s a step towards more ethical use of AI, as understanding prompts can help avoid generating harmful or biased content.&lt;br /&gt;
&lt;br /&gt;
=== 6. Keeping Up with AI Advancements: ===&lt;br /&gt;
&lt;br /&gt;
* Adaptability: As AI models evolve, being proficient in prompt engineering ensures you can adapt to and utilize new models effectively.&lt;br /&gt;
* Competitive Edge: In industries increasingly reliant on AI, skills in prompt engineering can provide a competitive edge, ensuring you&#039;re getting the most out of these technologies.&lt;br /&gt;
&lt;br /&gt;
In summary, knowing about prompt engineering is crucial for anyone looking to interact effectively with AI systems, whether for creative, professional, or personal purposes. It enhances the quality, reliability, and relevance of AI-generated outputs and is an essential skill in navigating and leveraging the growing landscape of artificial intelligence.&lt;br /&gt;
&lt;br /&gt;
== Here are some key aspects of prompt engineering: ==&lt;br /&gt;
&lt;br /&gt;
=== Understanding the Model ===&lt;br /&gt;
Knowing how the AI model interprets and responds to different kinds of inputs is crucial. This involves understanding the data it was trained on and its capabilities and limitations.&lt;br /&gt;
&lt;br /&gt;
=== Crafting the Prompt: ===&lt;br /&gt;
This involves creating a text input that is designed to lead the model towards generating the desired output. It may involve specific phrasing, style, or including certain keywords or concepts that the model recognizes.&lt;br /&gt;
&lt;br /&gt;
=== Iterative Refinement: ===&lt;br /&gt;
Often, prompt engineering is an iterative process. You might start with a basic prompt, evaluate the output, and then refine the prompt to improve results. This might involve tweaking words, adding context, or changing the structure of the prompt.&lt;br /&gt;
&lt;br /&gt;
=== Optimization: ===&lt;br /&gt;
In addition to refining prompts for better outputs, there&#039;s also an element of optimization. This can involve making prompts that are more computationally efficient, produce more consistent results, or are more likely to succeed across a variety of similar tasks.&lt;br /&gt;
&lt;br /&gt;
=== Ethical Considerations: ===&lt;br /&gt;
Prompt engineering also involves considering the ethical implications of prompts, especially in avoiding biased, offensive, or harmful outputs.&lt;br /&gt;
&lt;br /&gt;
In essence, prompt engineering is about effectively communicating with AI to harness its capabilities, requiring both creativity and technical understanding of the underlying model. It&#039;s a skill that combines aspects of linguistics, psychology, and computer science.&lt;br /&gt;
&lt;br /&gt;
== Tips and Tricks ==&lt;br /&gt;
&lt;br /&gt;
=== 1. Be Descriptive and Detailed: ===&lt;br /&gt;
&lt;br /&gt;
* Specificity: Include specific details like the setting, subject, style, or mood. For example, &amp;quot;a sunny Paris street in the morning&amp;quot; gives more context than just &amp;quot;city street.&amp;quot;&lt;br /&gt;
* Adjectives: Use adjectives to describe textures, colors, and emotions. Words like &amp;quot;glistening,&amp;quot; &amp;quot;somber,&amp;quot; or &amp;quot;vibrant&amp;quot; can significantly alter the outcome.&lt;br /&gt;
&lt;br /&gt;
=== 2. Understand Style and Artists: ===&lt;br /&gt;
&lt;br /&gt;
* Artistic Influence: Reference well-known art styles or artists for inspiration. For example, &amp;quot;in the style of Van Gogh&amp;quot; or &amp;quot;reminiscent of Art Nouveau.&amp;quot;&lt;br /&gt;
* Era and Genre: Specify if you want the image to reflect a particular historical period or artistic genre.&lt;br /&gt;
&lt;br /&gt;
=== 3. Use Creative Constraints: ===&lt;br /&gt;
&lt;br /&gt;
* Composition: Guide the composition by mentioning specific element placement, like &amp;quot;a cat in the right corner of a room.&amp;quot;&lt;br /&gt;
* Lighting and Perspective: Mention if you want a particular type of lighting (e.g., &amp;quot;backlit,&amp;quot; &amp;quot;dramatic shadows&amp;quot;) or perspective (e.g., &amp;quot;bird&#039;s eye view&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
=== 4. Experiment with Iteration and Variation: ===&lt;br /&gt;
&lt;br /&gt;
* Iteration: Don&#039;t hesitate to refine and rephrase prompts based on the outputs you get.&lt;br /&gt;
* Variations: Try synonyms or alternate descriptions to see how slight changes can lead to different results.&lt;br /&gt;
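&lt;br /&gt;
One practical way to iterate is to hold the random seed fixed while changing only the prompt, so any difference in the output comes from your wording rather than chance. The sketch below uses the &#039;&#039;diffusers&#039;&#039; library and an example checkpoint (both assumptions of this example); GUIs such as Automatic1111 expose the same idea through a seed field.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import torch&lt;br /&gt;
from diffusers import StableDiffusionPipeline&lt;br /&gt;
&lt;br /&gt;
pipe = StableDiffusionPipeline.from_pretrained(&lt;br /&gt;
    &#039;runwayml/stable-diffusion-v1-5&#039;, torch_dtype=torch.float16&lt;br /&gt;
).to(&#039;cuda&#039;)&lt;br /&gt;
&lt;br /&gt;
variations = [&lt;br /&gt;
    &#039;a city street&#039;,&lt;br /&gt;
    &#039;a sunny Paris street in the morning&#039;,&lt;br /&gt;
    &#039;a sunny Paris street in the morning, watercolor, soft light&#039;,&lt;br /&gt;
]&lt;br /&gt;
&lt;br /&gt;
for i, prompt in enumerate(variations):&lt;br /&gt;
    # same seed each time, so only the prompt wording changes&lt;br /&gt;
    generator = torch.Generator(&#039;cuda&#039;).manual_seed(1234)&lt;br /&gt;
    image = pipe(prompt, generator=generator).images[0]&lt;br /&gt;
    image.save(f&#039;variation_{i}.png&#039;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;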
&lt;br /&gt;
=== 5. Consider the Model&#039;s Limitations and Biases: ===&lt;br /&gt;
&lt;br /&gt;
* Training Data: Understand that the model&#039;s outputs are based on its training data, which might have inherent biases or gaps.&lt;br /&gt;
* Avoiding Undesired Outputs: Be cautious with wording to avoid prompting images that might be unexpected or inappropriate.&lt;br /&gt;
&lt;br /&gt;
=== 6. Leverage Keywords and Syntax: ===&lt;br /&gt;
&lt;br /&gt;
* Keywords: Certain keywords might trigger specific styles or elements due to the model&#039;s training. Experimenting with different terms can yield interesting results.&lt;br /&gt;
* Syntax: The order of words and the way the prompt is structured can influence the outcome. For example, placing the most important elements at the beginning of the prompt might emphasize them in the generated image.&lt;br /&gt;
&lt;br /&gt;
=== 7. Balance Ambiguity and Precision: ===&lt;br /&gt;
&lt;br /&gt;
* Ambiguity: Sometimes being less specific can yield creative and surprising results, especially if you&#039;re exploring ideas.&lt;br /&gt;
* Precision: For more targeted outputs, be as precise and unambiguous as possible.&lt;br /&gt;
&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Prompt_Engineering&amp;diff=262</id>
		<title>Prompt Engineering</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Prompt_Engineering&amp;diff=262"/>
		<updated>2023-12-28T04:56:52Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:P engineering.png|alt=finding inspiration|right|frameless|546x546px]]&lt;br /&gt;
Prompt Engineering refers to the art and science of crafting effective prompts to guide AI models, particularly in natural language processing (NLP) or image generation tasks, to produce the desired output. It&#039;s a critical skill in working with models like GPT-3, BERT, or image generators like Stable Diffusion. Here are some key aspects of prompt engineering:&lt;br /&gt;
&lt;br /&gt;
=== Understanding the Model ===&lt;br /&gt;
Knowing how the AI model interprets and responds to different kinds of inputs is crucial. This involves understanding the data it was trained on and its capabilities and limitations.&lt;br /&gt;
&lt;br /&gt;
=== Crafting the Prompt: ===&lt;br /&gt;
This involves creating a text input that is designed to lead the model towards generating the desired output. It may involve specific phrasing, style, or including certain keywords or concepts that the model recognizes.&lt;br /&gt;
&lt;br /&gt;
=== Iterative Refinement: ===&lt;br /&gt;
Often, prompt engineering is an iterative process. You might start with a basic prompt, evaluate the output, and then refine the prompt to improve results. This might involve tweaking words, adding context, or changing the structure of the prompt.&lt;br /&gt;
&lt;br /&gt;
=== Optimization: ===&lt;br /&gt;
In addition to refining prompts for better outputs, there&#039;s also an element of optimization. This can involve making prompts that are more computationally efficient, produce more consistent results, or are more likely to succeed across a variety of similar tasks.&lt;br /&gt;
&lt;br /&gt;
=== Ethical Considerations: ===&lt;br /&gt;
Prompt engineering also involves considering the ethical implications of prompts, especially in avoiding biased, offensive, or harmful outputs.&lt;br /&gt;
&lt;br /&gt;
In essence, prompt engineering is about effectively communicating with AI to harness its capabilities, requiring both creativity and technical understanding of the underlying model. It&#039;s a skill that combines aspects of linguistics, psychology, and computer science.&lt;br /&gt;
&lt;br /&gt;
== Tips and Tricks ==&lt;br /&gt;
&lt;br /&gt;
=== 1. Be Descriptive and Detailed: ===&lt;br /&gt;
&lt;br /&gt;
* Specificity: Include specific details like the setting, subject, style, or mood. For example, &amp;quot;a sunny Paris street in the morning&amp;quot; gives more context than just &amp;quot;city street.&amp;quot;&lt;br /&gt;
* Adjectives: Use adjectives to describe textures, colors, and emotions. Words like &amp;quot;glistening,&amp;quot; &amp;quot;somber,&amp;quot; or &amp;quot;vibrant&amp;quot; can significantly alter the outcome.&lt;br /&gt;
&lt;br /&gt;
=== 2. Understand Style and Artists: ===&lt;br /&gt;
&lt;br /&gt;
* Artistic Influence: Reference well-known art styles or artists for inspiration. For example, &amp;quot;in the style of Van Gogh&amp;quot; or &amp;quot;reminiscent of Art Nouveau.&amp;quot;&lt;br /&gt;
* Era and Genre: Specify if you want the image to reflect a particular historical period or artistic genre.&lt;br /&gt;
&lt;br /&gt;
=== 3. Use Creative Constraints: ===&lt;br /&gt;
&lt;br /&gt;
* Composition: Guide the composition by mentioning specific elements placement like &amp;quot;a cat on the right corner of a room.&amp;quot;&lt;br /&gt;
* Lighting and Perspective: Mention if you want a particular type of lighting (e.g., &amp;quot;backlit,&amp;quot; &amp;quot;dramatic shadows&amp;quot;) or perspective (e.g., &amp;quot;bird&#039;s eye view&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
=== 4. Experiment with Iteration and Variation: ===&lt;br /&gt;
&lt;br /&gt;
* Iteration: Don&#039;t hesitate to refine and rephrase prompts based on the outputs you get.&lt;br /&gt;
* Variations: Try synonyms or alternate descriptions to see how slight changes can lead to different results.&lt;br /&gt;
&lt;br /&gt;
=== 5. Consider the Model&#039;s Limitations and Biases: ===&lt;br /&gt;
&lt;br /&gt;
* Training Data: Understand that the model&#039;s outputs are based on its training data, which might have inherent biases or gaps.&lt;br /&gt;
* Avoiding Undesired Outputs: Be cautious with wording to avoid prompting images that might be unexpected or inappropriate.&lt;br /&gt;
&lt;br /&gt;
=== 6. Leverage Keywords and Syntax: ===&lt;br /&gt;
&lt;br /&gt;
* Keywords: Certain keywords might trigger specific styles or elements due to the model&#039;s training. Experimenting with different terms can yield interesting results.&lt;br /&gt;
* Syntax: The order of words and the way the prompt is structured can influence the outcome. For example, placing the most important elements at the beginning of the prompt might emphasize them in the generated image.&lt;br /&gt;
&lt;br /&gt;
=== 7. Balance Ambiguity and Precision: ===&lt;br /&gt;
&lt;br /&gt;
* Ambiguity: Sometimes being less specific can yield creative and surprising results, especially if you&#039;re exploring ideas.&lt;br /&gt;
* Precision: For more targeted outputs, be as precise and unambiguous as possible.&lt;br /&gt;
&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Prompt_Engineering&amp;diff=261</id>
		<title>Prompt Engineering</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Prompt_Engineering&amp;diff=261"/>
		<updated>2023-12-28T04:54:42Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:P engineering.png|alt=finding inspiration|right|frameless|546x546px]]&lt;br /&gt;
Prompt Engineering refers to the art and science of crafting effective prompts to guide AI models, particularly in natural language processing (NLP) or image generation tasks, to produce the desired output. It&#039;s a critical skill in working with models like GPT-3, BERT, or image generators like Stable Diffusion. Here are some key aspects of prompt engineering:&lt;br /&gt;
&lt;br /&gt;
# Understanding the Model: Knowing how the AI model interprets and responds to different kinds of inputs is crucial. This involves understanding the data it was trained on and its capabilities and limitations.&lt;br /&gt;
# Crafting the Prompt: This involves creating a text input that is designed to lead the model towards generating the desired output. It may involve specific phrasing, style, or including certain keywords or concepts that the model recognizes.&lt;br /&gt;
# Iterative Refinement: Often, prompt engineering is an iterative process. You might start with a basic prompt, evaluate the output, and then refine the prompt to improve results. This might involve tweaking words, adding context, or changing the structure of the prompt.&lt;br /&gt;
# Optimization: In addition to refining prompts for better outputs, there&#039;s also an element of optimization. This can involve making prompts that are more computationally efficient, produce more consistent results, or are more likely to succeed across a variety of similar tasks.&lt;br /&gt;
# Ethical Considerations: Prompt engineering also involves considering the ethical implications of prompts, especially in avoiding biased, offensive, or harmful outputs.&lt;br /&gt;
&lt;br /&gt;
In essence, prompt engineering is about effectively communicating with AI to harness its capabilities, requiring both creativity and technical understanding of the underlying model. It&#039;s a skill that combines aspects of linguistics, psychology, and computer science.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Be Descriptive and Detailed:&lt;br /&gt;
Specificity: Include specific details like the setting, subject, style, or mood. For example, &amp;quot;a sunny Paris street in the morning&amp;quot; gives more context than just &amp;quot;city street.&amp;quot;&lt;br /&gt;
Adjectives: Use adjectives to describe textures, colors, and emotions. Words like &amp;quot;glistening,&amp;quot; &amp;quot;somber,&amp;quot; or &amp;quot;vibrant&amp;quot; can significantly alter the outcome.&lt;br /&gt;
2. Understand Style and Artists:&lt;br /&gt;
Artistic Influence: Reference well-known art styles or artists for inspiration. For example, &amp;quot;in the style of Van Gogh&amp;quot; or &amp;quot;reminiscent of Art Nouveau.&amp;quot;&lt;br /&gt;
Era and Genre: Specify if you want the image to reflect a particular historical period or artistic genre.&lt;br /&gt;
3. Use Creative Constraints:&lt;br /&gt;
Composition: Guide the composition by mentioning specific elements placement like &amp;quot;a cat on the right corner of a room.&amp;quot;&lt;br /&gt;
Lighting and Perspective: Mention if you want a particular type of lighting (e.g., &amp;quot;backlit,&amp;quot; &amp;quot;dramatic shadows&amp;quot;) or perspective (e.g., &amp;quot;bird&#039;s eye view&amp;quot;).&lt;br /&gt;
4. Experiment with Iteration and Variation:&lt;br /&gt;
Iteration: Don&#039;t hesitate to refine and rephrase prompts based on the outputs you get.&lt;br /&gt;
Variations: Try synonyms or alternate descriptions to see how slight changes can lead to different results.&lt;br /&gt;
5. Consider the Model&#039;s Limitations and Biases:&lt;br /&gt;
Training Data: Understand that the model&#039;s outputs are based on its training data, which might have inherent biases or gaps.&lt;br /&gt;
Avoiding Undesired Outputs: Be cautious with wording to avoid prompting images that might be unexpected or inappropriate.&lt;br /&gt;
6. Leverage Keywords and Syntax:&lt;br /&gt;
Keywords: Certain keywords might trigger specific styles or elements due to the model&#039;s training. Experimenting with different terms can yield interesting results.&lt;br /&gt;
Syntax: The order of words and the way the prompt is structured can influence the outcome. For example, placing the most important elements at the beginning of the prompt might emphasize them in the generated image.&lt;br /&gt;
7. Balance Ambiguity and Precision:&lt;br /&gt;
Ambiguity: Sometimes being less specific can yield creative and surprising results, especially if you&#039;re exploring ideas.&lt;br /&gt;
Precision: For more targeted outputs, be as precise and unambiguous as possible.&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Prompt_Engineering&amp;diff=260</id>
		<title>Prompt Engineering</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Prompt_Engineering&amp;diff=260"/>
		<updated>2023-12-28T04:49:55Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: Created page with &amp;quot;546x546px Prompt Engineering refers to the art and science of crafting effective prompts to guide AI models, particularly in natural language processing (NLP) or image generation tasks, to produce the desired output. It&amp;#039;s a critical skill in working with models like GPT-3, BERT, or image generators like Stable Diffusion. Here are some key aspects of prompt engineering:  # Understanding the Model: Knowing...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:P engineering.png|alt=finding inspiration|right|frameless|546x546px]]&lt;br /&gt;
Prompt Engineering refers to the art and science of crafting effective prompts to guide AI models, particularly in natural language processing (NLP) or image generation tasks, to produce the desired output. It&#039;s a critical skill in working with models like GPT-3, BERT, or image generators like Stable Diffusion. Here are some key aspects of prompt engineering:&lt;br /&gt;
&lt;br /&gt;
# Understanding the Model: Knowing how the AI model interprets and responds to different kinds of inputs is crucial. This involves understanding the data it was trained on and its capabilities and limitations.&lt;br /&gt;
# Crafting the Prompt: This involves creating a text input that is designed to lead the model towards generating the desired output. It may involve specific phrasing, style, or including certain keywords or concepts that the model recognizes.&lt;br /&gt;
# Iterative Refinement: Often, prompt engineering is an iterative process. You might start with a basic prompt, evaluate the output, and then refine the prompt to improve results. This might involve tweaking words, adding context, or changing the structure of the prompt.&lt;br /&gt;
# Optimization: In addition to refining prompts for better outputs, there&#039;s also an element of optimization. This can involve making prompts that are more computationally efficient, produce more consistent results, or are more likely to succeed across a variety of similar tasks.&lt;br /&gt;
# Ethical Considerations: Prompt engineering also involves considering the ethical implications of prompts, especially in avoiding biased, offensive, or harmful outputs.&lt;br /&gt;
&lt;br /&gt;
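To make the refinement loop concrete for image generation, here is a small, hypothetical sketch. It assumes the Hugging Face diffusers library and an already-loaded Stable Diffusion pipeline named pipe; the prompts and file names are only illustrations:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Hypothetical iterative-refinement loop; assumes &amp;quot;pipe&amp;quot; is a loaded&lt;br /&gt;
# diffusers StableDiffusionPipeline already moved to the GPU.&lt;br /&gt;
import torch&lt;br /&gt;
&lt;br /&gt;
prompts = [&lt;br /&gt;
    &amp;quot;a castle&amp;quot;,  # first draft&lt;br /&gt;
    &amp;quot;a medieval castle on a hill at dusk&amp;quot;,  # add context&lt;br /&gt;
    &amp;quot;a medieval castle on a hill at dusk, watercolor, soft light&amp;quot;,  # add style&lt;br /&gt;
]&lt;br /&gt;
&lt;br /&gt;
for i, prompt in enumerate(prompts):&lt;br /&gt;
    # Reusing one seed keeps everything except the prompt constant, which&lt;br /&gt;
    # makes it easier to see what each refinement changed.&lt;br /&gt;
    generator = torch.Generator(&amp;quot;cuda&amp;quot;).manual_seed(42)&lt;br /&gt;
    image = pipe(prompt, generator=generator).images[0]&lt;br /&gt;
    image.save(f&amp;quot;refinement_{i}.png&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;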
In essence, prompt engineering is about effectively communicating with AI to harness its capabilities, requiring both creativity and technical understanding of the underlying model. It&#039;s a skill that combines aspects of linguistics, psychology, and computer science.&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=File:P_engineering.png&amp;diff=259</id>
		<title>File:P engineering.png</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=File:P_engineering.png&amp;diff=259"/>
		<updated>2023-12-28T04:48:20Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;p_engineering&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Prompts&amp;diff=258</id>
		<title>Prompts</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Prompts&amp;diff=258"/>
		<updated>2023-12-28T04:41:35Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Creating good Stable Diffusion prompts is about half of the game in generating a good image.  Here, we&#039;re going to take a look at all of the things it takes to generate a good image with proper [[Prompt Engineering]].&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Main_Page&amp;diff=257</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Main_Page&amp;diff=257"/>
		<updated>2023-12-28T04:40:16Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To create an image using Stable Diffusion, you&#039;ll typically follow a process involving setting up the necessary software environment, obtaining the model, and then using a specific prompt to generate your image. Here&#039;s a more detailed breakdown:&lt;br /&gt;
[[File:EnvironmnetSetup.png|left|thumb|194x194px]]&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;1. Environment Setup:&#039;&#039;&#039; ====&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Hardware Requirements&#039;&#039;&#039;: A capable GPU is highly recommended due to the computational demands of the model.&lt;br /&gt;
* &#039;&#039;&#039;Software Requirements&#039;&#039;&#039;: You&#039;ll need Python installed on your system, along with package managers like pip to install necessary libraries.&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;2. Install Dependencies:&#039;&#039;&#039; ====&lt;br /&gt;
&lt;br /&gt;
* Install necessary Python libraries, typically including torch (a deep learning framework), transformers, and other dependencies specified in the Stable Diffusion documentation.&lt;br /&gt;
&lt;br /&gt;
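Once the libraries are installed, a quick sanity check confirms that PyTorch can see your GPU before you go any further. This is only a minimal sketch and assumes the torch package installed in the previous step:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Minimal environment check; assumes torch was installed via pip.&lt;br /&gt;
import torch&lt;br /&gt;
&lt;br /&gt;
print(&amp;quot;CUDA available:&amp;quot;, torch.cuda.is_available())&lt;br /&gt;
if torch.cuda.is_available():&lt;br /&gt;
    # Name of the first GPU that PyTorch can see.&lt;br /&gt;
    print(&amp;quot;GPU:&amp;quot;, torch.cuda.get_device_name(0))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;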
==== &#039;&#039;&#039;3. Obtain the Model:&#039;&#039;&#039; ====&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Download Stable Diffusion&#039;&#039;&#039;: Access the model from a reputable source or platform offering the pre-trained Stable Diffusion model.&lt;br /&gt;
* &#039;&#039;&#039;Load the Model&#039;&#039;&#039;: Use coding scripts or tools to load the model into your environment.&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;4. Prepare Your Prompt:&#039;&#039;&#039; ====&lt;br /&gt;
&lt;br /&gt;
* Decide on a text prompt that describes the image you want to generate. Be as descriptive and specific as possible to guide the model toward your desired output.&lt;br /&gt;
&lt;br /&gt;
[[File:PromptEngineering.png|alt=Prompt Engineering|center|thumb|PromptEngineering]]&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;5. Image Generation:&#039;&#039;&#039; ====&lt;br /&gt;
&lt;br /&gt;
* Use a script or tool interface to input your prompt to the model. The model will then process the prompt and generate an image based on the learned patterns and correlations in its training data (see the scripted sketch below).&lt;br /&gt;
&lt;br /&gt;
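For the scripted route, a minimal sketch using the Hugging Face diffusers library (one common way to drive Stable Diffusion from Python) might look like the following; the checkpoint name is only an example, so substitute whichever model you obtained in step 3:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Minimal text-to-image sketch using the diffusers library.&lt;br /&gt;
import torch&lt;br /&gt;
from diffusers import StableDiffusionPipeline&lt;br /&gt;
&lt;br /&gt;
# from_pretrained downloads the checkpoint on first use and caches it locally.&lt;br /&gt;
pipe = StableDiffusionPipeline.from_pretrained(&lt;br /&gt;
    &amp;quot;stabilityai/stable-diffusion-2-1&amp;quot;,  # example checkpoint only&lt;br /&gt;
    torch_dtype=torch.float16,&lt;br /&gt;
)&lt;br /&gt;
pipe = pipe.to(&amp;quot;cuda&amp;quot;)  # move the model to the GPU&lt;br /&gt;
&lt;br /&gt;
prompt = &amp;quot;a lighthouse on a cliff at sunset, oil painting, highly detailed&amp;quot;&lt;br /&gt;
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]&lt;br /&gt;
image.save(&amp;quot;lighthouse.png&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Re-running the last three lines with a tweaked prompt or different settings is the refinement loop described in step 6 below.&lt;br /&gt;
&lt;br /&gt;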
==== &#039;&#039;&#039;6. Output and Refinement:&#039;&#039;&#039; ====&lt;br /&gt;
&lt;br /&gt;
* Once the image is generated, you can view and save it. If it&#039;s not quite what you wanted, you might adjust your prompt or use different settings and try again.&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;7. Consider Legal and Ethical Implications:&#039;&#039;&#039; ====&lt;br /&gt;
&lt;br /&gt;
* Be mindful of copyright and ethical considerations, especially when generating images for public use or commercial purposes.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Tools and Platforms:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
There are various platforms and interfaces available that make using Stable Diffusion easier, including web interfaces where you can simply enter your prompt and receive an image, or more hands-on approaches where you control every aspect via scripting.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Example Using a Platform or Tool:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
# &#039;&#039;&#039;Find a Platform&#039;&#039;&#039;: Websites and applications exist that offer user-friendly interfaces for Stable Diffusion.&lt;br /&gt;
# &#039;&#039;&#039;Enter Your Prompt&#039;&#039;&#039;: Simply type in what you want the image to depict.&lt;br /&gt;
# &#039;&#039;&#039;Generate and Download&#039;&#039;&#039;: Click to generate the image, then view and download the result.&lt;br /&gt;
&lt;br /&gt;
In summary, making a Stable Diffusion image involves setting up the right environment, obtaining and loading the model, crafting a descriptive text prompt, and then using that prompt to generate an image. The exact steps can vary based on your technical background and the tools you choose to use.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Breaking News: Revolutionary &amp;quot;Stable Video Diffusion&amp;quot; Model Ushers in New Era of Text-to-Video Generation ==&lt;br /&gt;
Date: November 21, 2023&lt;br /&gt;
&lt;br /&gt;
In a groundbreaking development, a new latent video diffusion model known as &amp;quot;Stable Video Diffusion&amp;quot; has been introduced, setting a new benchmark in high-resolution text-to-video and image-to-video generation. This innovative model marks a significant leap in the realm of video synthesis, leveraging the strengths of latent diffusion models previously used for 2D image creation.&lt;br /&gt;
&lt;br /&gt;
The Stable Video Diffusion model represents a pivotal advancement, as it integrates temporal layers into existing models, fine-tuned on select high-quality video datasets. This approach addresses the challenges faced by the industry, where a variety of training methods have resulted in a lack of consensus on a standardized strategy for video data curation.&lt;br /&gt;
&lt;br /&gt;
[[File:StableVideoDiffusion.gif]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The paper detailing this breakthrough highlights three crucial stages for the successful training of video Latent Diffusion Models (LDMs): text-to-image pretraining, video pretraining, and high-quality video finetuning. These stages collectively enhance the model&#039;s ability to generate more accurate and detailed videos from textual or image inputs.&lt;br /&gt;
&lt;br /&gt;
The introduction of Stable Video Diffusion promises a transformative impact on video content creation, offering unparalleled capabilities in generating high-quality videos from simple text or image inputs. This development is not just a step but a giant leap forward in the field of video synthesis and artificial intelligence.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The full details of this innovative model can be found in the recently published paper, which delves into the intricate mechanics and training methodologies of Stable Video Diffusion.&lt;br /&gt;
&lt;br /&gt;
Stay tuned for further updates on this revolutionary technology that is set to redefine the boundaries of video generation.&lt;br /&gt;
&lt;br /&gt;
== Welcome to &#039;&#039;&#039;Stable Diffusion Wiki&#039;&#039;&#039;! ==&lt;br /&gt;
[[File:ProtovisionXLHighFidelity3D beta0411Bakedvae RAW analog shot of a 1girl floating in a multi-colored liquid neural network organism, more details, hyper detailed, t.jpeg|right|frameless|503x503px|Photo generated via SD protovisionXL modele by user kashyyyk]]Hello and welcome to &#039;&#039;&#039;&amp;lt;u&amp;gt;Stable Diffusion Wiki&amp;lt;/u&amp;gt;&#039;&#039;&#039;! We are excited to share information, collaborate, and build a knowledge base on Stable Diffusion and image generation. Whether you&#039;re a seasoned creator in the world of machine learning and text-to-image generators, or newly discovering it all for the first time, we&#039;re here to serve!&amp;lt;div style=&amp;quot;float:left; width:15%;&amp;quot;&amp;gt;&lt;br /&gt;
== Join Us ==&lt;br /&gt;
[http://stablediffusionwiki.com/index.php?title=Special:CreateAccount&amp;amp;returnto=Special%3ACreateAccoun Join the Stable Diffusion community! Register now to explore cutting-edge information, engage with experts, and contribute to the ever-growing knowledge base on this innovative software.]&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;float:right; width:80%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==What is Stable Diffusion?==&lt;br /&gt;
[[File:Cake by emb.jpg|left|thumb|An image generated with Stable Diffusion. Image generated by user emb on Civitai.]]&lt;br /&gt;
[[Stable Diffusion]] is a pioneering text-to-image model developed by [[Stability AI]], allowing the conversion of textual descriptions into corresponding visual imagery. In other words, you tell it what you want, and it will create an image or a group of images that fit your description.  One of the biggest distinguishing features of Stable Diffusion is that it is completely open source, unlike many other models.  It can be hosted completely on your PC offline, which offers some confidentiality, customization, and agency.  It also has a thriving community that is constantly modifying and iterating on different models.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== What is This Page About? ==&lt;br /&gt;
[[File:Automatic1111 gui.png|thumb|Automatic1111 gui]]&lt;br /&gt;
As a fellow hobbyist, I found the overall subject to have a lot of terms and processes that were very foreign to me.  I found myself doing a lot of research to figure out what to do, going to various sources, collecting articles and images, and taking notes on tips and tricks.  Since Stable Diffusion art is still new to the world, the process and technology have not been perfected yet. As my notes and materials grew, I realized there must be others out there like myself.  It made more sense to store this information online so that, hopefully, it can help others who are experiencing the same thing.&lt;br /&gt;
&lt;br /&gt;
This page serves as a space for us to gather and organize information about Stable Diffusion. If you are asking yourself, what&#039;s a [[LoRA]]? What is [[ControlNet]]? We&#039;re here to help with that.  Whether it&#039;s a hobby, a professional field, a community project, or anything else, this is the place to explore, learn, and contribute your knowledge.  The goal for this site is to create a [http://stablediffusionwiki.com/index.php?title=Special:CreateAccount&amp;amp;returnto=Contributing community of users] that can share information related to image generation via Stable Diffusion.  This site will walk users through all aspects of the process of creating and editing images, and provide technical information related to it.  There are a lot of individual pieces to this entire process, and each component has its own complex manuals and processes.  As a user of the tool, it can be overwhelming at first to try to piece everything together. &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Getting Started ==&lt;br /&gt;
[[File:robotknight.jpeg|left|thumb|An image generated with Stable Diffusion]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;What is [[Stable Diffusion]]?&#039;&#039;&#039; ===&lt;br /&gt;
First, you will need to build an understanding of what Stable Diffusion is and what [[Text-to-Image AI]] is.  For beginners, you don&#039;t need to worry too much about the details of how it works.  However, that understanding will help as you learn more and work to improve your images.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;[[Required Software for Stable Diffusion|Required Software]]&#039;&#039;&#039; ===&lt;br /&gt;
Although Stable Diffusion is completely free, it will require some effort on the front end to assemble everything together. It helps if you understand a little bit of Python but it isn&#039;t an absolute requirement. This will include acquiring Python, Git, and some sort of GUI.  Many people prefer using Automatic1111. &lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;[[Text-to-image|Text-to-Image]]&#039;&#039;&#039; ===&lt;br /&gt;
Once you have your Stable Diffusion software set up, you will want to know how to use it and begin generating your images.  Experimenting with different prompts is a major part of the image generation process.  The most basic approach is to begin with txt2img; however, you will quickly see that this is just scratching the surface and will want to venture into other techniques such as img2img.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;[[Models]]&#039;&#039;&#039; ===&lt;br /&gt;
Although Stability AI released the base model, many more pruned and fine-tuned models have been released in recent months, along with other resources such as [[lora|LoRAs]] and embeddings.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [[Contributing|Contribute]] =&lt;br /&gt;
Stable Diffusion is a very complex topic, and it took many people to develop.  Likewise, this website cannot be built by one person alone.  The task of gathering and recording all the information relevant to such a rapidly growing subject matter requires many people to do it properly.  [[Contributing|Click here]] to learn how you can contribute to the growth of this website!&amp;lt;br /&amp;gt;&lt;br /&gt;
[[File:SDW- Stable Diffusion Wiki.png|center|Stable Diffusion Wiki]]&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=File:PromptEngineering.png&amp;diff=256</id>
		<title>File:PromptEngineering.png</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=File:PromptEngineering.png&amp;diff=256"/>
		<updated>2023-12-28T04:39:32Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;.&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=File:EnvironmnetSetup.png&amp;diff=255</id>
		<title>File:EnvironmnetSetup.png</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=File:EnvironmnetSetup.png&amp;diff=255"/>
		<updated>2023-12-28T04:34:39Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;.&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Main_Page&amp;diff=254</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Main_Page&amp;diff=254"/>
		<updated>2023-11-30T00:03:15Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Breaking News: Revolutionary &amp;quot;Stable Video Diffusion&amp;quot; Model Ushers in New Era of Text-to-Video Generation ==&lt;br /&gt;
Date: November 21, 2023&lt;br /&gt;
&lt;br /&gt;
In a groundbreaking development, a new latent video diffusion model known as &amp;quot;Stable Video Diffusion&amp;quot; has been introduced, setting a new benchmark in high-resolution text-to-video and image-to-video generation. This innovative model marks a significant leap in the realm of video synthesis, leveraging the strengths of latent diffusion models previously used for 2D image creation.&lt;br /&gt;
&lt;br /&gt;
The Stable Video Diffusion model represents a pivotal advancement, as it integrates temporal layers into existing models, fine-tuned on select high-quality video datasets. This approach addresses the challenges faced by the industry, where a variety of training methods have resulted in a lack of consensus on a standardized strategy for video data curation.&lt;br /&gt;
&lt;br /&gt;
[[File:StableVideoDiffusion.gif]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The paper detailing this breakthrough highlights three crucial stages for the successful training of video Latent Diffusion Models (LDMs): text-to-image pretraining, video pretraining, and high-quality video finetuning. These stages collectively enhance the model&#039;s ability to generate more accurate and detailed videos from textual or image inputs.&lt;br /&gt;
&lt;br /&gt;
The introduction of Stable Video Diffusion promises a transformative impact on video content creation, offering unparalleled capabilities in generating high-quality videos from simple text or image inputs. This development is not just a step but a giant leap forward in the field of video synthesis and artificial intelligence.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The full details of this innovative model can be found in the recently published paper, which delves into the intricate mechanics and training methodologies of Stable Video Diffusion.&lt;br /&gt;
&lt;br /&gt;
Stay tuned for further updates on this revolutionary technology that is set to redefine the boundaries of video generation.&lt;br /&gt;
&lt;br /&gt;
== Welcome to &#039;&#039;&#039;Stable Diffusion Wiki&#039;&#039;&#039;! ==&lt;br /&gt;
[[File:ProtovisionXLHighFidelity3D beta0411Bakedvae RAW analog shot of a 1girl floating in a multi-colored liquid neural network organism, more details, hyper detailed, t.jpeg|right|frameless|503x503px|Photo generated via SD protovisionXL modele by user kashyyyk]]Hello and welcome to &#039;&#039;&#039;&amp;lt;u&amp;gt;Stable Diffusion Wiki&amp;lt;/u&amp;gt;&#039;&#039;&#039;! We are excited to share information, collaborate, and build a knowledge base on Stable Diffusion and image generation. Whether you&#039;re a seasoned creator in the world of machine learning and text-to-image generators, or newly discovering it all for the first time, we&#039;re here to serve!&amp;lt;div style=&amp;quot;float:left; width:15%;&amp;quot;&amp;gt;&lt;br /&gt;
== Join Us ==&lt;br /&gt;
[http://stablediffusionwiki.com/index.php?title=Special:CreateAccount&amp;amp;returnto=Special%3ACreateAccoun Join the Stable Diffusion community! Register now to explore cutting-edge information, engage with experts, and contribute to the ever-growing knowledge base on this innovative software.]&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;float:right; width:80%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==What is Stable Diffusion?==&lt;br /&gt;
[[File:Cake by emb.jpg|left|thumb|An image generated with Stable Diffusion. Image generated by user emb on Civitai.]]&lt;br /&gt;
[[Stable Diffusion]] is a pioneering text-to-image model developed by [[Stability AI]], allowing the conversion of textual descriptions into corresponding visual imagery. In other words, you tell it what you want, and it will create an image or a group of images that fit your description.  One of the biggest distinguishing features of Stable Diffusion is that it is completely open source, unlike many other models.  It can be hosted completely on your PC offline, which offers some confidentiality, customization, and agency.  It also has a thriving community that is constantly modifying and iterating on different models.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== What is This Page About? ==&lt;br /&gt;
[[File:Automatic1111 gui.png|thumb|Automatic1111 gui]]&lt;br /&gt;
As a fellow hobbyist, I found the overall subject to have a lot of terms and processes that were very foreign to me.  I found myself doing a lot of research to figure out what to do, going to various sources, collecting articles and images, and taking notes on tips and tricks.  Since Stable Diffusion art is still new to the world, the process and technology have not been perfected yet. As my notes and materials grew, I realized there must be others out there like myself.  It made more sense to store this information online so that, hopefully, it can help others who are experiencing the same thing.&lt;br /&gt;
&lt;br /&gt;
This page serves as a space for us to gather and organize information about Stable Diffusion. If you are asking yourself, what&#039;s a [[LoRA]]? What is [[ControlNet]]? We&#039;re here to help with that.  Whether it&#039;s a hobby, a professional field, a community project, or anything else, this is the place to explore, learn, and contribute your knowledge.  The goal for this site is to create a [http://stablediffusionwiki.com/index.php?title=Special:CreateAccount&amp;amp;returnto=Contributing community of users] that can share information related to image generation via Stable Diffusion.  This site will walk users through all aspects of the process of creating and editing images, and provide technical information related to it.  There are a lot of individual pieces to this entire process, and each component has its own complex manuals and processes.  As a user of the tool, it can be overwhelming at first to try to piece everything together. &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Getting Started ==&lt;br /&gt;
[[File:robotknight.jpeg|left|thumb|An image generated with Stable Diffusion]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;What is [[Stable Diffusion]]?&#039;&#039;&#039; ===&lt;br /&gt;
First, you will need to build an understanding of what Stable Diffusion is and what [[Text-to-Image AI]] is.  For beginners, you don&#039;t need to worry too much about the details of how it works.  However, that understanding will help as you learn more and work to improve your images.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;[[Required Software for Stable Diffusion|Required Software]]&#039;&#039;&#039; ===&lt;br /&gt;
Although Stable Diffusion is completely free, it will require some effort on the front end to assemble everything together. It helps if you understand a little bit of Python but it isn&#039;t an absolute requirement. This will include acquiring Python, Git, and some sort of GUI.  Many people prefer using Automatic1111. &lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;[[Text-to-image|Text-to-Image]]&#039;&#039;&#039; ===&lt;br /&gt;
Once you have your Stable Diffusion software set up, you will want to know how to use it and begin generating your images.  Experimenting with different prompts is a major part of the image generation process.  The most basic approach is to begin with txt2img; however, you will quickly see that this is just scratching the surface and will want to venture into other techniques such as img2img.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;[[Models]]&#039;&#039;&#039; ===&lt;br /&gt;
Although Stability AI released the base model, many more pruned and fine-tuned models have been released in recent months, along with other resources such as [[lora|LoRAs]] and embeddings.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [[Contributing|Contribute]] =&lt;br /&gt;
Stable Diffusion is a very complex topic, and it took many people to develop.  Likewise, this website cannot be built by one person alone.  The task of gathering and recording all the information relevant to such a rapidly growing subject matter requires many people to do it properly.  [[Contributing|Click here]] to learn how you can contribute to the growth of this website!&amp;lt;br /&amp;gt;&lt;br /&gt;
[[File:SDW- Stable Diffusion Wiki.png|center|Stable Diffusion Wiki]]&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Main_Page&amp;diff=253</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Main_Page&amp;diff=253"/>
		<updated>2023-11-29T23:58:06Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: /* Breaking News: Revolutionary &amp;quot;Stable Video Diffusion&amp;quot; Model Ushers in New Era of Text-to-Video Generation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Breaking News: Revolutionary &amp;quot;Stable Video Diffusion&amp;quot; Model Ushers in New Era of Text-to-Video Generation ==&lt;br /&gt;
Date: November 21, 2023&lt;br /&gt;
&lt;br /&gt;
In a groundbreaking development, a new latent video diffusion model known as &amp;quot;Stable Video Diffusion&amp;quot; has been introduced, setting a new benchmark in high-resolution text-to-video and image-to-video generation. This innovative model marks a significant leap in the realm of video synthesis, leveraging the strengths of latent diffusion models previously used for 2D image creation.&lt;br /&gt;
&lt;br /&gt;
The Stable Video Diffusion model represents a pivotal advancement, as it integrates temporal layers into existing models, fine-tuned on select high-quality video datasets. This approach addresses the challenges faced by the industry, where a variety of training methods have resulted in a lack of consensus on a standardized strategy for video data curation.&lt;br /&gt;
&lt;br /&gt;
[[File:StableVideoDiffusion.gif]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The paper detailing this breakthrough highlights three crucial stages for the successful training of video Latent Diffusion Models (LDMs): text-to-image pretraining, video pretraining, and high-quality video finetuning. These stages collectively enhance the model&#039;s ability to generate more accurate and detailed videos from textual or image inputs.&lt;br /&gt;
&lt;br /&gt;
The introduction of Stable Video Diffusion promises a transformative impact on video content creation, offering unparalleled capabilities in generating high-quality videos from simple text or image inputs. This development is not just a step but a giant leap forward in the field of video synthesis and artificial intelligence.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The full details of this innovative model can be found in the recently published paper, which delves into the intricate mechanics and training methodologies of Stable Video Diffusion.&lt;br /&gt;
&lt;br /&gt;
Stay tuned for further updates on this revolutionary technology that is set to redefine the boundaries of video generation.&lt;br /&gt;
&lt;br /&gt;
== Welcome to &#039;&#039;&#039;Stable Diffusion Wiki&#039;&#039;&#039;! ==&lt;br /&gt;
[[File:ProtovisionXLHighFidelity3D beta0411Bakedvae RAW analog shot of a 1girl floating in a multi-colored liquid neural network organism, more details, hyper detailed, t.jpeg|right|frameless|503x503px|Photo generated via SD protovisionXL modele by user kashyyyk]]Hello and welcome to &#039;&#039;&#039;&amp;lt;u&amp;gt;Stable Diffusion Wiki&amp;lt;/u&amp;gt;&#039;&#039;&#039;! We are excited to share information, collaborate, and build a knowledge base on Stable Diffusion and image generation. Whether you&#039;re a seasoned creator in the world of machine learning and text-to-image generators, or newly discovering it all for the first time, we&#039;re here to serve!&amp;lt;div style=&amp;quot;float:left; width:15%;&amp;quot;&amp;gt;&lt;br /&gt;
== Join Us ==&lt;br /&gt;
[http://stablediffusionwiki.com/index.php?title=Special:CreateAccount&amp;amp;returnto=Special%3ACreateAccoun Join the Stable Diffusion community! Register now to explore cutting-edge information, engage with experts, and contribute to the ever-growing knowledge base on this innovative software.]&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;float:right; width:80%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==What is Stable Diffusion?==&lt;br /&gt;
[[File:Cake by emb.jpg|left|thumb|An image generated with Stable Diffusion. Image generated by user emb on Civitai.]]&lt;br /&gt;
[[Stable Diffusion]] is a pioneering text-to-image model developed by [[Stability AI]], allowing the conversion of textual descriptions into corresponding visual imagery. In other words, you tell it what you want, and it will create an image or a group of images that fit your description.  One of the biggest distinguishing features of Stable Diffusion is that it is completely open source, unlike many other models.  It can be hosted completely on your PC offline, which offers some confidentiality, customization, and agency.  It also has a thriving community that is constantly modifying and iterating on different models.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== What is This Page About? ==&lt;br /&gt;
[[File:Automatic1111 gui.png|thumb|Automatic1111 gui]]&lt;br /&gt;
As a fellow hobbyist, I found the overall subject to have a lot of terms and processes that were very foreign to me.  I found myself doing a lot of research to figure out what to do, going to various sources, collecting articles and images, and taking notes on tips and tricks.  Since Stable Diffusion art is still new to the world, the process and technology have not been perfected yet. As my notes and materials grew, I realized there must be others out there like myself.  It made more sense to store this information online so that, hopefully, it can help others who are experiencing the same thing.&lt;br /&gt;
&lt;br /&gt;
This page serves as a space for us to gather and organize information about Stable Diffusion. If you are asking yourself, what&#039;s a [[LoRA]]? What is [[ControlNet]]? We&#039;re here to help with that.  Whether it&#039;s a hobby, a professional field, a community project, or anything else, this is the place to explore, learn, and contribute your knowledge.  The goal for this site is to create a [http://stablediffusionwiki.com/index.php?title=Special:CreateAccount&amp;amp;returnto=Contributing community of users] that can share information related to image generation via Stable Diffusion.  This site will walk users through all aspects of the process of creating and editing images, and provide technical information related to it.  There are a lot of individual pieces to this entire process, and each component has its own complex manuals and processes.  As a user of the tool, it can be overwhelming at first to try to piece everything together. &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Getting Started ==&lt;br /&gt;
[[File:robotknight.jpeg|left|thumb|An image generated with Stable Diffusion]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;What is [[Stable Diffusion]]?&#039;&#039;&#039; ===&lt;br /&gt;
First, you will need to build an understanding of what Stable Diffusion is and what [[Text-to-Image AI]] is.  For beginners, you don&#039;t need to worry too much about the details of how it works.  However, that understanding will help as you learn more and work to improve your images.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;[[Required Software for Stable Diffusion|Required Software]]&#039;&#039;&#039; ===&lt;br /&gt;
Although Stable Diffusion is completely free, it will require some effort on the front end to assemble everything together. It helps if you understand a little bit of Python but it isn&#039;t an absolute requirement. This will include acquiring Python, Git, and some sort of GUI.  Many people prefer using Automatic1111. &lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;[[Text-to-image|Text-to-Image]]&#039;&#039;&#039; ===&lt;br /&gt;
Once you have your Stable Diffusion software set up, you will want to know how to use it and begin generating your images.  Experimenting with different prompts is a major part of the image generation process.  The most basic approach is to begin with txt2img; however, you will quickly see that this is just scratching the surface and will want to venture into other techniques such as img2img.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;[[Models]]&#039;&#039;&#039; ===&lt;br /&gt;
Although Stability AI released the base model, many more pruned and fine-tuned models have been released in recent months, along with other resources such as [[lora|LoRAs]] and embeddings.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [[Contributing|Contribute]] =&lt;br /&gt;
Stable Diffusion is a very complex topic, and it took many people to develop.  Likewise, this website cannot be built by one person alone.  The task of gathering and recording all the information relevant to such a rapidly growing subject matter requires many people to do it properly.  [[Contributing|Click here]] to learn how you can contribute to the growth of this website!&amp;lt;br /&amp;gt;&lt;br /&gt;
[[File:SDW- Stable Diffusion Wiki.png|center|Stable Diffusion Wiki]]&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Main_Page&amp;diff=252</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Main_Page&amp;diff=252"/>
		<updated>2023-11-29T23:57:54Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Breaking News: Revolutionary &amp;quot;Stable Video Diffusion&amp;quot; Model Ushers in New Era of Text-to-Video Generation ==&lt;br /&gt;
Date: November 21, 2023&lt;br /&gt;
&lt;br /&gt;
In a groundbreaking development, a new latent video diffusion model known as &amp;quot;Stable Video Diffusion&amp;quot; has been introduced, setting a new benchmark in high-resolution text-to-video and image-to-video generation. This innovative model marks a significant leap in the realm of video synthesis, leveraging the strengths of latent diffusion models previously used for 2D image creation.&lt;br /&gt;
&lt;br /&gt;
The Stable Video Diffusion model represents a pivotal advancement, as it integrates temporal layers into existing models, fine-tuned on select high-quality video datasets. This approach addresses the challenges faced by the industry, where a variety of training methods have resulted in a lack of consensus on a standardized strategy for video data curation.&lt;br /&gt;
[[File:StableVideoDiffusion.gif]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The paper detailing this breakthrough highlights three crucial stages for the successful training of video Latent Diffusion Models (LDMs): text-to-image pretraining, video pretraining, and high-quality video finetuning. These stages collectively enhance the model&#039;s ability to generate more accurate and detailed videos from textual or image inputs.&lt;br /&gt;
&lt;br /&gt;
The introduction of Stable Video Diffusion promises a transformative impact on video content creation, offering unparalleled capabilities in generating high-quality videos from simple text or image inputs. This development is not just a step but a giant leap forward in the field of video synthesis and artificial intelligence.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The full details of this innovative model can be found in the recently published paper, which delves into the intricate mechanics and training methodologies of Stable Video Diffusion.&lt;br /&gt;
&lt;br /&gt;
Stay tuned for further updates on this revolutionary technology that is set to redefine the boundaries of video generation.&lt;br /&gt;
&lt;br /&gt;
== Welcome to &#039;&#039;&#039;Stable Diffusion Wiki&#039;&#039;&#039;! ==&lt;br /&gt;
[[File:ProtovisionXLHighFidelity3D beta0411Bakedvae RAW analog shot of a 1girl floating in a multi-colored liquid neural network organism, more details, hyper detailed, t.jpeg|right|frameless|503x503px|Photo generated via SD protovisionXL modele by user kashyyyk]]Hello and welcome to &#039;&#039;&#039;&amp;lt;u&amp;gt;Stable Diffusion Wiki&amp;lt;/u&amp;gt;&#039;&#039;&#039;! We are excited to share information, collaborate, and build a knowledge base on Stable Diffusion and image generation. Whether you&#039;re a seasoned creator in the world of machine learning and text-to-image generators, or newly discovering it all for the first time, we&#039;re here to serve!&amp;lt;div style=&amp;quot;float:left; width:15%;&amp;quot;&amp;gt;&lt;br /&gt;
== Join Us ==&lt;br /&gt;
[http://stablediffusionwiki.com/index.php?title=Special:CreateAccount&amp;amp;returnto=Special%3ACreateAccoun Join the Stable Diffusion community! Register now to explore cutting-edge information, engage with experts, and contribute to the ever-growing knowledge base on this innovative software.]&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;float:right; width:80%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==What is Stable Diffusion?==&lt;br /&gt;
[[File:Cake by emb.jpg|left|thumb|An image generated with Stable Diffusion. Image generated by user emb on Civitai.]]&lt;br /&gt;
[[Stable Diffusion]] is a pioneering text-to-image model developed by [[Stability AI]], allowing the conversion of textual descriptions into corresponding visual imagery. In other words, you tell it what you want, and it will create an image or a group of images that fit your description.  One of the biggest distinguishing features of Stable Diffusion is that it is completely open source, unlike many other models.  It can be hosted completely on your PC offline, which offers some confidentiality, customization, and agency.  It also has a thriving community that is constantly modifying and iterating on different models.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== What is This Page About? ==&lt;br /&gt;
[[File:Automatic1111 gui.png|thumb|Automatic1111 gui]]&lt;br /&gt;
As a fellow hobbyist, I found the overall subject to have a lot of terms and processes that were very foreign to me.  I found myself doing a lot of research to figure out what to do, going to various sources, collecting articles and images, and taking notes on tips and tricks.  Since Stable Diffusion art is still new to the world, the process and technology have not been perfected yet. As my notes and materials grew, I realized there must be others out there like myself.  It made more sense to store this information online so that, hopefully, it can help others who are experiencing the same thing.&lt;br /&gt;
&lt;br /&gt;
This page serves as a space for us to gather and organize information about Stable Diffusion. If you are asking yourself, what&#039;s a [[LoRA]]? What is [[ControlNet]]? We&#039;re here to help with that.  Whether it&#039;s a hobby, a professional field, a community project, or anything else, this is the place to explore, learn, and contribute your knowledge.  The goal for this site is to create a [http://stablediffusionwiki.com/index.php?title=Special:CreateAccount&amp;amp;returnto=Contributing community of users] that can share information related to image generation via Stable Diffusion.  This site will walk users through all aspects of the process of creating and editing images, and provide technical information related to it.  There are a lot of individual pieces to this entire process, and each component has its own complex manuals and processes.  As a user of the tool, it can be overwhelming at first to try to piece everything together. &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Getting Started ==&lt;br /&gt;
[[File:robotknight.jpeg|left|thumb|An image generated with Stable Diffusion]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;What is [[Stable Diffusion]]?&#039;&#039;&#039; ===&lt;br /&gt;
First, you will need to build an understanding of what Stable Diffusion is and what [[Text-to-Image AI]] is.  For beginners, you don&#039;t need to worry too much about the details of how it works.  However, that understanding will help as you learn more and work to improve your images.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;[[Required Software for Stable Diffusion|Required Software]]&#039;&#039;&#039; ===&lt;br /&gt;
Although Stable Diffusion is completely free, it will require some effort on the front end to assemble everything together. It helps if you understand a little bit of Python but it isn&#039;t an absolute requirement. This will include acquiring Python, Git, and some sort of GUI.  Many people prefer using Automatic1111. &lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;[[Text-to-image|Text-to-Image]]&#039;&#039;&#039; ===&lt;br /&gt;
Once you have your Stable Diffusion software set up, you will want to know how to use it and begin generating your images.  Experimenting with different prompts is a major part of the image generation process.  The most basic approach is to begin with txt2img; however, you will quickly see that this is just scratching the surface and will want to venture into other techniques such as img2img.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;[[Models]]&#039;&#039;&#039; ===&lt;br /&gt;
Although Stability AI released the base model, many more pruned and fine-tuned models have been released in recent months, along with other resources such as [[lora|LoRAs]] and embeddings.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [[Contributing|Contribute]] =&lt;br /&gt;
Stable Diffusion is a very complex topic, and it took many people to develop.  Likewise, this website cannot be built by one person alone.  The task of gathering and recording all the information relevant to such a rapidly growing subject matter requires many people to do it properly.  [[Contributing|Click here]] to learn how you can contribute to the growth of this website!&amp;lt;br /&amp;gt;&lt;br /&gt;
[[File:SDW- Stable Diffusion Wiki.png|center|Stable Diffusion Wiki]]&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=File:StableVideoDiffusion.gif&amp;diff=251</id>
		<title>File:StableVideoDiffusion.gif</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=File:StableVideoDiffusion.gif&amp;diff=251"/>
		<updated>2023-11-29T23:56:45Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Courtesy of Stability AI. Image found on HuggingFace.&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Privacy_Policy&amp;diff=249</id>
		<title>Privacy Policy</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Privacy_Policy&amp;diff=249"/>
		<updated>2023-09-22T05:41:12Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: Created page with &amp;quot; =Last Updated: 09/21/2023= ==Introduction== Welcome to Stable Diffusion Wiki. This Privacy Policy explains how we collect, use, disclose, and safeguard your information when you visit our website [your website URL], and any other media form, media channel, or mobile website related (collectively, the &amp;quot;Site&amp;quot;). Please read this Privacy Policy carefully. By using the Site, you consent to the practices described in this policy. ==Information We Collect== ===Personal Informa...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
=Last Updated: 09/21/2023=&lt;br /&gt;
==Introduction==&lt;br /&gt;
Welcome to Stable Diffusion Wiki. This Privacy Policy explains how we collect, use, disclose, and safeguard your information when you visit our website [your website URL], and any other media form, media channel, or mobile website related to it (collectively, the &amp;quot;Site&amp;quot;). Please read this Privacy Policy carefully. By using the Site, you consent to the practices described in this policy.&lt;br /&gt;
==Information We Collect==&lt;br /&gt;
===Personal Information===&lt;br /&gt;
When you interact with our Site, we may collect personally identifiable information including, but not limited to:&lt;br /&gt;
*Name&lt;br /&gt;
*Email Address&lt;br /&gt;
*Phone Number&lt;br /&gt;
===Non-Personal Information===&lt;br /&gt;
We also collect non-personal information such as:&lt;br /&gt;
*Browser type and version&lt;br /&gt;
*Operating system&lt;br /&gt;
*IP Address&lt;br /&gt;
==How We Use Your Information==&lt;br /&gt;
We use the information we collect for various purposes, including to:&lt;br /&gt;
*Provide and maintain our Site&lt;br /&gt;
*Improve user experience&lt;br /&gt;
*Communicate with you, including newsletters or promotional materials&lt;br /&gt;
*Enforce our terms and conditions&lt;br /&gt;
*Comply with legal obligations&lt;br /&gt;
==Disclosure of Your Information==&lt;br /&gt;
We do not sell, trade, or otherwise transfer to outside parties your personally identifiable information unless we provide you with advance notice. This does not include trusted third parties who assist us in operating our website, so long as those parties agree to keep this information confidential.&lt;br /&gt;
==Security of Your Information==&lt;br /&gt;
We implement a variety of security measures to safeguard your personal information.&lt;br /&gt;
==Third-Party Links==&lt;br /&gt;
Our Site may contain links to other websites. We are not responsible for the content or privacy practices of such other sites.&lt;br /&gt;
==Changes to This Privacy Policy==&lt;br /&gt;
We reserve the right to change this Privacy Policy at any time. Any changes will be posted on this page with an updated revision date.&lt;br /&gt;
==Contact Us==&lt;br /&gt;
If you have any questions about this Privacy Policy, please contact us at robert@stablediffusionwiki.com.&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Main_Page&amp;diff=248</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Main_Page&amp;diff=248"/>
		<updated>2023-09-22T04:54:11Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Welcome to &#039;&#039;&#039;Stable Diffusion Wiki&#039;&#039;&#039;! ==&lt;br /&gt;
[[File:ProtovisionXLHighFidelity3D beta0411Bakedvae RAW analog shot of a 1girl floating in a multi-colored liquid neural network organism, more details, hyper detailed, t.jpeg|right|frameless|503x503px|Photo generated via SD protovisionXL modele by user kashyyyk]]Hello and welcome to &#039;&#039;&#039;&amp;lt;u&amp;gt;Stable Diffusion Wiki&amp;lt;/u&amp;gt;&#039;&#039;&#039;! We are excited to share information, collaborate, and build a knowledge base on Stable Diffusion and image generation. Whether you&#039;re a seasoned creator in the world of machine learning and text-to-image generators, or newly discovering it all for the first time, we&#039;re here to serve!&amp;lt;div style=&amp;quot;float:left; width:15%;&amp;quot;&amp;gt;&lt;br /&gt;
== Join Us ==&lt;br /&gt;
[http://stablediffusionwiki.com/index.php?title=Special:CreateAccount&amp;amp;returnto=Special%3ACreateAccoun Join the Stable Diffusion community! Register now to explore cutting-edge information, engage with experts, and contribute to the ever-growing knowledge base on this innovative software.]&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;float:right; width:80%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==What is Stable Diffusion?==&lt;br /&gt;
[[File:Cake by emb.jpg|left|thumb|An image generated with Stable Diffusion. Image generated by user emb on Civitai.]]&lt;br /&gt;
[[Stable Diffusion]] is a pioneering text-to-image model developed by [[Stability AI]], allowing the conversion of textual descriptions into corresponding visual imagery. In other words, you tell it what you want, and it will create an image or a group of images that fit your description.  One of the biggest distinguishing features of Stable Diffusion is that it is completely open source, unlike many other models.  It can be hosted completely on your PC offline, which offers some confidentiality, customization, and agency.  It also has a thriving community that is constantly modifying and iterating on different models.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== What is This Page About? ==&lt;br /&gt;
[[File:Automatic1111 gui.png|thumb|Automatic1111 gui]]&lt;br /&gt;
As a fellow hobbyist, I found the overall subject to have a lot of terms and processes that were very foreign to me.  I found myself doing a lot of research to figure out what to do, going to various sources, collecting articles and images, and taking notes on tips and tricks.  Since Stable Diffusion art is still new to the world, the process and technology have not been perfected yet. As my notes and materials grew, I realized there must be others out there like myself.  It made more sense to store this information online so that, hopefully, it can help others who are experiencing the same thing.&lt;br /&gt;
&lt;br /&gt;
This page serves as a space for us to gather and organize information about Stable Diffusion. If you are asking yourself, what&#039;s a [[LoRA]]? What is [[ControlNet]]? We&#039;re here to help with that.  Whether it&#039;s a hobby, a professional field, a community project, or anything else, this is the place to explore, learn, and contribute your knowledge.  The goal for this site is to create a [http://stablediffusionwiki.com/index.php?title=Special:CreateAccount&amp;amp;returnto=Contributing community of users] that can share information related to image generation via Stable Diffusion.  This site will walk users through all aspects of the process of creating and editing images, and provide technical information related to it.  There are a lot of individual pieces to this entire process, and each component has its own complex manuals and processes.  As a user of the tool, it can be overwhelming at first to try to piece everything together. &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Getting Started ==&lt;br /&gt;
[[File:robotknight.jpeg|left|thumb|An image generated with Stable Diffusion]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;What is [[Stable Diffusion]]?&#039;&#039;&#039; ===&lt;br /&gt;
First, you will need to build an understanding of what Stable Diffusion is and what [[Text-to-Image AI]] is.  For beginners, you don&#039;t need to worry too much about the details of how it works.  However, that understanding will help as you learn more and work to improve your images.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;[[Required Software for Stable Diffusion|Required Software]]&#039;&#039;&#039; ===&lt;br /&gt;
Although Stable Diffusion is completely free, it will require some effort on the front end to assemble everything together. It helps if you understand a little bit of Python but it isn&#039;t an absolute requirement. This will include acquiring Python, Git, and some sort of GUI.  Many people prefer using Automatic1111. &lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;[[Text-to-image|Text-to-Image]]&#039;&#039;&#039; ===&lt;br /&gt;
Once you have your Stable Diffusion software set up, you will want to know how to use it and begin generating your images.  Experimenting with different prompts is a major part of the image generation process.  The most basic approach is to begin with txt2img; however, you will quickly see that this is just scratching the surface and will want to venture into other techniques such as img2img.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;[[Models]]&#039;&#039;&#039; ===&lt;br /&gt;
Although Stability AI released the base model, many more pruned and fine-tuned models have been released in recent months, along with other resources such as [[lora|LoRAs]] and embeddings.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [[Contributing|Contribute]] =&lt;br /&gt;
Stable Diffusion is a very complex topic, and it took many people to develop.  Likewise, this website cannot be built by one person alone.  The task of gathering and recording all the information relevant to such a rapidly growing subject matter requires many people to do it properly.  [[Contributing|Click here]] to learn how you can contribute to the growth of this website!&amp;lt;br /&amp;gt;&lt;br /&gt;
[[File:SDW- Stable Diffusion Wiki.png|center|Stable Diffusion Wiki]]&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=File:Cake_by_emb.jpg&amp;diff=247</id>
		<title>File:Cake by emb.jpg</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=File:Cake_by_emb.jpg&amp;diff=247"/>
		<updated>2023-09-22T04:53:02Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Stable Diffusion image generated by user emb on Civitai.&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=File:SDW-_Stable_Diffusion_Wiki.png&amp;diff=246</id>
		<title>File:SDW- Stable Diffusion Wiki.png</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=File:SDW-_Stable_Diffusion_Wiki.png&amp;diff=246"/>
		<updated>2023-09-22T04:49:22Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This image was created using the lineart model for the ControlNet extension.&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Prompts&amp;diff=245</id>
		<title>Prompts</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Prompts&amp;diff=245"/>
		<updated>2023-09-18T02:28:55Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: Created page with &amp;quot;Creating good Stable Diffusion prompts is about half of the game in generating a good image.  Here, we&amp;#039;re going to take a look at all of the things it takes to generate a good image with proper prompt engineering.&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Creating good Stable Diffusion prompts is about half of the game in generating a good image.  Here, we&#039;re going to take a look at all of the things it takes to generate a good image with proper prompt engineering.&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Main_Page&amp;diff=244</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Main_Page&amp;diff=244"/>
		<updated>2023-09-18T02:22:50Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Welcome to &#039;&#039;&#039;Stable Diffusion Wiki&#039;&#039;&#039;! ==&lt;br /&gt;
[[File:ProtovisionXLHighFidelity3D beta0411Bakedvae RAW analog shot of a 1girl floating in a multi-colored liquid neural network organism, more details, hyper detailed, t.jpeg|right|frameless|503x503px|Photo generated via SD protovisionXL modele by user kashyyyk]]Hello and welcome to &#039;&#039;&#039;&amp;lt;u&amp;gt;Stable Diffusion Wiki&amp;lt;/u&amp;gt;&#039;&#039;&#039;! We are excited to share information, collaborate, and build a knowledge base on Stable Diffusion and image generation. Whether you&#039;re a seasoned creator in the world of machine learning and text-to-image generators, or newly discovering it all for the first time, we&#039;re here to serve!&amp;lt;div style=&amp;quot;float:left; width:15%;&amp;quot;&amp;gt;&lt;br /&gt;
== Join Us ==&lt;br /&gt;
[http://stablediffusionwiki.com/index.php?title=Special:CreateAccount&amp;amp;returnto=Special%3ACreateAccoun Join the Stable Diffusion community! Register now to explore cutting-edge information, engage with experts, and contribute to the ever-growing knowledge base on this innovative software.]&lt;br /&gt;
&lt;br /&gt;
__TOC__&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;float:right; width:80%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==What is Stable Diffusion?==&lt;br /&gt;
[[File:Clownfish.png|thumb|An image generated with Stable Diffusion via Stability AI website.|left]]&lt;br /&gt;
[[Stable Diffusion]] is a pioneering text-to-image model developed by [[Stability AI]] that converts textual descriptions into corresponding visual imagery. In other words, you tell it what you want, and it will create an image or a group of images that fit your description. One of the biggest distinguishing features of Stable Diffusion is that it is completely open source, unlike many other models, and it can be hosted entirely on your PC offline. This offers confidentiality, customization, and agency. It also has a thriving community that is constantly modifying and iterating on different models.&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== What is This Page About? ==&lt;br /&gt;
[[File:Automatic1111 gui.png|thumb|Automatic1111 gui]]&lt;br /&gt;
As a fellow hobbyist, I found the subject full of terms and processes that were foreign to me. I did a lot of research to figure out what to do, going to various sources, collecting articles and images, and taking notes on tips and tricks. Since Stable Diffusion art is still new to the world, the process and technology have not been perfected yet. As my notes and materials grew, I realized there must be others out there like me, so it made more sense to store this information online where it can hopefully help others who are experiencing the same thing.  &lt;br /&gt;
&lt;br /&gt;
This page serves as a space for us to gather and organize information about Stable Diffusion. If you are asking yourself, what&#039;s a [[LoRA]]? What is [[ControlNet]]? We&#039;re here to help with that. Whether it&#039;s a hobby, a professional field, a community project, or anything else, this is the place to explore, learn, and contribute your knowledge. The goal for this site is to create a [http://stablediffusionwiki.com/index.php?title=Special:CreateAccount&amp;amp;returnto=Contributing community of users] that can share information related to image generation via Stable Diffusion. The wiki will walk users through all aspects of creating and editing images and provide the technical information behind the process. There are a lot of individual pieces to this process, and each component has its own complex manuals and procedures; as a user of the tool, it can feel overwhelming at first to piece everything together. &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Getting Started ==&lt;br /&gt;
[[File:robotknight.jpeg|left|thumb|An image generated with Stable Diffusion]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;What is [[Stable Diffusion]]?&#039;&#039;&#039; ===&lt;br /&gt;
First, you will need to build an understanding of what Stable Diffusion is and what [[Text-to-Image AI]] means. As a beginner, you don&#039;t need to worry too much about the details of how it works; however, that understanding will help you later when you want to improve your images.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;[[Required Software for Stable Diffusion|Required Software]]&#039;&#039;&#039; ===&lt;br /&gt;
Although Stable Diffusion is completely free, it requires some effort up front to assemble everything. It helps if you understand a little bit of Python, but it isn&#039;t an absolute requirement. Setup includes installing Python, Git, and some sort of GUI; many people prefer Automatic1111. &lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;[[Text-to-image|Text-to-Image]]&#039;&#039;&#039; ===&lt;br /&gt;
If you have your Stable Diffusion software set up, you will want to know how to use it and begin generating images. Experimenting with different prompts is a major part of the image generation process. The most basic approach is to begin with txt2img; you will quickly see that this only scratches the surface and want to venture into other techniques such as img2img.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;[[Models]]&#039;&#039;&#039; ===&lt;br /&gt;
Although Stability AI released the base model, many fine-tuned and pruned models have been released in recent months, along with other model types such as [[lora|LoRAs]] and embeddings.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= [[Contributing|Contribute]] =&lt;br /&gt;
Stable Diffusion is a very complex topic, and it took many people to develop. Likewise, this website cannot be built by one person alone. The task of gathering and recording all relevant information on such a rapidly growing subject matter requires many people to do it properly.  [[Contributing|Click here]] to learn how you can contribute to the growth of this website!&amp;lt;br /&amp;gt;&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=After_Detailer&amp;diff=243</id>
		<title>After Detailer</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=After_Detailer&amp;diff=243"/>
		<updated>2023-09-17T05:36:24Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:Adetailer example.png|thumb|556x556px|Courtesy of &amp;quot;AI is in wonderland&amp;quot; Youtube Channel]]&lt;br /&gt;
&lt;br /&gt;
== What is adetailer? ==&lt;br /&gt;
&#039;&#039;&#039;After Detailer&#039;&#039;&#039; (also known as &#039;&#039;&#039;adetailer&#039;&#039;&#039;) is an extension for &#039;&#039;&#039;enhancing image details&#039;&#039;&#039;, most notably particular parts of the body. Since we spend so much of our lives looking at faces, the face is an area that needs particular attention when generating an image, because the viewer can pick up on the smallest flaws. No additional downloads are required after the initial installation.&lt;br /&gt;
&lt;br /&gt;
The extension will automatically detect the body part based on the model that is selected, and then improve that area.&lt;br /&gt;
&lt;br /&gt;
== How to get started ==&lt;br /&gt;
If you&#039;d like to get started with adetailer, you&#039;ll have to install the [[Extensions|extension]] first.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Models: ==&lt;br /&gt;
The extension features specialized models in &amp;lt;u&amp;gt;three key categories&amp;lt;/u&amp;gt;: &lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Face&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;Hand&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;Person&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;face_yolov8s.pt&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;face_yolov8n.pt&amp;lt;/code&amp;gt; are both models from the &amp;lt;code&amp;gt;adetailer&amp;lt;/code&amp;gt; repository on Hugging Face. Both models are used for 2D realistic face detection. The main difference between the two is that &amp;lt;code&amp;gt;face_yolov8s.pt&amp;lt;/code&amp;gt; is more accurate in detecting faces than &amp;lt;code&amp;gt;face_yolov8n.pt&amp;lt;/code&amp;gt;. Regarding their performance, &amp;lt;code&amp;gt;face_yolov8s.pt&amp;lt;/code&amp;gt; has a higher mean average precision (mAP) of 0.713 at an intersection over union (IoU) threshold of 0.50 and 0.404 at an IoU threshold of 0.50-0.95, while &amp;lt;code&amp;gt;face_yolov8n.pt&amp;lt;/code&amp;gt; has a mAP of 0.660 at an IoU threshold of 0.50 and 0.366 at an IoU threshold of 0.50-0.95.&lt;br /&gt;
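&lt;br /&gt;
As a rough illustration (and not part of the extension itself), the sketch below runs one of these face models directly with the &amp;lt;code&amp;gt;ultralytics&amp;lt;/code&amp;gt; package. The Hugging Face repository id and file names are assumptions; verify them on the adetailer model page before use.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustrative sketch only; assumes the ultralytics and huggingface_hub packages.&lt;br /&gt;
from huggingface_hub import hf_hub_download&lt;br /&gt;
from ultralytics import YOLO&lt;br /&gt;
&lt;br /&gt;
# Assumed repository id; check the adetailer model page for the real one.&lt;br /&gt;
weights = hf_hub_download(repo_id=&#039;Bingsu/adetailer&#039;, filename=&#039;face_yolov8n.pt&#039;)&lt;br /&gt;
&lt;br /&gt;
model = YOLO(weights)&lt;br /&gt;
# conf mirrors the detection model confidence threshold setting described below&lt;br /&gt;
results = model(&#039;generated_image.png&#039;, conf=0.3)&lt;br /&gt;
for box in results[0].boxes:&lt;br /&gt;
    print(box.xyxy, float(box.conf))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;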
&lt;br /&gt;
[[File:Adetailer example2.png|center|Credit: &amp;quot;AI is in wonderland&amp;quot;]]&lt;br /&gt;
&lt;br /&gt;
== Grasping the Nuances of Detection and Mask Parameters ==&lt;br /&gt;
Don&#039;t sweat it if you&#039;re new to aDetailer! Though the defaults are pretty good, understanding what each setting does can be a game-changer:&lt;br /&gt;
&lt;br /&gt;
=== Detection Model Confidence Threshold ===&lt;br /&gt;
This is your go-to setting for deciding the minimum confidence level the model needs to flag something. Looking to capture more faces? Go ahead and lower this threshold (say, to around 0.3). Tweak this to get more or fewer detections as you see fit.&lt;br /&gt;
&lt;br /&gt;
=== Mask Min/Max Area Ratio ===&lt;br /&gt;
Ever annoyed by tiny, irrelevant objects getting picked up? Adjusting the minimum area ratio can help you weed those out. This setting basically tells the model what size range is cool for masks.&lt;br /&gt;
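&lt;br /&gt;
To make the interaction between the confidence threshold and the area ratio concrete, here is a small, purely illustrative filter; the names and numbers are examples, not adetailer internals.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustrative only: mimic the confidence threshold and the mask&lt;br /&gt;
# min/max area ratio when deciding whether to keep a detection.&lt;br /&gt;
def keep_detection(conf, box_area, image_area,&lt;br /&gt;
                   conf_threshold=0.3, min_ratio=0.0, max_ratio=1.0):&lt;br /&gt;
    area_ratio = box_area / image_area&lt;br /&gt;
    passes_conf = conf &amp;gt;= conf_threshold&lt;br /&gt;
    passes_area = min_ratio &amp;lt;= area_ratio &amp;lt;= max_ratio&lt;br /&gt;
    return passes_conf and passes_area&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;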
&lt;br /&gt;
== Diving into Inpainting Settings ==&lt;br /&gt;
When it comes to inpainting, &amp;quot;Inpaint denoising strength&amp;quot; is your MVP. It controls how much denoising happens during the inpainting process. Tweak it until you like what you see.&lt;br /&gt;
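&lt;br /&gt;
If you drive the WebUI through its API rather than the browser, the sketch below shows where that setting lives. It assumes the WebUI was started with the --api flag, and the exact field names can vary between versions.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Sketch of an inpainting request to a locally running WebUI (started with --api).&lt;br /&gt;
import base64, requests&lt;br /&gt;
&lt;br /&gt;
def b64(path):&lt;br /&gt;
    with open(path, &#039;rb&#039;) as f:&lt;br /&gt;
        return base64.b64encode(f.read()).decode()&lt;br /&gt;
&lt;br /&gt;
payload = {&lt;br /&gt;
    &#039;prompt&#039;: &#039;detailed face, sharp focus&#039;,&lt;br /&gt;
    &#039;init_images&#039;: [b64(&#039;render.png&#039;)],&lt;br /&gt;
    &#039;mask&#039;: b64(&#039;face_mask.png&#039;),&lt;br /&gt;
    &#039;denoising_strength&#039;: 0.4,  # the inpaint denoising strength discussed above&lt;br /&gt;
    &#039;inpaint_full_res&#039;: True,   # roughly the inpaint-only-masked behavior&lt;br /&gt;
}&lt;br /&gt;
r = requests.post(&#039;http://127.0.0.1:7860/sdapi/v1/img2img&#039;, json=payload)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;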
&lt;br /&gt;
In most scenarios, you&#039;ll probably want to stick with &amp;quot;Inpaint only masked&amp;quot; if you&#039;re inpainting faces. It&#039;s generally the way to go.&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=File:Adetailer_example2.png&amp;diff=242</id>
		<title>File:Adetailer example2.png</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=File:Adetailer_example2.png&amp;diff=242"/>
		<updated>2023-09-17T05:35:43Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Courtesy of AI is in wonderland Youtube Channel&lt;br /&gt;
https://www.youtube.com/watch?v=sF3POwPUWCE&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=After_Detailer&amp;diff=241</id>
		<title>After Detailer</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=After_Detailer&amp;diff=241"/>
		<updated>2023-09-17T05:31:15Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:Adetailer example.png|thumb|556x556px|Courtesy of &amp;quot;AI is in wonderland&amp;quot; Youtube Channel]]&lt;br /&gt;
&lt;br /&gt;
== What is adetailer? ==&lt;br /&gt;
&#039;&#039;&#039;After Detailer&#039;&#039;&#039; (also known as &#039;&#039;&#039;adetailer&#039;&#039;&#039;) is an extension for &#039;&#039;&#039;enhancing image details&#039;&#039;&#039;, most notably particular parts of the body. Since we spend so much of our lives looking at faces, the face is an area that needs particular attention when generating an image, because the viewer can pick up on the smallest flaws. No additional downloads are required after the initial installation. The extension features specialized models in three key categories: &#039;&#039;&#039;Face&#039;&#039;&#039;, &#039;&#039;&#039;Hand&#039;&#039;&#039;, and &#039;&#039;&#039;Person&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The extension will automatically detect the body part based on the model that is selected, and then improve that area.&lt;br /&gt;
&lt;br /&gt;
== How to get started ==&lt;br /&gt;
If you&#039;d like to get started with adetailer, you&#039;ll have to install the extension first. If you&#039;d like more information on how to install extensions, see the [[Extensions]] page.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Grasping the Nuances of Detection and Mask Parameters ==&lt;br /&gt;
Don&#039;t sweat it if you&#039;re new to aDetailer! Though the defaults are pretty good, understanding what each setting does can be a game-changer:&lt;br /&gt;
&lt;br /&gt;
=== Detection Model Confidence Threshold ===&lt;br /&gt;
This is your go-to setting for deciding the minimum confidence level the model needs to flag something. Looking to capture more faces? Go ahead and lower this threshold (say, to around 0.3). Tweak this to get more or fewer detections as you see fit.&lt;br /&gt;
&lt;br /&gt;
=== Mask Min/Max Area Ratio ===&lt;br /&gt;
Ever annoyed by tiny, irrelevant objects getting picked up? Adjusting the minimum area ratio can help you weed those out. This setting basically tells the model what size range is cool for masks.&lt;br /&gt;
&lt;br /&gt;
== Diving into Inpainting Settings ==&lt;br /&gt;
When it comes to inpainting, &amp;quot;Inpaint denoising strength&amp;quot; is your MVP. It controls how much denoising happens during the inpainting process. Tweak it until you like what you see.&lt;br /&gt;
&lt;br /&gt;
In most scenarios, you&#039;ll probably want to stick with &amp;quot;Inpaint only masked&amp;quot; if you&#039;re inpainting faces. It&#039;s generally the way to go.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Models ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;face_yolov8s.pt&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;face_yolov8n.pt&amp;lt;/code&amp;gt; are both models from the &amp;lt;code&amp;gt;adetailer&amp;lt;/code&amp;gt; repository on Hugging Face. Both models are used for 2D realistic face detection. The main difference between the two is that &amp;lt;code&amp;gt;face_yolov8s.pt&amp;lt;/code&amp;gt; is more accurate in detecting faces than &amp;lt;code&amp;gt;face_yolov8n.pt&amp;lt;/code&amp;gt;. Regarding their performance, &amp;lt;code&amp;gt;face_yolov8s.pt&amp;lt;/code&amp;gt; has a higher mean average precision (mAP) of 0.713 at an intersection over union (IoU) threshold of 0.50 and 0.404 at an IoU threshold of 0.50-0.95, while &amp;lt;code&amp;gt;face_yolov8n.pt&amp;lt;/code&amp;gt; has a mAP of 0.660 at an IoU threshold of 0.50 and 0.366 at an IoU threshold of 0.50-0.95.&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=File:Adetailer_example.png&amp;diff=240</id>
		<title>File:Adetailer example.png</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=File:Adetailer_example.png&amp;diff=240"/>
		<updated>2023-09-17T05:30:32Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Courtesy of AI is in wonderland Youtube Channel&lt;br /&gt;
https://www.youtube.com/watch?v=sF3POwPUWCE&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=After_Detailer&amp;diff=239</id>
		<title>After Detailer</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=After_Detailer&amp;diff=239"/>
		<updated>2023-09-17T05:27:50Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: /* How to get started */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is adetailer? ==&lt;br /&gt;
&#039;&#039;&#039;After Detailer&#039;&#039;&#039; (also known as &#039;&#039;&#039;adetailer&#039;&#039;&#039;) is an extension for &#039;&#039;&#039;enhancing image details&#039;&#039;&#039;, most notably particular parts of the body. Since we spend so much of our lives looking at faces, the face is an area that needs particular attention when generating an image, because the viewer can pick up on the smallest flaws. No additional downloads are required after the initial installation. The extension features specialized models in three key categories: &#039;&#039;&#039;Face&#039;&#039;&#039;, &#039;&#039;&#039;Hand&#039;&#039;&#039;, and &#039;&#039;&#039;Person&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The extension will automatically detect the body part based on the model that is selected, and then improve that area.&lt;br /&gt;
&lt;br /&gt;
== How to get started ==&lt;br /&gt;
If you&#039;d like to get started with adetailer, you&#039;ll have to install the extension first. If you&#039;d like more information on how to install extensions, see the [[Extensions]] page.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Grasping the Nuances of Detection and Mask Parameters ==&lt;br /&gt;
Don&#039;t sweat it if you&#039;re new to aDetailer! Though the defaults are pretty good, understanding what each setting does can be a game-changer:&lt;br /&gt;
&lt;br /&gt;
=== Detection Model Confidence Threshold ===&lt;br /&gt;
This is your go-to setting for deciding the minimum confidence level the model needs to flag something. Looking to capture more faces? Go ahead and lower this threshold (say, to around 0.3). Tweak this to get more or fewer detections as you see fit.&lt;br /&gt;
&lt;br /&gt;
=== Mask Min/Max Area Ratio ===&lt;br /&gt;
Ever annoyed by tiny, irrelevant objects getting picked up? Adjusting the minimum area ratio can help you weed those out. This setting basically tells the model what size range is cool for masks.&lt;br /&gt;
&lt;br /&gt;
== Diving into Inpainting Settings ==&lt;br /&gt;
When it comes to inpainting, &amp;quot;Inpaint denoising strength&amp;quot; is your MVP. It controls how much denoising happens during the inpainting process. Tweak it until you like what you see.&lt;br /&gt;
&lt;br /&gt;
In most scenarios, you&#039;ll probably want to stick with &amp;quot;Inpaint only masked&amp;quot; if you&#039;re inpainting faces. It&#039;s generally the way to go.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Models ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;face_yolov8s.pt&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;face_yolov8n.pt&amp;lt;/code&amp;gt; are both models from the &amp;lt;code&amp;gt;adetailer&amp;lt;/code&amp;gt; repository on Hugging Face. Both models are used for 2D realistic face detection. The main difference between the two is that &amp;lt;code&amp;gt;face_yolov8s.pt&amp;lt;/code&amp;gt; is more accurate in detecting faces than &amp;lt;code&amp;gt;face_yolov8n.pt&amp;lt;/code&amp;gt;. Regarding their performance, &amp;lt;code&amp;gt;face_yolov8s.pt&amp;lt;/code&amp;gt; has a higher mean average precision (mAP) of 0.713 at an intersection over union (IoU) threshold of 0.50 and 0.404 at an IoU threshold of 0.50-0.95, while &amp;lt;code&amp;gt;face_yolov8n.pt&amp;lt;/code&amp;gt; has a mAP of 0.660 at an IoU threshold of 0.50 and 0.366 at an IoU threshold of 0.50-0.95.&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Lora&amp;diff=238</id>
		<title>Lora</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Lora&amp;diff=238"/>
		<updated>2023-09-17T05:27:10Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:ChickenSandwhich by 3303.jpg|thumb|Image generated using stable diffusion. By User 3303]]&lt;br /&gt;
&lt;br /&gt;
== LoRA in the Context of Stable Diffusion: Adapting Large Language Models Efficiently ==&lt;br /&gt;
&lt;br /&gt;
LoRA, or &#039;&#039;&#039;Low-Rank Adaptation&#039;&#039;&#039;, is a technique for adapting large-scale pretrained models to specific tasks or domains without retraining all of their weights. This technology has become important for &#039;&#039;&#039;Stable Diffusion&#039;&#039;&#039;, where it is commonly used to teach a base text-to-image model new styles, subjects, or concepts. &lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&#039;&#039;&#039;What is Stable Diffusion?&#039;&#039;&#039; &lt;br /&gt;
Stable Diffusion is an &#039;&#039;&#039;AI&#039;&#039;&#039; model designed for generating images from text prompts, and it is the engine behind a large amount of &#039;&#039;&#039;Stable Diffusion art&#039;&#039;&#039;. Its development is often discussed in communities such as the Stable Diffusion subreddit.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;How to Use Stable Diffusion&#039;&#039;&#039;&lt;br /&gt;
To use Stable Diffusion, users can download the model and follow the install instructions. A &#039;&#039;&#039;[[Automatic1111|Stable Diffusion WebUI]]&#039;&#039;&#039; is also available for a more user-friendly experience. &lt;br /&gt;
&lt;br /&gt;
=== LoRA and Stable Diffusion ===&lt;br /&gt;
[[File:Lora graphic.png|left|thumb|Credit: arXiv:2106.09685v2 [cs.CL] 16 Oct 2021]]&lt;br /&gt;
&lt;br /&gt;
==== The Concept of LoRA ====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;LoRA&#039;&#039;&#039; is designed to adapt large language models like GPT-3 by freezing the pretrained model weights and injecting trainable rank decomposition matrices into each layer of the Transformer architecture. This significantly reduces the number of trainable parameters, making it feasible for applications like &#039;&#039;&#039;Stable Diffusion&#039;&#039;&#039;.&lt;br /&gt;
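&lt;br /&gt;
A minimal sketch of this idea in PyTorch is shown below. It is only an illustration of freezing a pretrained layer and adding a trainable low-rank update, not the official loralib implementation.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustration only: frozen pretrained weight plus a trainable low-rank update.&lt;br /&gt;
import torch&lt;br /&gt;
import torch.nn as nn&lt;br /&gt;
&lt;br /&gt;
class LoRALinear(nn.Module):&lt;br /&gt;
    def __init__(self, linear, r=4, alpha=4):&lt;br /&gt;
        super().__init__()&lt;br /&gt;
        self.linear = linear&lt;br /&gt;
        for p in self.linear.parameters():&lt;br /&gt;
            p.requires_grad_(False)  # freeze the pretrained weights&lt;br /&gt;
        self.A = nn.Parameter(torch.randn(r, linear.in_features) * 0.01)&lt;br /&gt;
        self.B = nn.Parameter(torch.zeros(linear.out_features, r))&lt;br /&gt;
        self.scale = alpha / r&lt;br /&gt;
&lt;br /&gt;
    def forward(self, x):&lt;br /&gt;
        # original output plus the low-rank correction, scaled by alpha / r&lt;br /&gt;
        return self.linear(x) + (x @ self.A.t() @ self.B.t()) * self.scale&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;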
&lt;br /&gt;
==== Benefits ====&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;GPU Memory Efficiency&#039;&#039;&#039;: LoRA can reduce the GPU memory requirement by 3 times compared to full fine-tuning.&lt;br /&gt;
* &#039;&#039;&#039;Parameter Efficiency&#039;&#039;&#039;: The number of trainable parameters can be reduced by up to 10,000 times.&lt;br /&gt;
* &#039;&#039;&#039;Quality&#039;&#039;&#039;: LoRA performs on-par or better than fine-tuning in terms of model quality.&lt;br /&gt;
* &#039;&#039;&#039;No added inference latency&#039;&#039;&#039;: the trainable low-rank matrices can be merged into the frozen weights, so the adapted model runs as fast as the original.&lt;br /&gt;
&lt;br /&gt;
==== LoRA Stable Diffusion ====&lt;br /&gt;
In the context of &#039;&#039;&#039;Stable Diffusion&#039;&#039;&#039;, LoRA provides an efficient way to adapt the base model to specific styles, characters, or concepts without retraining it from scratch, and the resulting LoRA files are small enough to share easily. This makes LoRA a vital component of community development around &#039;&#039;&#039;Stable Diffusion models&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
=== Practical Implementation ===&lt;br /&gt;
&lt;br /&gt;
For those looking to implement LoRA in their projects, a package has been released that facilitates the integration of LoRA with PyTorch models. It is particularly beneficial for &#039;&#039;&#039;Stable Diffusion AI art&#039;&#039;&#039; projects or any other workflow that requires efficient model adaptation.&lt;br /&gt;
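&lt;br /&gt;
For Stable Diffusion in particular, one common route is loading a trained LoRA into a &amp;lt;code&amp;gt;diffusers&amp;lt;/code&amp;gt; pipeline; the model id and file names in the sketch below are placeholders, not recommendations.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Sketch: apply a LoRA to a Stable Diffusion pipeline with the diffusers package.&lt;br /&gt;
import torch&lt;br /&gt;
from diffusers import StableDiffusionPipeline&lt;br /&gt;
&lt;br /&gt;
pipe = StableDiffusionPipeline.from_pretrained(&lt;br /&gt;
    &#039;runwayml/stable-diffusion-v1-5&#039;, torch_dtype=torch.float16&lt;br /&gt;
).to(&#039;cuda&#039;)&lt;br /&gt;
&lt;br /&gt;
# Placeholder directory and file name for a LoRA you trained or downloaded.&lt;br /&gt;
pipe.load_lora_weights(&#039;path/to/lora_dir&#039;, weight_name=&#039;my_style_lora.safetensors&#039;)&lt;br /&gt;
&lt;br /&gt;
image = pipe(&#039;a castle at sunset, my_style&#039;).images[0]&lt;br /&gt;
image.save(&#039;castle.png&#039;)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;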
&lt;br /&gt;
=== Free Stable Diffusion and LoRA ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Free Stable Diffusion&#039;&#039;&#039; options can greatly benefit from LoRA, as it enables developers to adapt powerful models without overwhelming hardware requirements.&lt;br /&gt;
&lt;br /&gt;
=== Conclusion ===&lt;br /&gt;
&lt;br /&gt;
LoRA offers a promising approach for adapting large language models efficiently, and its application in &#039;&#039;&#039;Stable Diffusion&#039;&#039;&#039; technologies makes it an indispensable tool for advanced &#039;&#039;&#039;AI art&#039;&#039;&#039; and &#039;&#039;&#039;AI generator&#039;&#039;&#039; systems. Whether you&#039;re a developer interested in &#039;&#039;&#039;Stable Diffusion models&#039;&#039;&#039; or an end-user exploring &#039;&#039;&#039;Stable Diffusion prompts&#039;&#039;&#039;, LoRA provides an efficient and high-quality option for you.&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Automatic1111&amp;diff=237</id>
		<title>Automatic1111</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Automatic1111&amp;diff=237"/>
		<updated>2023-09-17T05:26:29Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:Automatic1111 gui.png|thumb|610x610px]]&lt;br /&gt;
= Automatic1111 =&lt;br /&gt;
If Stable Diffusion is the engine that makes it all possible, Automatic1111 is the graphical user interface (GUI) that makes it easy to use. &#039;&#039;&#039;Automatic1111&#039;&#039;&#039; is a web-based application that allows you to generate images using the &#039;&#039;&#039;Stable Diffusion&#039;&#039;&#039; model. Several GUIs surfaced when SD was originally released, but Automatic1111 quickly rose to the top and has become the most widely used interface for SD image generation. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Automatic1111&#039;&#039;&#039; is one of the most popular Stable Diffusion GUIs. It offers a wide range of features, including:&lt;br /&gt;
&lt;br /&gt;
=== Features ===&lt;br /&gt;
* Tons of [[Extensions]]&lt;br /&gt;
* [[Lora]]&amp;lt;nowiki/&amp;gt;s help drastically improve style and quality with low impact on processing.&lt;br /&gt;
* Support for multiple models&lt;br /&gt;
* [[PNG Info]]&lt;br /&gt;
* The ability to use text prompts&lt;br /&gt;
* The ability to save and load images&lt;br /&gt;
* The ability to export images in various formats&lt;br /&gt;
* A built-in image viewer&lt;br /&gt;
* A community forum&lt;br /&gt;
&#039;&#039;&#039;Automatic1111&#039;&#039;&#039; is a powerful tool that can be used to create stunning images. It is easy to use and has a wide range of features. If you are interested in generating images using Stable Diffusion, &#039;&#039;&#039;Automatic1111&#039;&#039;&#039; is a great option.&lt;br /&gt;
= How to install &#039;&#039;&#039;Automatic1111&#039;&#039;&#039; =&lt;br /&gt;
&lt;br /&gt;
==Prerequisites==&lt;br /&gt;
&lt;br /&gt;
* Computer with a recent version of Python installed.&lt;br /&gt;
* Graphics card with at least 6GB of VRAM.&lt;br /&gt;
&lt;br /&gt;
==Instructions==&lt;br /&gt;
&lt;br /&gt;
# Download the Stable Diffusion WebUI from GitHub: [https://github.com/AUTOMATIC1111/stable-diffusion-webui]&lt;br /&gt;
# Extract the downloaded file to a convenient location.&lt;br /&gt;
# Open a terminal window and navigate to the extracted folder.&lt;br /&gt;
# Run the following command to install the necessary dependencies:&lt;br /&gt;
  &amp;lt;pre&amp;gt;&lt;br /&gt;
  pip install -r requirements.txt&lt;br /&gt;
  &amp;lt;/pre&amp;gt;&lt;br /&gt;
# Run the following command to start the Stable Diffusion WebUI (on Windows, run &amp;lt;code&amp;gt;webui-user.bat&amp;lt;/code&amp;gt; instead):&lt;br /&gt;
  &amp;lt;pre&amp;gt;&lt;br /&gt;
  ./webui.sh&lt;br /&gt;
  &amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Stable Diffusion WebUI will open in your web browser. You can then start generating images!&lt;br /&gt;
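&lt;br /&gt;
Once it is running, you can also generate images programmatically. The short sketch below assumes the WebUI was launched with the --api flag and is listening on the default local address.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Sketch: request one txt2img generation from a locally running WebUI (--api flag).&lt;br /&gt;
import base64, requests&lt;br /&gt;
&lt;br /&gt;
payload = {&#039;prompt&#039;: &#039;a lighthouse at dawn, highly detailed&#039;,&lt;br /&gt;
           &#039;steps&#039;: 20, &#039;width&#039;: 512, &#039;height&#039;: 512}&lt;br /&gt;
r = requests.post(&#039;http://127.0.0.1:7860/sdapi/v1/txt2img&#039;, json=payload)&lt;br /&gt;
with open(&#039;lighthouse.png&#039;, &#039;wb&#039;) as f:&lt;br /&gt;
    f.write(base64.b64decode(r.json()[&#039;images&#039;][0]))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;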
&lt;br /&gt;
==Additional Notes==&lt;br /&gt;
&lt;br /&gt;
* The Stable Diffusion WebUI is still under development, so you may encounter some bugs.&lt;br /&gt;
* If you are having trouble installing or running the Stable Diffusion WebUI, you can ask for help on the Stable Diffusion Discord server: [https://discord.gg/sebastiankamph]&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Extensions&amp;diff=236</id>
		<title>Extensions</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Extensions&amp;diff=236"/>
		<updated>2023-09-17T05:19:13Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Once you get started on Automatic1111, you&#039;re going to want to start customizing it to your liking. This is done through extensions.&lt;br /&gt;
&lt;br /&gt;
= How to Install Extensions on Automatic 1111 =&lt;br /&gt;
[[File:A1111 Extensions Install.png|680x680px|Extensions Tab|right]]&lt;br /&gt;
# Open the Stable Diffusion WebUI.&lt;br /&gt;
# Click on the &amp;quot;Extensions&amp;quot; tab.&lt;br /&gt;
# Click on the &amp;quot;Install&amp;quot; button.&lt;br /&gt;
# Enter the URL of the extension you want to install.&lt;br /&gt;
# Click on the &amp;quot;Install&amp;quot; button again.&lt;br /&gt;
&lt;br /&gt;
The extension will be installed in the &amp;lt;code&amp;gt;extensions&amp;lt;/code&amp;gt; directory of the Stable Diffusion WebUI. You can then enable the extension by clicking on the checkbox next to its name.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Additional Considerations ==&lt;br /&gt;
* You can also install extensions manually by copying the extension directory into the &amp;lt;code&amp;gt;extensions&amp;lt;/code&amp;gt; directory of the Stable Diffusion WebUI.&lt;br /&gt;
* If you are installing an extension from GitHub manually, you can clone it into the &amp;lt;code&amp;gt;extensions&amp;lt;/code&amp;gt; directory with the following command:&lt;br /&gt;
&amp;lt;source&amp;gt;&lt;br /&gt;
git clone https://github.com/[username]/[extension_name].git&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;br /&gt;
* Once you have installed an extension, you may need to restart the Stable Diffusion WebUI for the extension to take effect.&lt;br /&gt;
&lt;br /&gt;
== Popular Extensions ==&lt;br /&gt;
* &#039;&#039;&#039;[[After Detailer]]:&#039;&#039;&#039; This extension adds additional detail to the generated images. (You &#039;&#039;&#039;DON&#039;T&#039;&#039;&#039; need to download any model from huggingface.)&lt;br /&gt;
* &#039;&#039;&#039;[[ControlNet]]:&#039;&#039;&#039; This extension allows you to guide the composition of generated images using auxiliary inputs such as edge maps, depth maps, and poses.&lt;br /&gt;
* &#039;&#039;&#039;Model Preset Manager:&#039;&#039;&#039; This extension allows you to easily switch between different models.&lt;br /&gt;
* &#039;&#039;&#039;System Information:&#039;&#039;&#039; This extension displays information about your system, such as the CPU and GPU usage.&lt;br /&gt;
* &#039;&#039;&#039;3D Pose:&#039;&#039;&#039; This extension allows you to add a 3D pose to the generated images.&lt;br /&gt;
* &#039;&#039;&#039;Aspect Ratio Helper:&#039;&#039;&#039; This extension helps you to maintain the aspect ratio of the generated images.&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Automatic1111&amp;diff=235</id>
		<title>Automatic1111</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Automatic1111&amp;diff=235"/>
		<updated>2023-09-17T04:57:29Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: /* Instructions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:Automatic1111 gui.png|thumb|610x610px]]&lt;br /&gt;
= Automatic1111 =&lt;br /&gt;
If Stable Diffusion is the engine that makes it all possible, Automatic1111 is the graphical user interface (GUI) that makes it easy to use. &#039;&#039;&#039;Automatic1111&#039;&#039;&#039; is a web-based application that allows you to generate images using the &#039;&#039;&#039;Stable Diffusion&#039;&#039;&#039; model. Several GUIs surfaced when SD was originally released, but Automatic1111 quickly rose to the top and has become the most widely used interface for SD image generation. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Automatic1111&#039;&#039;&#039; is one of the most popular Stable Diffusion GUIs. It offers a wide range of features, including:&lt;br /&gt;
&lt;br /&gt;
=== Features ===&lt;br /&gt;
* Tons of [[Extensions]]&lt;br /&gt;
* Support for multiple models&lt;br /&gt;
* The ability to use text prompts&lt;br /&gt;
* The ability to save and load images&lt;br /&gt;
* The ability to export images in various formats&lt;br /&gt;
* A built-in image viewer&lt;br /&gt;
* A community forum&lt;br /&gt;
&#039;&#039;&#039;Automatic1111&#039;&#039;&#039; is a powerful tool that can be used to create stunning images. It is easy to use and has a wide range of features. If you are interested in generating images using Stable Diffusion, &#039;&#039;&#039;Automatic1111&#039;&#039;&#039; is a great option.&lt;br /&gt;
= How to install &#039;&#039;&#039;Automatic1111&#039;&#039;&#039; =&lt;br /&gt;
&lt;br /&gt;
==Prerequisites==&lt;br /&gt;
&lt;br /&gt;
* Computer with a recent version of Python installed.&lt;br /&gt;
* Graphics card with at least 6GB of VRAM.&lt;br /&gt;
&lt;br /&gt;
==Instructions==&lt;br /&gt;
&lt;br /&gt;
# Download the Stable Diffusion WebUI from GitHub: [https://github.com/AUTOMATIC1111/stable-diffusion-webui]&lt;br /&gt;
# Extract the downloaded file to a convenient location.&lt;br /&gt;
# Open a terminal window (Command Prompt for Windows) and navigate to the extracted folder (&amp;quot; cd Your\file\location\here&amp;quot;)&lt;br /&gt;
# Run the following command to install the necessary dependencies:&lt;br /&gt;
  &amp;lt;pre&amp;gt;&lt;br /&gt;
  pip install -r requirements.txt&lt;br /&gt;
  &amp;lt;/pre&amp;gt;&lt;br /&gt;
# Run the following command to start the Stable Diffusion WebUI (on Windows, run &amp;lt;code&amp;gt;webui-user.bat&amp;lt;/code&amp;gt; instead):&lt;br /&gt;
  &amp;lt;pre&amp;gt;&lt;br /&gt;
  ./webui.sh&lt;br /&gt;
  &amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Stable Diffusion WebUI will open in your web browser. You can then start generating images!&lt;br /&gt;
&lt;br /&gt;
==Additional Notes==&lt;br /&gt;
&lt;br /&gt;
* The Stable Diffusion WebUI is still under development, so you may encounter some bugs.&lt;br /&gt;
* If you are having trouble installing or running the Stable Diffusion WebUI, you can ask for help on the Stable Diffusion Discord server: [https://discord.gg/sebastiankamph]&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=MediaWiki:Sidebar&amp;diff=234</id>
		<title>MediaWiki:Sidebar</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=MediaWiki:Sidebar&amp;diff=234"/>
		<updated>2023-09-17T04:54:37Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: Created page with &amp;quot; * navigation ** mainpage|mainpage-description ** Automatic1111|Automatic1111 ** recentchanges-url|recentchanges ** randompage-url|randompage ** helppage|help-mediawiki * SEARCH * TOOLBOX * LANGUAGES&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
* navigation&lt;br /&gt;
** mainpage|mainpage-description&lt;br /&gt;
** Automatic1111|Automatic1111&lt;br /&gt;
** recentchanges-url|recentchanges&lt;br /&gt;
** randompage-url|randompage&lt;br /&gt;
** helppage|help-mediawiki&lt;br /&gt;
* SEARCH&lt;br /&gt;
* TOOLBOX&lt;br /&gt;
* LANGUAGES&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=PNG_Info&amp;diff=233</id>
		<title>PNG Info</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=PNG_Info&amp;diff=233"/>
		<updated>2023-09-17T04:51:13Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What Is the PNG Info Tab? ==&lt;br /&gt;
The PNG Info tab is a tab in the [[Automatic1111]] WebUI that allows the user to upload an image previously generated by Stable Diffusion and read back the generation metadata (prompt, seed, sampler, and other settings) embedded in it, so the image can be recreated. This is excellent if you want to go back and work on an image you&#039;ve already created.&lt;br /&gt;
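&lt;br /&gt;
If you would rather read that metadata outside the WebUI, it is stored as a text chunk in the PNG file. The sketch below uses Pillow; the file name is only an example, and it assumes metadata saving was left enabled when the image was generated.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Sketch: read the generation parameters the WebUI embeds in its PNG output.&lt;br /&gt;
from PIL import Image&lt;br /&gt;
&lt;br /&gt;
img = Image.open(&#039;00001-1234567890.png&#039;)  # example file name&lt;br /&gt;
print(img.info.get(&#039;parameters&#039;, &#039;no generation metadata found&#039;))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;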
&lt;br /&gt;
[[File:PNG Info.png]]&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=File:PNG_Info.png&amp;diff=232</id>
		<title>File:PNG Info.png</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=File:PNG_Info.png&amp;diff=232"/>
		<updated>2023-09-17T04:49:29Z</updated>

		<summary type="html">&lt;p&gt;StableTiger3: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;PNG Info Tab&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
</feed>