<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://stablediffusionwiki.com/index.php?action=history&amp;feed=atom&amp;title=Stable_Diffusion_Video</id>
	<title>Stable Diffusion Video - Revision history</title>
	<link rel="self" type="application/atom+xml" href="http://stablediffusionwiki.com/index.php?action=history&amp;feed=atom&amp;title=Stable_Diffusion_Video"/>
	<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Stable_Diffusion_Video&amp;action=history"/>
	<updated>2026-05-04T14:43:59Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.39.4</generator>
	<entry>
		<id>http://stablediffusionwiki.com/index.php?title=Stable_Diffusion_Video&amp;diff=268&amp;oldid=prev</id>
		<title>StableTiger3: Created page with &quot;Date: November 21 2023  In a groundbreaking development, a new latent video diffusion model known as &quot;Stable Video Diffusion&quot; has been introduced, setting a new benchmark in high-resolution text-to-video and image-to-video generation. This innovative model marks a significant leap in the realm of video synthesis, leveraging the strengths of latent diffusion models previously used for 2D image creation.  The Stable Video Diffusion model represents a pivotal advancement, a...&quot;</title>
		<link rel="alternate" type="text/html" href="http://stablediffusionwiki.com/index.php?title=Stable_Diffusion_Video&amp;diff=268&amp;oldid=prev"/>
		<updated>2024-01-05T03:44:42Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;Date: November 21 2023  In a groundbreaking development, a new latent video diffusion model known as &amp;quot;Stable Video Diffusion&amp;quot; has been introduced, setting a new benchmark in high-resolution text-to-video and image-to-video generation. This innovative model marks a significant leap in the realm of video synthesis, leveraging the strengths of latent diffusion models previously used for 2D image creation.  The Stable Video Diffusion model represents a pivotal advancement, a...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;Date: November 21, 2023&lt;br /&gt;
&lt;br /&gt;
In a groundbreaking development, a new latent video diffusion model known as &amp;quot;Stable Video Diffusion&amp;quot; has been introduced, setting a new benchmark in high-resolution text-to-video and image-to-video generation. This innovative model marks a significant leap in the realm of video synthesis, leveraging the strengths of latent diffusion models previously used for 2D image creation.&lt;br /&gt;
&lt;br /&gt;
The Stable Video Diffusion model represents a pivotal advancement: it integrates temporal layers into existing image diffusion models and fine-tunes them on select high-quality video datasets. This approach addresses a persistent challenge in the field, where the wide variety of training methods in use has left no consensus on a standardized strategy for video data curation.&lt;br /&gt;
&lt;br /&gt;
[[File:StableVideoDiffusion.gif]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The paper detailing this breakthrough identifies three crucial stages for successfully training video Latent Diffusion Models (LDMs): text-to-image pretraining, video pretraining, and high-quality video finetuning. Together, these stages enhance the model&amp;#039;s ability to generate more accurate and detailed videos from textual or image inputs.&lt;br /&gt;
&lt;br /&gt;
The introduction of Stable Video Diffusion promises a transformative impact on video content creation, offering new capabilities for generating high-quality videos from simple text or image inputs. This development is not just a step but a giant leap forward in the field of video synthesis and artificial intelligence.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The full details of this innovative model can be found in the recently published paper, which delves into the intricate mechanics and training methodologies of Stable Video Diffusion.&lt;br /&gt;
&lt;br /&gt;
Stay tuned for further updates on this revolutionary technology that is set to redefine the boundaries of video generation.&lt;/div&gt;</summary>
		<author><name>StableTiger3</name></author>
	</entry>
</feed>