Stable Diffusion Developer Runway Launches Gen-2, an AI That Generates Video From Text


Runway, the company behind the artificial intelligence (AI) model Stable Diffusion, has announced Gen-2, an AI that produces videos from text input alone. It is an improvement over Gen-1, which required users to supply their own source footage as the basis for the generated video.



This generative AI also produces video at a higher resolution and with greater accuracy than the previous version. Gen-2 offers several modes for generating video. The first mode works from text input alone. The second accepts text plus an image, with the input image serving as the basis of the generated video. The third mode is Stylization, which applies the style described by a text prompt or reference image to every frame of an input video.
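To make these input combinations concrete, here is a minimal Python sketch of how such modes might map onto a video-generation API. Runway has not published a public Gen-2 API, so the endpoint URL, parameters, and the `generate_video` helper below are hypothetical illustrations, not Runway's actual interface.

```python
import requests
from typing import Optional

# Hypothetical endpoint and key for illustration only; not Runway's real API.
API_URL = "https://api.example.com/gen2/generate"
API_KEY = "YOUR_API_KEY"

def generate_video(prompt: str,
                   image_path: Optional[str] = None,
                   video_path: Optional[str] = None) -> bytes:
    """Sketch of how Gen-2's input modes could map onto one request:

    - prompt only            -> text-to-video
    - prompt + image         -> text-and-image-to-video
    - prompt + source video  -> stylization of the input video
    """
    files = {}
    if image_path:
        files["image"] = open(image_path, "rb")  # image used as the basis of the clip
    if video_path:
        files["video"] = open(video_path, "rb")  # source video to be restyled
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        data={"prompt": prompt},
        files=files or None,
        timeout=600,  # video generation is slow; allow a long wait
    )
    response.raise_for_status()
    return response.content  # raw bytes of the generated video

# Example: plain text-to-video
# clip = generate_video("a sailboat drifting at sunset")
```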




The fourth mode is Storyboard, which turns rough mockup footage into a fully stylized and animated render based on text input. Next is Mask mode, which lets users isolate specific subjects in the source video so that Gen-2 changes only those objects.




Then there is Render mode, which turns untextured pre-visualization renders into finished animation complete with textures. Lastly, Customization mode lets users fine-tune the model itself to produce higher-fidelity final videos.


Runway Gen-2 is still under development and is therefore accessible only to a limited number of users by invitation.
