
Similar Projects

Open-source video diffusion Transformer. Similar to #Sora, our Latte model is also built on a *video diffusion Transformer* architecture and conducts an in-depth study of the LDM + DiT design space for video generation.

Project: https://maxin-cn.github.io/latte_project/
Code: https://github.com/Vchitect/Latte

Latte: Latent Diffusion Transformer for Video Generation

Xin Ma(1,2‡), Yaohui Wang(2†), Gengyun Jia(3), Xinyuan Chen(2), Ziwei Liu(4), Yuan-Fang Li(1), Cunjian Chen(1), Yu Qiao(2)

(1) Department of Data Science & AI, Faculty of Information Technology, Monash University; (2) Shanghai Artificial Intelligence Laboratory; (3) Nanjing University of Posts and Telecommunications; (4) S-Lab, Nanyang Technological University

(‡) Work done during an internship at Shanghai AI Laboratory. (†) Corresponding author.


Abstract

We propose a novel Latent Diffusion Transformer, namely Latte, for video generation. Latte first extracts spatio-temporal tokens from input videos and then adopts a series of Transformer blocks to model the video distribution in the latent space. To handle the substantial number of tokens extracted from videos, we introduce four efficient variants that decompose the spatial and temporal dimensions of input videos. To improve the quality of generated videos, we determine the best practices of Latte through rigorous experimental analysis, covering video clip patch embedding, model variants, timestep-class information injection, temporal positional embedding, and learning strategies. Our comprehensive evaluation demonstrates that Latte achieves state-of-the-art performance across four standard video generation datasets, i.e., FaceForensics, SkyTimelapse, UCF101, and Taichi-HD. In addition, we extend Latte to the text-to-video (T2V) generation task, where it achieves results comparable to recent T2V models. We strongly believe that Latte provides valuable insights for future research on incorporating Transformers into diffusion models for video generation.
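To make the spatial-temporal decomposition mentioned above concrete, the sketch below shows one way to alternate spatial self-attention (over patches within each frame) with temporal self-attention (over frames at each patch position) on latent video tokens, in the spirit of Latte's interleaved variant. This is a minimal illustration under stated assumptions, not the released implementation: the class name `SpatioTemporalBlock`, the tensor layout, and the use of `nn.MultiheadAttention` are all choices made here for clarity.

```python
# Minimal sketch (assumptions, not the official Latte code): latent video
# tokens of shape (batch, frames, patches, dim) pass through alternating
# spatial and temporal self-attention, followed by a shared pointwise MLP.
import torch
import torch.nn as nn

class SpatioTemporalBlock(nn.Module):
    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.norm_s = nn.LayerNorm(dim)
        self.attn_s = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_t = nn.LayerNorm(dim)
        self.attn_t = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_mlp = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, F, P, D) -- batch, frames, patches per frame, channels.
        b, f, p, d = x.shape

        # Spatial attention: fold frames into the batch, attend over patches.
        xs = x.reshape(b * f, p, d)
        h = self.norm_s(xs)
        xs = xs + self.attn_s(h, h, h, need_weights=False)[0]
        x = xs.reshape(b, f, p, d)

        # Temporal attention: fold patches into the batch, attend over frames.
        xt = x.permute(0, 2, 1, 3).reshape(b * p, f, d)
        h = self.norm_t(xt)
        xt = xt + self.attn_t(h, h, h, need_weights=False)[0]
        x = xt.reshape(b, p, f, d).permute(0, 2, 1, 3)

        # Pointwise MLP applied to every token independently.
        return x + self.mlp(self.norm_mlp(x))

# Usage: 2 videos, 16 frames, 256 latent patches, 384-dim tokens.
tokens = torch.randn(2, 16, 256, 384)
block = SpatioTemporalBlock(dim=384, num_heads=6)
out = block(tokens)  # same shape as the input
```

Folding one axis into the batch dimension keeps each attention call quadratic only in patches or only in frames, rather than in their product, which is the efficiency motivation behind decomposing the two dimensions.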