FancyVideo is an open-source video generation model jointly developed by the 360 AI team and Sun Yat-sen University.
Its key innovation is frame-specific textual guidance, which lets the generated videos be both dynamic and temporally consistent.
FancyVideo improves on existing text-control mechanisms with a carefully designed Cross-frame Textual Guidance Module (CTGM), addressing the difficulty current text-to-video (T2V) models have in generating videos with coherent motion.
CTGM consists of three submodules: the Temporal Information Injector (TII), the Temporal Affinity Refiner (TAR), and the Temporal Feature Booster (TFB), which apply frame-specific text guidance at the beginning, middle, and end of cross-attention, respectively.
FancyVideo achieves state-of-the-art T2V generation results on the EvalCrafter benchmark and can synthesize dynamic, consistent videos.
GitHub repository: https://github.com/360CVGroup/FancyVideo
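As a rough mental model of where the three CTGM submodules act, the minimal PyTorch sketch below (written for this article, not taken from the FancyVideo codebase; the module internals, names, and tensor layouts are assumptions) places TII before the attention computation, TAR on the latent-text affinity matrix, and TFB on the attended output:

import torch
import torch.nn as nn

class CTGMSketch(nn.Module):
    """Conceptual sketch only -- NOT FancyVideo's real CTGM implementation."""
    def __init__(self, dim, text_dim, num_frames):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(text_dim, dim)
        self.to_v = nn.Linear(text_dim, dim)
        # Placeholder temporal layers standing in for the three submodules
        self.tii = nn.Linear(dim, dim)                 # TII: inject frame-specific info before attention
        self.tar = nn.Linear(num_frames, num_frames)   # TAR: refine the affinity matrix along the frame axis
        self.tfb = nn.Linear(dim, dim)                 # TFB: boost the attended features at the end

    def forward(self, latents, text_emb):
        # latents: (B, F, N, D) per-frame spatial tokens; text_emb: (B, T, D_text) text tokens
        b, f, n, d = latents.shape
        x = self.tii(latents)                                       # 1) beginning of cross-attention
        q = self.to_q(x)
        k, v = self.to_k(text_emb), self.to_v(text_emb)
        affinity = torch.einsum("bfnd,btd->bfnt", q, k) / d ** 0.5  # per-frame latent-text affinity
        affinity = affinity + self.tar(affinity.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)  # 2) middle: refine along frames
        out = torch.einsum("bfnt,btd->bfnd", affinity.softmax(dim=-1), v)
        return out + self.tfb(out)                                  # 3) end: temporal feature boost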
I. Environment Setup
1. Python environment
Python 3.10 or later is recommended.
2. Installing the pip packages
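For example, a dedicated conda environment can be created first (the environment name fancyvideo below is arbitrary):

conda create -n fancyvideo python=3.10 -y
conda activate fancyvideo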
pip install torch==2.1.2+cu118 torchvision==0.16.2+cu118 torchaudio==2.1.2 --extra-index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
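After installation, an optional one-line check confirms that the CUDA 11.8 build of PyTorch is active:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"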
3. Download the FancyVideo model:
git lfs install
git clone https://huggingface.co/qihoo360/FancyVideo
4. Download the stable-diffusion-v1-5 model:
git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5
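Optionally, the local stable-diffusion-v1-5 clone can be sanity-checked by loading it with diffusers (this assumes diffusers is pulled in by requirements.txt; ./stable-diffusion-v1-5 is simply the directory created by git clone):

import torch
from diffusers import StableDiffusionPipeline

# Load the locally cloned weights without contacting the Hugging Face Hub
pipe = StableDiffusionPipeline.from_pretrained(
    "./stable-diffusion-v1-5", torch_dtype=torch.float16, local_files_only=True
)
print("stable-diffusion-v1-5 loaded:", type(pipe).__name__)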
II. Functional Test
1. Running the test:
(1) Calling the pipeline from Python code
import os
import argparse
import torch
import yaml
from skimage import img_as_ubyte
from fancyvideo.pipelines.fancyvideo_infer_pipeline import InferPipeline


def load_config(config_path):
    with open(config_path, "r") as fp:
        return yaml.safe_load(fp)


def load_prompts(prompt_path):
    with open(prompt_path, "r") as fp:
        return [line.strip() for line in fp.readlines()]


def check_and_create_folder(folder_path):
    if not os.path.exists(folder_path):
        os.makedirs(folder_path, exist_ok=True)


@torch.no_grad()
def process_prompt(infer_pipeline, prompt, reference_image_path, seed, video_length, resolution,
                   use_noise_scheduler_snr, cond_fps, cond_motion_score, output_fps, dst_path):
    print(f"Processing prompt: {prompt}")
    reference_image, video, _ = infer_pipeline.t2v_process_one_prompt(
        prompt=prompt,
        reference_image_path=reference_image_path,
        seed=seed,
        video_length=video_length,
        resolution=resolution,
        use_noise_scheduler_snr=use_noise_scheduler_snr,
        fps=cond_fps,
        motion_score=cond_motion_score,
    )
    frame_list = [img_as_ubyte(frame.cpu().permute(1, 2, 0).float().detach().numpy()) for frame in video]
    infer_pipeline.save_video(frame_list=frame_list, fps=output_fps, dst_path=dst_path)
    print(f"Saved video to: {dst_path}\n")


@torch.no_grad()
def main(args):
    # Load configurations
    config = load_config(args.config)
    model_config = config.get("model", {})
    infer_config = config.get("inference", {})

    # Initialize inference pipeline
    infer_pipeline = InferPipeline(
        text_to_video_mm_path=model_config.get("text_to_video_mm_path"),
        base_model_path=model_config.get("base_model_path"),
        res_adapter_type=model_config.get("res_adapter_type"),
        trained_keys=model_config.get("trained_keys"),
        model_path=model_config.get("model_path"),
        vae_type=model_config.get("vae_type"),
        use_fps_embedding=model_config.get("use_fps_embedding"),
        use_motion_embedding=model_config.get("use_motion_embedding"),
        common_positive_prompt=model_config.get("common_positive_prompt"),
        common_negative_prompt=model_config.get("common_negative_prompt"),
    )

    # Prepare inference parameters
    infer_mode = infer_config.get("infer_mode")
    resolution = infer_config.get("resolution")
    video_length = infer_config.get("video_length")
    output_fps = infer_config.get("output_fps")
    cond_fps = infer_config.get("cond_fps")
    cond_motion_score = infer_config.get("cond_motion_score")
    use_noise_scheduler_snr = infer_config.get("use_noise_scheduler_snr")
    seed = infer_config.get("seed")
    prompt_path = infer_config.get("prompt_path")
    reference_image_folder = infer_config.get("reference_image_folder")
    output_folder = infer_config.get("output_folder")
    check_and_create_folder(output_folder)

    # Load prompts
    prompts = load_prompts(prompt_path)

    # Process each prompt
    for i, prompt in enumerate(prompts):
        reference_image_path = f"{reference_image_folder}/{i}.png" if infer_mode == "i2v" else ""
        dst_path = f"{output_folder}/example_{i}.mp4"
        process_prompt(infer_pipeline, prompt, reference_image_path, seed, video_length, resolution,
                       use_noise_scheduler_snr, cond_fps, cond_motion_score, output_fps, dst_path)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--config", type=str, default="configs/inference/i2v.yaml", help="Path to the configuration file")
    args = parser.parse_args()
    main(args)
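The keys read by the script above imply a config of roughly the following shape (placeholder values only; use the real configs/inference/i2v.yaml shipped with the repository rather than hand-writing paths):

model:
  text_to_video_mm_path: ...
  base_model_path: ...          # presumably points at the stable-diffusion-v1-5 download above
  res_adapter_type: ...
  trained_keys: ...
  model_path: ...
  vae_type: ...
  use_fps_embedding: ...
  use_motion_embedding: ...
  common_positive_prompt: ...
  common_negative_prompt: ...
inference:
  infer_mode: i2v               # "i2v" expects reference images named 0.png, 1.png, ... in reference_image_folder
  resolution: ...
  video_length: ...
  output_fps: ...
  cond_fps: ...
  cond_motion_score: ...
  use_noise_scheduler_snr: ...
  seed: ...
  prompt_path: ...              # text file, one prompt per line
  reference_image_folder: ...
  output_folder: ...

Saved as, say, infer.py (the repository ships its own entry script, so this file name is only illustrative), the code is run with:

python infer.py --config configs/inference/i2v.yaml

Generated clips are written to output_folder as example_0.mp4, example_1.mp4, and so on.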
To be continued...
For more details, follow: 杰哥新技术