I want to train a LoRA for the VACE version of Wan2.2.
My main task is stable video outpainting.
So, my dataset contains, per sample:
cropped.mp4 - the cropped video with a grey border (conditioning input).
mask.mp4 - a video where every frame is the outpainting mask.
orig.mp4 - the original video, used to compare against the output.
description.txt - a text file with the positive prompt.
Dataset layout:
- <root_dir>
-- <video_guid>
---- cropped.mp4
---- mask.mp4
---- orig.mp4
---- description.txt
-- <video_guid>
---- cropped.mp4
---- mask.mp4
---- orig.mp4
---- description.txt
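Most trainers expect a flat index of samples, so as a starting point here is a minimal sketch of how a folder tree like the one above could be indexed before feeding it to a dataloader. This is only an assumption about how you might wire it up (the `index_dataset` helper and the dict keys are my own names, not part of any trainer's API); the file names come from your layout.

```python
from pathlib import Path

# File names taken from the dataset layout above.
REQUIRED = ("cropped.mp4", "mask.mp4", "orig.mp4", "description.txt")

def index_dataset(root):
    """Walk <root_dir>/<video_guid>/ folders and collect complete samples."""
    samples = []
    for sample_dir in sorted(Path(root).iterdir()):
        if not sample_dir.is_dir():
            continue
        # Skip folders that are missing any of the four expected files.
        if not all((sample_dir / name).exists() for name in REQUIRED):
            continue
        samples.append({
            "cropped": sample_dir / "cropped.mp4",   # VACE conditioning video
            "mask": sample_dir / "mask.mp4",         # per-frame outpainting mask
            "target": sample_dir / "orig.mp4",       # original video (target)
            "prompt": (sample_dir / "description.txt").read_text().strip(),
        })
    return samples
```

A wrapper like this also makes it easy to validate the dataset up front (e.g. log every skipped incomplete `<video_guid>` folder) before starting a long training run.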
How can I train a LoRA on the VACE layers using this full dataset?
Example model: https://huggingface.co/linoyts/Wan2.2-VACE-Fun-14B-diffusers