If you want to play with this, as in really play, with over a dozen variant models, acceleration LoRAs, and a vibrant community, ya gotta check out:
On the other side, are there any projects focused on performance instead? I have the VRAM available to run Wan 2.1, but it still takes minutes per frame. Basically something like what vLLM is for running local LLM weights, but for video/Wan?
There are a lot of people focused on performance, using various methods, just as there are a lot of people focused on non-performance issues like fine-tunes that add aspects the models lack: linking professional media terminology to the model, pop culture terminology the model does not know, accuracy of body posture during fight, dance, gymnastics, and sports activity, and then less flashy but pragmatic actions like proper use of tableware, chopsticks, keyboards, and musical instruments - complex actions that stand out when done incorrectly or never shown. The model's knowledge is high but has limits, which people are adding to.
There is also a ton of Wan video activity in the ComfyUI community. Every day for a stretch, starting about two weeks ago, ComfyUI shipped updates specific to Wan 2.2 video integration in the standard installation. ComfyUI is a significantly more complex application than Wan2GP, though.
That doesn't stop Mac / iPhone from using these models. I've built videos with Wan 2.2 on my wife's M4 Mac Mini with 24 GB of RAM. It might take a little longer to render though ;)
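For anyone wondering how that works: these models run through PyTorch, so the same pipeline code can target Nvidia, Apple Silicon, or plain CPU. A minimal device-selection sketch (illustrative only, not the commenter's actual setup):

```python
import torch

# Pick the best available backend: CUDA on Nvidia GPUs,
# MPS (Metal) on M-series Macs with unified memory, else CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

print(f"running on: {device}")
# A loaded pipeline would then be moved with something like pipe.to(device).
```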
I wish they'd state suggested or required hardware upfront.
Also disappointing that I haven't seen anything target the new Ryzen AI chips that can do 96 GB, since they seem pretty capable. I'm not sure how much of the M4 Pro's memory on the Apple side can be utilized for this, but it seems like the typical machines are 48 or 64 GB these days. A lot more bang for your buck than an Nvidia card, on paper?
Well, they sort of do: they keep referring to the 4090 on their GitHub and primary promotional pages (https://wan.video/).
But really, all the various video models want an 80+ GB VRAM card to run comfortably. The contortions the ComfyUI community goes through to get things running at a reasonable speed on the current, dinky-sized-VRAM consumer cards are impressive.
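To give a flavor of those contortions, here is a minimal sketch of the usual tricks outside ComfyUI, using Hugging Face diffusers: half-precision weights plus CPU offload, so only the submodule currently running sits in VRAM. The model repo ID is an assumption, not something stated above; quantized (e.g. GGUF/FP8) checkpoints are another common route.

```python
import torch
from diffusers import DiffusionPipeline

# Load in half precision to roughly halve the weight footprint.
pipe = DiffusionPipeline.from_pretrained(
    "Wan-AI/Wan2.2-T2V-A14B-Diffusers",  # assumed diffusers-format repo
    torch_dtype=torch.bfloat16,
)

# Keep weights in system RAM and stream each submodule (text encoder,
# transformer, VAE) to the GPU only while it is running. Slower, but it
# lets a 24 GB card run models that would otherwise need far more VRAM.
pipe.enable_model_cpu_offload()
# enable_sequential_cpu_offload() trades even more speed for even less VRAM.

result = pipe(
    prompt="a red fox running through snow, cinematic",
    num_frames=81,
    num_inference_steps=30,
)
# For video pipelines, result.frames typically holds the decoded frames.
```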
Arguably the most interesting facts about the new Wan 2.2 model:
- they are now using a 27B MoE architecture (with two 14B experts, for low-level and high-level detail), an approach usually only used for autoregressive LLMs rather than diffusion models
- the smaller 5B model supports up to 720p24 video and runs on 24 GB of VRAM, e.g. an RTX 4090, a consumer graphics card
- if their benchmarks are reliable, the model performance is SOTA even compared to closed source models
- The 27B "MoE" is not the MoE commonly referred to in the LLM world. It is not MoE on the FFN layers; it simply means two different models used for different denoising timestep ranges (exactly the same as SDXL-Base / SDXL-Refiner; see the sketch after this list). Calling it MoE is not technically wrong, but claiming it "was usually only used for autoregressive LLMs rather than diffusion models" is just wrong (not to mention HiDream-I1 is a model that actually incorporates MoE layers, in the FFN, and is a diffusion model).
- The A14B models can run on 24 GiB of VRAM too, with CPU offloading and quantization.
- Yes, it is SotA even including some closed source models.
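If the two-expert scheme sounds abstract: it amounts to a simple router over the denoising schedule, not learned per-token routing. A toy sketch of the idea (the model stand-ins and boundary value are illustrative, not Wan's actual code):

```python
import torch

def fake_expert(name):
    """Stand-in for a 14B denoiser; a real expert predicts noise from latents."""
    def denoise(latents, t):
        print(f"timestep {t:4d} -> {name} expert")
        return latents  # a real model would return a denoised/noise prediction
    return denoise

high_noise_expert = fake_expert("high-noise")  # handles the early, noisy steps
low_noise_expert = fake_expert("low-noise")    # handles the late, detail steps

BOUNDARY = 875  # illustrative switch point on a 0-1000 timestep scale

def pick_expert(t):
    # The "MoE" here is just this if-statement over the timestep range,
    # much like SDXL-Base handing off to SDXL-Refiner. Only one 14B expert
    # is active per step, so only ~14B parameters do work at a time.
    return high_noise_expert if t >= BOUNDARY else low_noise_expert

latents = torch.randn(1, 16, 8, 8)
for t in range(1000, 0, -100):  # coarse 10-step schedule for the demo
    latents = pick_expert(t)(latents, t)
```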
Are there video generation benchmarks, similar to the benchmarks for LLMs? The reason I ask is that with a lot of these models you have to go through a long setup cycle before you see an output, and often they break on basic tasks requiring physics, state, etc. I would love to see a comparison of models across basic things like that.
https://github.com/deepbeepmeep/Wan2GP
And the discord community: https://discord.gg/g7efUW9jGV
"Wan2GP" is AI video and images "for the GPU poor", get all this operating with as little as 6GB VRAM, Nvidia only.