Relatively easy ‘video diffusion’ AI is here, and it runs on fairly low-end PCs. FramePack is fully open source and can be run locally. Feed it a source image and a simple instructional prompt, and it generates up to a minute of video at 30 FPS, with reasonable speed and without needing a silly amount of VRAM. Impressive. It’s from lllyasviel, the developer behind ControlNet for Stable Diffusion, Fooocus and WebUI Forge.
Animated PNG demo, 4 MB.
There is now a one-click installer for Windows 10 (though I guess it might be hacked to use CUDA 11.x plus an earlier compatible PyTorch, for Windows 7 users). Note that, once installed, it then fetches 40+ GB of models, ControlNets etc. (the models are here for separate download), so you’re likely to need 50 GB+ of space for a local install. There’s no standalone .torrent at present.
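For those on Linux or avoiding the one-click installer, a manual install looks like the usual routine for lllyasviel’s projects. This is a sketch, not gospel: the repo URL, the `demo_gradio.py` entry point and the CUDA wheel index are assumptions to verify against the project README.

```shell
# Hedged sketch of a manual install; the cu126 wheel index is an
# assumption -- match it to your installed CUDA version.
git clone https://github.com/lllyasviel/FramePack.git
cd FramePack

# PyTorch with CUDA support from the official wheel index
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126

# Remaining Python dependencies
pip install -r requirements.txt

# First launch downloads the 40+ GB of models mentioned above,
# then serves a local Gradio web UI
python demo_gradio.py
```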
It can do widescreen as well as square and phone-screen formats. Generation size appears to be capped at around 640 px, so upscaling would be needed afterwards: no pristine 4K footage, then. Since it works from any pre-made image, the potential for animating Poser and DAZ renders seems obvious. No whining about ‘piracy’ either, that way.
And because it’s free and can speedily generate fairly lengthy clips, it has potential for churning out lots of candidate clips and then stitching a YouTube movie together from the best of them (though of course there’s no lip-sync).
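The stitching step can be done losslessly with ffmpeg’s concat demuxer — a minimal sketch, assuming the chosen clips all share the same codec, resolution and frame rate (the filenames here are hypothetical):

```shell
# List the chosen clips in playback order (hypothetical filenames)
cat > clips.txt <<'EOF'
file 'clip_001.mp4'
file 'clip_007.mp4'
file 'clip_012.mp4'
EOF

# Concat demuxer with stream copy: no re-encoding, so every clip
# must share the same codec, resolution and frame rate (30 fps here)
ffmpeg -f concat -safe 0 -i clips.txt -c copy movie.mp4
```

Stream copy (`-c copy`) avoids a quality-degrading re-encode; if the clips differ in size or codec, drop `-c copy` and let ffmpeg re-encode instead.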