As a CG animator, I have been testing this software for months. I think the A1111 has a lot of potential as an AI tool for animation and video projects, but it still needs a lot of improvement. The extensions and the main package do not work well together, and they lack a clear method. I would call it an “Unstable Diffusion” for now.
Thanks, it was clear and efficient; it works very well for me. It would be great to get a good explanation of the video source and ControlNet.
Very informative tutorial. Thanks.
Thank YOU! Amazing! Really helpful with examples and easy to understand instructions! Really appreciate this!
Love it ❤ Very well explained!
I'm having a hard time getting the LoRAs to activate; they don't generate movement like zoom in, zoom out, or panning.
The question I have is how do you do image-to-video? SVD is too much for my system, and when I try to put a real-life photo into IP-Adapter, the picture becomes all dark and so on.
Your script sounds very ChatGPT-like lol
Hello, can you please make a video on how to run Automatic1111 Stable Diffusion while connected to Google Colab? I'm on a MacBook Air M2 with 8 GB of RAM, and I keep getting memory errors when trying to use ReActor face swap. A tutorial on how to use Stable Diffusion with a Google Colab GPU would be very helpful.
Is it possible with inpaint?
It doesn't work for me :/
So basically AnimateDiff is good for two seconds of animation, because every time I go beyond that mark things get trippy and incoherent. Is there any way to keep coherence for longer than those two seconds?
I've never been able to get AnimateDiff to work... it always hits a CUDA error halfway through the calculation.