SciFi Morph VFX in ComfyUI & Stable Diffusion

Trying ComfyUI with Stable Diffusion to generate this guided animation of a robotic hand, then adding a procedural glitch FX in Natron to create a cool sci-fi morph transition. This is a WIP.

Created on 16th November 2024

The problem SciFi Morph VFX in ComfyUI & Stable Diffusion solves

In my opinion, this approach can be very useful for generating rough ideas and pre-visualisation when tackling a complex VFX shot today.

Challenges I ran into

I created a simple workflow in ComfyUI using an SD1.5-based checkpoint and LoRA, IPAdapter for image reference, a Depth ControlNet, YOLOv8 for image segmentation, AnimateLCM for the animation, and 4x-UltraSharp with Ultimate SD Upscale for upscaling.
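For batch runs like this, a ComfyUI workflow can also be queued headlessly: ComfyUI's local server accepts a workflow graph (exported via "Save (API Format)") as JSON on its `POST /prompt` endpoint. The sketch below is a minimal, hedged example of wrapping and submitting such a graph; the host/port default and the tiny example graph are assumptions, not part of the original project.

```python
import json
import urllib.request
import uuid

def build_prompt_payload(workflow: dict, client_id: str = "") -> dict:
    """Wrap an API-format workflow graph in the payload shape that
    ComfyUI's POST /prompt endpoint expects."""
    return {"prompt": workflow, "client_id": client_id or uuid.uuid4().hex}

def queue_prompt(workflow: dict, host: str = "127.0.0.1:8188") -> dict:
    """Submit the workflow to a running ComfyUI server (assumed default
    address) and return the server's JSON reply."""
    data = json.dumps(build_prompt_payload(workflow)).encode("utf-8")
    req = urllib.request.Request(
        f"http://{host}/prompt",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Queuing frames this way makes it easy to re-run the same graph with a fixed seed on a rented GPU box.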
Then I used the generated depth map to create a procedural morph-FX animation in Natron and composited the raw footage with the AI output.
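The depth-driven glitch idea can be sketched outside Natron too. Below is a minimal NumPy illustration, assuming one approach: displace each scanline horizontally by a random amount scaled by that row's mean depth, so nearer regions glitch harder. The function name, the per-row strategy, and the `max_shift` parameter are all hypothetical, not the project's actual Natron node graph.

```python
import numpy as np

def depth_glitch(frame: np.ndarray, depth: np.ndarray,
                 max_shift: int = 24, seed: int = 0) -> np.ndarray:
    """Shift each scanline of `frame` (H, W, C) horizontally by a random
    offset scaled by the mean depth of that row in `depth` (H, W)."""
    rng = np.random.default_rng(seed)
    out = frame.copy()
    # Normalise depth to 0..1 for this frame (guard against flat maps).
    d = (depth - depth.min()) / max(np.ptp(depth), 1e-6)
    row_strength = d.mean(axis=1)  # one glitch strength per scanline
    shifts = (rng.uniform(-1.0, 1.0, size=frame.shape[0])
              * row_strength * max_shift).astype(int)
    for y, s in enumerate(shifts):
        if s:
            # Wrap-around shift keeps every pixel of the row.
            out[y] = np.roll(frame[y], s, axis=0)
    return out
```

Applied per frame with the same seed logic, this gives a repeatable glitch pass that can then be composited over the raw footage, much like the Natron setup described above.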

The task was computationally expensive and required a high-end GPU: the source was a 1080p, 24 fps video, and I was further upscaling it to 2K. I initially tried rendering on my GTX 1080 Ti, but it would have taken a week or more to complete the whole sequence (it was only practical for single images). So after testing with some single frames, I fixed the seed I liked and rented an A5000 from RunPod to generate and upscale the remaining frames.
