Command line

✅ As DepthFlow is a ShaderFlow "spin-off" (a custom Scene), most of its documentation on commands, behavior, issues, and options is shared between the two.

  • The examples in each section show a single feature, but you can combine them freely, as sketched below.
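
For instance, a sketch chaining an input, a preset, and an export; every command and flag here is covered in the sections below:

# Load an image, add a circular camera motion, and render a video
depthflow input -i ./image.png circle main -o ./output.mp4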

Simplest command

Start a realtime window showing the default image and animation with:

depthflow main
  • Walk around with W A S D or Left click + Drag
  • Drag and drop image files or URLs to load them
  • Press Tab for a dev menu with some options

Using your images

Load an input image and start the main event loop with:

# Local images, either relative or absolute paths
depthflow input -i ./image.png main
# Remote images, like URLs
depthflow input -i https://w.wallhaven.cc/full/2y/wallhaven-2y6wwg.jpg main
  • Note: Make sure the image path exists; relative paths (those not starting with C:\ or /) are resolved against the shell's current working directory or the executable's location, as illustrated below.
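
For illustration, both commands below load the same file when the shell is running from /home/user (hypothetical paths):

# Absolute path, works from any directory
depthflow input -i /home/user/image.png main
# Relative path, resolved against the current working directory
depthflow input -i ./image.png main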

Exporting a video

Render 5 seconds of the animation to a video file at default settings with:

depthflow main -o ./output.mp4

See all rendering options with depthflow main --help

Resolution

The output resolution will match the input image by default. You can pass either --width/-w or --height/-h to force one component and fit the other based on the image's aspect ratio:

# Renders a 2560x1440 (Quad HD) video
depthflow input -i ./image16x9.png main -h 1440
# Width is prioritized, so this renders a 500x500 video (the 1:1 image overrides -h 800)
depthflow input -i ./image1000x1000.png main -w 500 -h 800
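
By the same rule, forcing only the width fits the height from the aspect ratio; a quick sketch:

# Should render a 1920x1080 (Full HD) video from a 16:9 image
depthflow input -i ./image16x9.png main -w 1920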

Looping

The output video will loop seamlessly, with a period set by the --time parameter:

# 5 second video with 1 loop happening
depthflow main -o ./output.mp4 --time 5
# 12 second video with 3 loops happening
depthflow main -o ./output.mp4 --time 4 --loops 3

Video encoder

You can also easily change the video encoder. You can see all available codecs in depthflow --help:

# Configure the H264 codec
depthflow h264 --preset veryfast main -o ./output.mp4
# Use the H264 codec with NVENC on an NVIDIA GPU
depthflow h264-nvenc main -o ./output.mp4
# AV1 NVENC is only supported on RTX 40-series and newer GPUs
depthflow av1-nvenc main -o ./output.mp4

Quality

A rendered video lasts forever, so getting the best quality, even if it takes longer, is worth it. A few main factors define the final video quality:

  1. Resolution: A combination of the input image's and the exported video's resolutions. Rendering at a higher resolution than the input image will not improve quality.

  2. Super Sampling Anti Aliasing: Rendering at a higher internal resolution and then downscaling to the output target mitigates and smooths edge artifacts. The default is 1.2; a value of 2 gives good quality, 4 gives the best, don't go above that. An SSAA factor of N uses N^2 times more GPU power.

  3. Quality parameter: The depthflow main --quality 50 parameter defines how accurate the projection's intersections are. A value of 0 is sufficient for subtle movements, but will create "layer" artifacts at higher offsets. The default is 50, which is actually overkill for most cases.

  4. Depth map: Defines the accuracy of the parallax effect. The default estimator is a state-of-the-art balance of speed, portability, and quality, and should be enough for most uses.

  5. Video codec: The encoder compresses the video from unimaginably impractical sizes of raw data to something manageable. Briefly, CPU encoders yield the best compression, file sizes, and quality, but are slower than GPU encoders, which are "worse" in every other aspect. Maximum quality is only seen in the realtime window, as it displays the raw frames directly.

# The stuff explained above in a command:
depthflow main --quality 80 --ssaa 2 -o ./output.mp4
# Extremely slow, but the best quality
depthflow main --quality 100 --ssaa 4 -o ./output.mp4
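
These quality flags also combine with an encoder choice; a sketch assuming the h264 command accepts the standard x264 'slow' preset:

# Slower CPU encode for better compression and quality
depthflow h264 --preset slow main --quality 80 --ssaa 2 -o ./output.mp4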

Using an upscaler

Upscale the input image before rendering the video with:

# Use Upscayl to upscale the image (https://github.com/upscayl/upscayl)
depthflow upscayl -m digital-art input -i ./image.png main -o ./output.mp4
# Use Waifu2x to upscale the image (https://github.com/nihui/waifu2x-ncnn-vulkan)
depthflow waifu2x input -i ./image.png main -o ./output.mp4
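
Since upscaling raises the effective input resolution, it pairs well with a larger output; a sketch combining flags shown in earlier sections:

# Upscale the image, then export at a greater height
depthflow upscayl input -i ./image.png main -h 2160 -o ./output.mp4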

Custom animations

🚧 Animations are work in progress, and will change substantially 🚧

You can use a couple of high-quality presets, shown below. See any of depthflow 'preset' --help for more options!

# Add a horizontal motion to the camera
depthflow horizontal main
# Add a vertical motion to the camera
depthflow vertical --linear main
# Add a circular motion to the camera
depthflow circle --intensity 0.3 main
# Add a dolly zoom to the camera
depthflow dolly --reverse -i 2 main
# Add a zoom-in motion to the camera
depthflow zoom main
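
Presets chain with the other commands, so you can pair one with an export; a sketch reusing values documented above:

# Render an 8 second looping video with a circular camera motion
depthflow circle --intensity 0.3 main --time 8 -o ./output.mp4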

Batch processing

⚠️ Note: The batch exporting feature is experimental and might have issues!

Selecting inputs

Multiple direct inputs

# Local paths
depthflow input -i ./image1.png -i ./image2.png (...)
# Or even URLs
depthflow input -i https://.. -i https://.. (...)
# All file contents of a folder
depthflow input -i ./images (...)
# Glob pattern matching
depthflow input -i ./images/*.png (...)
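
The input forms above should also mix freely in a single command; treat this as an untested sketch:

# Mix a folder, a glob pattern, and a direct file in one run
depthflow input -i ./images -i ./extra/*.png -i ./single.png (...)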

Exporting

Let's assume there are foo.png, bar.png, and baz.png in the ./images folder:

  1. Always have -b all or --batch all in the main command (or a range like -b 0-5 to select specific images)
  2. The output video's basename becomes a suffix of each exported video's filename
# This creates 'foo-suffix.mp4', 'bar-suffix.mp4', 'baz-suffix.mp4' in the './outputs' folder
depthflow input -i ./images main -b all -o ./outputs/suffix

The suffix is enforced mainly because the output path needs a filename component (it can't be just a directory), but it's also useful for:

# Create many different animations of the same image
depthflow input -i ./images orbital main -b all -o ./outputs/orbital
depthflow input -i ./images circle  main -b all -o ./outputs/circle

Or even set the output folder to the input folder itself, so videos sort nicely alongside their images:

depthflow input -i ./images main -b all -o ./images

It might be a good idea to specify a common height for all exports:

# Ensures all videos are '1080p', at least in height
depthflow input -i ./images main -b all -o ./images -h 1080
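
As a final sketch, the batch flags should combine with an encoder and a preset like any other export, assuming commands chain as in the earlier examples:

# Batch render every image with a zoom motion on the GPU encoder
depthflow input -i ./images h264-nvenc zoom main -b all -o ./outputs/zoom -h 1080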

Loving what you see? Help me continue this Full-Time Open Source Work!