Showing posts with label FFmpeg.

FFmpeg: RGB affected luma cycle

Utilising the 'extractplanes', 'blend', and 'xfade' filters found in FFmpeg 4.2.2. In the examples shown here, the following script and arguments were run (in order):
$ rgbLumaBlendCycle_FFmpeg.sh 'image1.jpg' 'screen'
$ rgbLumaBlendCycle_FFmpeg.sh 'image2.jpg' 'difference'
$ rgbLumaBlendCycle_FFmpeg.sh 'image3.jpg' 'pinlight' 'REV'
#!/usr/bin/env bash
# FFmpeg ver. 4.2.2+

# RGB affected luma cycle: Each colour plane is extracted and blended with the
# original image to adjust overall image brightness. The result of each blend
# is faded into the next, before blending back to the original image.

# Parameters:
# $1 : Filename
# $2 : Blend type (e.g. average, screen, difference, pinlight, etc.)
# $3 : Reverse blend order (any string to enable)

# version: 2020.07.15_12.28.31
# source: https://oioiiooixiii.blogspot.com

function main()
{
   local mode="$2"
   local name="$1"
   local layerArr=('[a][colour1]' '[b][colour2]' '[c][colour3]'
                   '[colour1][a]' '[colour2][b]' '[colour3][c]')
   local layerIndex="${3:+3}" && layerIndex="${layerIndex:-0}"
   # Array contains values for both blend orders; index is offset if $3 is set

   ffmpeg \
      -i "$name" \
      -filter_complex "
         format=rgba,loop=loop=24:size=1:start=0,
            split=8 [rL][gL][bL][colour1][colour2][colour3][o][o1];
         [rL]extractplanes=r,format=rgba[a];
         [gL]extractplanes=g,format=rgba[b];
         [bL]extractplanes=b,format=rgba[c];
         ${layerArr[layerIndex++]}blend=all_mode=${mode}[a];
         ${layerArr[layerIndex++]}blend=all_mode=${mode}[b];
         ${layerArr[layerIndex]}blend=all_mode=${mode}[c];
         [o][a]xfade=transition=fade:duration=0.50:offset=0,format=rgba[a];
         [a][b]xfade=transition=fade:duration=0.50:offset=0.51,format=rgba[b];
         [b][c]xfade=transition=fade:duration=0.50:offset=1.02,format=rgba[c];
         [c][o1]xfade=transition=fade:duration=0.50:offset=1.53,format=rgba
          " \
      "${name}-${mode}.mkv"
}

main "$@"
download: rgbLumaBlendCycle_FFmpeg.sh


Image Credits:

Henry Huey - "Alice in Wonderland - MAD Productions 5Sep2018 hhj_6811"
Attribution-NonCommercial 2.0 Generic (CC BY-NC 2.0)
https://www.flickr.com/photos/henry_huey/43768692515/

Henry Huey - "Alice in Wonderland - MAD Productions 5Sep2018 hhj_6869"
Attribution-NonCommercial 2.0 Generic (CC BY-NC 2.0)
https://www.flickr.com/photos/henry_huey/29740096427/

Henry Huey - "Alice in Wonderland - MAD Productions 5Sep2018 hhj_6848"
Attribution-NonCommercial 2.0 Generic (CC BY-NC 2.0)
https://www.flickr.com/photos/henry_huey/29740096117/

FFmpeg: Improved 'Rainbow-Trail' effect





* Includes sound 🔊

I have updated the script for this FFmpeg 'rainbow' effect I created in 2017¹ as there were numerous flaws, errors, and inadequacies in that earlier version. One major issue was the inability to colorkey with any colour other than black; this has been resolved.

This time, the effect is based on the 'extractplanes' filter and the alpha levels created after using a 'colorkey' filter. This produces a much more refined result: better colour shaping, and most of the original foreground subject is maintained. The 'extractplanes' filter can even be removed from the filtergraph to create an alternative, more subtle effect.
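As a rough illustration of the idea (not the updated script itself; the filenames and key colour here are placeholders), the alpha created by 'colorkey' can be pulled out with 'extractplanes' like so:

# Key out a chosen colour, then extract the resulting alpha plane as a
# grayscale stream that can later be delayed, colourised, and stacked.
ffmpeg -i input.mkv -vf "format=rgba,colorkey=0x00FF00:0.3:0.1,extractplanes=a" alpha.mkv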

Degrading jpeg images with repeated rotation - via Bash (FFmpeg and ImageMagick)



A continuation of the decade-old topic of degrading jpeg images by repeated rotation and saving. This post briefly demonstrates the process using FFmpeg and ImageMagick in a Bash script. Previously, a Python script achieving similar results was published, which has recently been updated. There are many posts on this subject; they can all be accessed by searching for the 'jpeg rotation' tag.
posts: https://oioiiooixiii.blogspot.com/search/label/jpeg%20rotation
The two basic commands are below. Both versions rotate an image 90 degrees clockwise, and each overwrites the original image. They should be run inside a loop to create progressively more degraded images.

ImageMagick: The quicker of the two, it uses the standard 'libjpeg' library for saving images.
mogrify -rotate "90" -quality "74" "image.jpg"

FFmpeg: Saving is done with the 'mjpeg' encoder, creating significantly different results.
ffmpeg -i "image.jpg" -vf "transpose=1" -q:v 12 "image.jpg" -y

There are many options and ways to extend each of the basic commands. For FFmpeg, one such way is to use the 'noise' filter to help create entropy in the image while running. It also has the effect of discouraging the gradual magenta-shift caused by the mjpeg encoder.

A functional (but basic) Bash script is presented later in this blog post. It allows for the choice between ImageMagick or FFmpeg versions, as well as allowing some other parameters to be set. Directly below is another montage of images created using the script. Run-time parameters for each result are given at the end of this post.



Running the script without any arguments (except for the image file name) will invoke ImageMagick's 'mogrify' command, rotating the image 500 times and saving at a jpeg quality of '74'. Note that when the FFmpeg version is used, the jpeg quality value is crudely inverted to suit the 'q:v' scale of the 'mjpeg' encoder.

The parameters for the script: [filename: string] [rotations: 1-n] [quality: 1-100] [frames: (any string)] [version: (any string for FFmpeg)] [noise: 1-100]
#!/bin/bash
# Simple Bash script to degrade a jpeg image by repeated rotations and saves,
# using either FFmpeg or ImageMagick. N.B. Starting image must be a jpeg.

# Example: rotateDegrade.sh "image.jpg" "1200" "67" "no" "FFmpeg" "21"
# Run on image.jpg, 1200 rotations, quality=67, no frames, use FFmpeg, noise=21

# source: oioiiooixiii.blogspot.com
# version: 2019.08.22_13.57.37

# All relevant code resides in this function
function rotateDegrade()
{
   local rotations="${2:-500}" # number of rotations
   local quality="${3:-74}" # Jpeg save quality (note inverse value for FFmpeg)
   local saveInterim="${4:-no}" # To save every full rotation as a new frame
   local version="${5:-IM}" # Choice of function (any other string for FFmpeg)
   local ffNoise="${6:-0}" # FFmpeg noise filter

   # Name of new file created to work on
   local workingFile="${1}_r${rotations}-q${quality}-${version}-n${ffNoise}.jpg"
   cp "$1" "$workingFile" # make a copy of the input file to work on
   # N.B. consider moving above file to volatile memory e.g. /dev/shm

   # ImageMagick and FFmpeg sub-functions
   function rotateImageMagick() {
      mogrify -rotate "90" -quality "$quality" "$workingFile"; }
   function rotateFFmpeg() {
      ffmpeg -i "$workingFile" -vf "format=rgb24,transpose=1,
         noise=alls=${ffNoise}:allf=u,format=rgb24" -q:v "$((100-quality))"\
         "$workingFile" -y -loglevel panic &>/dev/null; }

   # Main loop for repeated rotations and saves
   for (( i=0;i<"$rotations";i++ ))
   {
      # Save each full rotation as a new frame (if enabled)
      [[ "$saveInterim" != "no" ]] && [[ "$(( 10#$i%4 ))" -lt 1  ]] \
      && cp "$workingFile" "$(printf %07d $((i/4)))_$workingFile"

      # Rotate by 90 degrees and save, using whichever function chosen
      [[ "$version" == "IM" ]] \
      && rotateImageMagick \
      || rotateFFmpeg

      # Display progress
      displayRotation "$i" "$rotations"
   }
}

# Simple textual feedback of progress shown in terminal
function displayRotation() { clear;
   case "$(( 10#$1%4 ))" in
   3) printf "Total: $2 / Processing: $1 👄  ";;
   2) printf "Total: $2 / Processing: $1 👇  ";;
   1) printf "Total: $2 / Processing: $1 👆  ";;
   0) printf "Total: $2 / Processing: $1 👅  ";;
   esac
}

# Driver function
function main { rotateDegrade "$@"; echo; }; main "$@"
download: rotateDegrade.sh

python version: https://oioiiooixiii.blogspot.com/2014/08/jpeg-destruction-via-repeated-rotate.html
original image: https://www.flickr.com/photos/flowizm/19148678846/ (CC BY-NC-SA 2.0)

parameters for top image, left to right:
original | rotations=300,quality=52,version=IM | rotations=200,quality=91,version=FFmpeg,noise=7

parameters for bottom image, left to right:
rotations=208,quality=91,version=FFmpeg,noise=7 | rotations=300,quality=52,version=FFmpeg,noise=0 | rotations=500,quality=74,version=IM | rotations=1000,quality=94,version=FFmpeg,noise=7 | rotations=300,quality=94,version=FFmpeg,noise=16

FFmpeg: CRT Screen Effect


A simple attempt at creating a [stylised] 'CRT screen' effect with FFmpeg. Loaded with the common CRT effect tropes and clichés: interlaced lines, noise, chromatic aberration, bloom, etc.

The filterchains were constructed to be modular, allowing them to be included or removed as desired. The ideas contained in these filterchains might be of more general use than the whole effect itself.

#!/bin/bash

# A collection of FFmpeg filterchains which can be used to create a stylised
# 'CRT screen' effect on given input.
#
# The filter-chains have been split apart to increase modularity at the cost of
# sacrificing simplicity and increasing redundant code. Filter-chains can be
# added or removed in various orders, but special attention must be paid to
# selecting the correct termination syntax for each stage.
#
# Includes basic demonstration FFmpeg command which takes "$1" input file.
#
# Version: 2019.04.06_02.49.13
# Source https://oioiiooixiii.blogspot.com

### FILTERCHAINS #############################################################

# Reduce input to 25% PAL resolution
shrink144="scale=-2:144"

# Crop to 4:3 aspect ratio at 25% PAL resolution
crop43="crop=180:144"

# Create RGB chromatic aberration
rgbFX="split=3[red][green][blue];
      [red] lutrgb=g=0:b=0,
            scale=188x144,
            crop=180:144 [red];
      [green] lutrgb=r=0:b=0,
              scale=184x144,
              crop=180:144 [green];
      [blue] lutrgb=r=0:g=0,
             scale=180x144,
             crop=180:144 [blue];
      [red][blue] blend=all_mode='addition' [rb];
      [rb][green] blend=all_mode='addition',
                  format=gbrp"

# Create YUV chromatic aberration
yuvFX="split=3[y][u][v];
      [y] lutyuv=u=0:v=0,
          scale=192x144,
          crop=180:144 [y];
      [u] lutyuv=v=0:y=0,
          scale=188x144,
          crop=180:144 [u];
      [v] lutyuv=u=0:y=0,
          scale=180x144,
          crop=180:144 [v];
      [y][v] blend=all_mode='lighten' [yv];
      [yv][u] blend=all_mode='lighten'"

# Create edge contour effect
edgeFX="edgedetect=mode=colormix:high=0"

# Add noise to each frame of input
noiseFX="noise=c0s=7:allf=t"

# Add interlaced fields effect to input
interlaceFX="split[a][b];
             [a] curves=darker [a];
             [a][b] blend=all_expr='if(eq(0,mod(Y,2)),A,B)':shortest=1"

# Re-scale input to full PAL resolution
scale2PAL="scale=720:576"

# Re-scale input to full PAL resolution with nearest-neighbour pixel scaling
scale2PALpix="scale=720:576:flags=neighbor"

# Add magnetic damage effect to input [crt screen]
screenGauss="[base];
             nullsrc=size=720x576,
                drawtext=
                   fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf:
                   text='@':
                   x=600:
                   y=30:
                   fontsize=170:
                   fontcolor=red@1.0,
             boxblur=80 [gauss];
             [gauss][base] blend=all_mode=screen:shortest=1"

# Add reflections to input [crt screen]
reflections="[base];
             nullsrc=size=720x576,
             format=gbrp,
             drawtext=
               fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf:
               text='€':
               x=50:
               y=50:
               fontsize=150:
               fontcolor=white,
             drawtext=
               fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf:
               text='J':
               x=600:
               y=460:
               fontsize=120:
               fontcolor=white,
             boxblur=25 [lights];
             [lights][base] blend=all_mode=screen:shortest=1"

# Add more detailed highlight to input [crt screen]
highlight="[base];
             nullsrc=size=720x576,
             format=gbrp,
             drawtext=
               fontfile=/usr/share/fonts/truetype/freefont/FreeSerif.ttf:
               text='¡':
               x=80:
               y=60:
               fontsize=90:
               fontcolor=white,
             boxblur=7 [lights];
             [lights][base] blend=all_mode=screen:shortest=1"

# Curve input to mimic curve of crt screen
curveImage="vignette,
            format=gbrp,
            lenscorrection=k1=0.2:k2=0.2"

# Add bloom effect to input [crt screen]
bloomEffect="split [a][b];
             [b] boxblur=26,
                    format=gbrp [b];
             [b][a] blend=all_mode=screen:shortest=1"

### FFMPEG COMMAND ###########################################################

ffmpeg \
   -i "$1" \
   -vf "
         ${shrink144},
         ${crop43},
         ${rgbFX},
         ${yuvFX},
         ${noiseFX},
         ${interlaceFX},
         ${scale2PAL}
         ${screenGauss}
         ${reflections}
         ${highlight},
         ${curveImage},
         ${bloomEffect}
      " \
   "${1}__crtTV.mkv"

exit 0
download script: ffmpeg_CRT-effect.sh

A bank of 'screens' displaying different inputs.



Some alternate choices of filterchains.



source video: https://www.youtube.com/watch?v=8SPUHGRXQUY

FFmpeg: FAPA (Frame-Averaged Pixel Array)


Preamble: When I create a blog-post about a film, I will often include a cryptic looking pixelated image somewhere in the body of the post. When possible, I will create one of these images for every film I watch. I create them as a type of 'fingerprint', showing overall tonality and temporal dynamics of the film's visuals.

The image contains all frames in a given film. Each pixel represents the average colour of its particular frame. This colour is calculated by doing no more than scaling the frame to dimensions of '1x1' in an FFmpeg 'scale' filter. The frames [pixels] are then tiled into a single image of suitable dimensions.
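A minimal single-command sketch of the idea (the tile dimensions here are picked by hand; the script below calculates them from the video's duration and frame rate):

# Reduce every frame to a single pixel, then tile the pixels into a 640x360 image
ffmpeg -i input.mkv -frames:v 1 -vf "scale=1:1,tile=640x360" fapa.png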

The example video is taken from 'Summer in February (2013)' and shows a scene involving tropospheric lightning near the end of the film. The section of the 'pixel array' image relating to this scene has been highlighted and magnified. The contrast in lighting between frames means each frame can be clearly discerned as the video plays, even without the aid of the arrow.



The Bash script outputs basic information before and while processing. The process can take a considerable length of time to finish. The version here uses two instances of FFmpeg to process the video, so that progress feedback is displayed during execution. A simple single-instance alternative is included in the 'Notes' section of the script, as well as ideas for showing progress while using this version. The script has not been updated since its initial creation and can probably be improved upon.

#!/bin/bash
################################################################################
# Create a 'Frame-Averaged Pixel Array' of a given video. Works by reducing
# each frame to a single pixel, and appending all frames into single image.
# - Takes: $1=Filename [$2=width]
# - Requires: ffmpeg + ffprobe
#   ver. 1.1 - 10th November, 2015
# source: https://oioiiooixiii.blogspot.com
###############################################################################

width="${2:-640}" # If no width given, set as 640
duration="$(ffprobe "$1" 2>&1 \
            | grep Duration \
            | awk  '{ print $2 }')"
seconds="$(echo $duration \
           | awk -F: '{ print ($1 * 3600) + ($2 * 60) + $3 }' \
           | cut -d '.' -f 1)"
fps="$(ffprobe "$1" 2>&1 \
       | sed -n 's/.*, \(.*\) fps,.*/\1/p' \
       | awk '{printf("%d\n",$1 + 0.5)}')"
frames="$(( seconds*fps ))"
height="$(( frames/width ))"
filters="tile=${width}x${height}"

clear
printf "$(pwd)/$1
___Duration: ${duration::-1}
____Seconds: $seconds
________FPS: $fps
_____Frames: $frames
_____Height: $height
____Filters: $filters\n"

# First instance of FFmpeg traverses the frames, the second concatenates them.
ffmpeg \
   -y \
   -i "$1" \
   -vf "scale=1:1" \
   -c:v png \
   -f image2pipe pipe:1 \
   -loglevel quiet \
   -stats \
| ffmpeg \
    -y \
    -i pipe:0 \
    -vf "$filters" \
    -loglevel quiet \
    "${1%.*}_$width".png

################################ NOTES #######################################

# Single line solution, but doesn't show progress
# ffmpeg -i "$1" -frames 1 -vf "$filters" "${1%.*}".png -y
# filters="scale=1:1,tile=${width}x${height}" # Used with single line version
# View ingest progress by piping the input through 'pv' (e.g. pv "$1" | ffmpeg -i pipe:0 ...)
download: video2pixarray.sh

[Note: I have struggled with giving a name to this process since I created the script, and have left it as the first thing I thought of. Perhaps others who have created something similar have better names for it.]

film review: https://oioiiooixiii.blogspot.com/2017/11/summer-in-february-2013.html

Brutal Doom: Stabilised




Zandronum window stabilised using default 'VidStab' stabilisation settings via FFmpeg.

related: https://oioiiooixiii.blogspot.com/2016/09/ffmpeg-video-stabilisation-using.html
more info: https://doomwiki.org/wiki/Project_Brutality

FFmpeg: Colour animation from macroblock motion-vectors



The animation is created by styling the macroblock motion vectors, as displayed by FFmpeg, rather than by manipulating the actual video content. The blocks of colour are created by stacking 'dilation' filters on the motion-vector layer. Before being dilated, the colouring of the arrows is extracted from the original video by 'colorkey' overlay. Based on earlier filtergraph experiments.¹

#!/bin/bash 
# Generate stylised animation from video macroblock motion vectors, 
# and present in a side-by-side comparison with original video. 
# version: 2018.03.28.21.08.16 
# source: https://oioiiooixiii.blogspot.com 

cropSize="640:ih:480:0" # Adjust area and dimensions of interest

ffplay \
   -flags2 +export_mvs \
   -i "$1" \
   -vf \
      "
         split [original][vectors];
         [vectors] codecview=mv=pf+bf+bb,
                   crop=$cropSize [vectors];
         [original] crop=$cropSize,
                    split=3 [original][original1][original2];
         [vectors][original2] blend=all_mode=difference128,
                              eq=contrast=7:brightness=-0.3,
                              split [vectors][vectors1];
         [vectors1] colorkey=0xFFFFFF:0.9:0.2 [vectors1];
         [original1][vectors1] overlay,
                               smartblur,
                               dilation,dilation,dilation,dilation,dilation,
                               eq=contrast=1.4:brightness=-0.09 [pixels];
         [vectors][original][pixels] hstack=inputs=3
      "



¹ see also: https://oioiiooixiii.blogspot.com/2016/09/ffmpeg-create-video-composite-of.html
source video: りりあ (LILIA) https://www.youtube.com/watch?v=U1DFzSlNkV8 (used without permission) m(_ _)m

FFmpeg: Temporal slice-stacking ['slit-scan'] effect (aka 'Wobbly Video')


An old video effect¹, experimented with in 2013 (using processing.org)², now revisited using FFmpeg³. The concept is to take one line of pixels (x or y) of a frame, relative to its position in the video, and stack those lines into a new frame, incrementing the starting point while progressing through the timeline of the video.

This is somewhat similar to the effect commonly seen (these days) with the "rolling shutter" artefact of certain digital photography. See 'Ancillary footage' at the bottom of the post for an overlay version that may help as visual aid in understanding.

In the demonstration above (and longer videos below) the frame is divided into four quadrants: Top-left is the original; top-right are horizontal stacked frames (btt); bottom-left are vertical stacked frames (rtl); bottom-right are vertical-stacked frames that have then been stacked horizontally.
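The core of the trick, for a single output frame, can be sketched as follows (the starting frame of 100 and the 360-row tile are arbitrary choices; the script below loops the same idea over every frame and both directions):

# Take row 'n' of each successive frame from frame 100 onward,
# then stack 360 of those one-pixel-high strips into one output frame
ffmpeg -i input.mkv -vf "select=gte(n\,100),crop=in_w:1:0:n,tile=1x360" \
   -frames:v 1 slice_frame100.png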

#!/bin/bash
# Temporal slice-stacking effect with FFmpeg (aka 'wibbly-wobbly' video).
# See 'NOTES' at bottom of script.

# Ver. 2017.10.01.22.14.08
# source: http://oioiiooixiii.blogspot.com

function cleanUp() # tidy files after script termination
{
   rm -rf "$folder" \
   && echo "### Removed temporary files and folder '$folder' ###"
}
trap cleanUp EXIT

### Variables
folder="$(mktemp -d)" # create temp work folder
duration="$(ffprobe "$1" 2>&1 | grep Duration | awk  '{ print $2 }')"
seconds="$(echo $duration \
         | awk -F: '{ print ($1 * 3600) + ($2 * 60) + $3 }' \
         | cut -d '.' -f 1)"
fps="$(ffprobe "$1" 2>&1 \
      | sed -n 's/.*, \(.*\) fps,.*/\1/p' \
      | awk '{printf("%d\n",$1 + 0.5)}')"
frames="$(( seconds*fps ))"
width="640" # CHANGE AS NEEDED (e.g. width/2 etc.)
height="360" # CHANGE AS NEEDED (e.g. height/2 etc.)

### Filterchains
stemStart="select=gte(n\,"
stemEnd="),format=yuv444p,split[horz][vert]"
horz="[horz]crop=in_w:1:0:n,tile=1x${height}[horz]"
vert="[vert]crop=1:in_h:n:0,tile=${width}x1[vert]"
merge="[0:v]null[horz];[1:v]null[vert]"
scale="scale=${width}:${height}"

#### Create resized video, or let 'inputVideo=$1'
clear; echo "### RESIZING VIDEO (location: $folder) ###"
inputVideo="$folder/resized.mkv"
ffmpeg -loglevel debug -i "$1" -vf "$scale" -crf 10 "$inputVideo" 2>&1 \
|& grep 'frame=' | tr \\n \\r; echo

### MAIN LOOP
for (( i=0;i<"$frames";i++ ))
do
   echo -ne "### Processing Frame: $i of $frames  ### \033[0K\r" 
   ffmpeg \
   -loglevel panic \
      -i "$inputVideo" \
      -filter_complex "${stemStart}${i}${stemEnd};${horz};${vert}" \
      -map '[horz]' \
         -vframes 1 \
         "$folder"/horz_frame${i}.png \
      -map '[vert]' \
         -vframes 1 \
         "$folder"/vert_frame${i}.png
done

### Join images (optional sharpening, upscale, etc. via 'merge' variable)
echo -ne "\n### Creating output videos ###"
ffmpeg \
   -loglevel panic \
   -r "$fps" \
   -i "$folder"/horz_frame%d.png \
   -r "$fps" \
   -i "$folder"/vert_frame%d.png \
   -filter_complex "$merge" \
   -map '[horz]' \
      -r "$fps" \
      -crf 10 \
      "${1}_horizontal-smear.mkv" \
   -map '[vert]' \
      -r "$fps" \
      -crf 10 \
      "${1}_verticle-smear.mkv"

### Finish and tidy files 
exit

### NOTES ######################################################################

# The input video is resized to reduce frames needed to fill frame dimensions 
# (which can produce more interesting results). 
# This is done by producing a separate video, but it can be included at the 
# start of 'stemStart' filterchain to resize frame dimensions on-the-fly. 
# Adjust 'width' and 'height' for alternate effects.

# For seamless looping, an alternative file should be created by looping
# the desired section of video, but set the number of processing frames to 
# original video's 'time*fps' number. The extra frames are only needed to fill 
# the void [black] area in frames beyond loop points.

download: ffmpeg_wobble-video.sh

FFmpeg: Rainbow trail chromakey effect



UPDATE 2020-07-20: A new version has been published with significant improvements!
https://oioiiooixiii.blogspot.com/2020/07/ffmpeg-improved-rainbow-trail-effect.html

An effect loosely inspired by old Scanimate¹ analogue video effects. The process involves stacking progressively delayed, and colourised, instances of the input video on top of each other. These overlays are blended based on a chosen colourkey, or chromakey. The colour values and number of repetitions can be easily changed, though with higher numbers [in test cases, 40+], buffer underflows may be experienced.

#!/bin/bash

# Generate ['Scanimate' inspired] rainbow trail video effect with FFmpeg
# (N.B. Resource intensive - consider multiple passes for longer trails) 
# version: 2017.08.08.13.47.31
# source: http://oioiiooixiii.blogspot.com

function rainbowFilter() #1:delay 2:keytype 3:color 4:sim val 5:blend 6:loop num
{
   local delay="PTS+${1:-0.1}/TB" # Set delay between video instances
   local keyType="${2:-colorkey}" # Select between 'colorkey' and 'chromakey'
   local key="0x${3:-000000}"     # 'key colour
   local chromaSim="${4:-0.1}"    # 'key similarity level
   local chromaBlend="${5:-0.4}"  # 'key blending level
   local colourReset="colorchannelmixer=2:2:2:2:0:0:0:0:0:0:0:0:0:0:0:0
                     ,smartblur"
   # Reset colour after each colour change (stops colours heading to black)
   # 'smartblur' to soften edges caused by setting colours to white

   # Array of rainbow colours. Ideally, this could be generated algorithmically
   local colours=(
      "2:0:0:0:0:0:0:0:2:0:0:0:0:0:0:0" "0.5:0:0:0:0:0:0:0:2:0:0:0:0:0:0:0"
      "0:0:0:0:0:0:0:0:2:0:0:0:0:0:0:0" "0:0:0:0:2:0:0:0:0:0:0:0:0:0:0:0"
      "2:0:0:0:2:0:0:0:0:0:0:0:0:0:0:0" "2:0:0:0:0.5:0:0:0:0:0:0:0:0:0:0:0"
      "2:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0"
   )

   # Generate body of filtergraph (default: 7 loops. Also, colour choice mod 7)
   for (( i=0;i<${6:-7};i++ ))
   {
      local filter=" $filter
                     [a]$colourReset,
                        colorchannelmixer=${colours[$((i%7))]},
                        setpts=$delay,
                        split[a][c];
                     [b]${keyType}=${key}:${chromaSim}:${chromaBlend}[b];
                     [c][b]overlay[b];"
   }
   printf "split [a][b];${filter}[a][b]overlay"
}

ffmpeg -i "$1" -vf "$(rainbowFilter)" -c:v huffyuv "${1}_rainbow.avi"
download: ffmpeg_rainbow-trail.sh

This is a top-down approach to building the effect. Another [possibly better] solution is to build the layers from the bottom up (pre-calculate the PTS delay for each layer, i.e. "layer number x PTS delay"). This might improve the fidelity of the top layer in certain videos. Another idea is to split the input into three instances rather than two, and 'key overlay the third at the very end of the filtergraph.


A concatenation of all videos generated during testing and development.

¹ Scanimate video synthesizer: http://scanimate.com/
original video: https://www.youtube.com/watch?v=god7hAPv8f0

FFmpeg: Extract section of video using MPV screen-shots



An unorthodox, but simple and effective way of accurately extracting a section of video from a larger video file, using MPV screen-shots (with specific file naming scheme) for 'in' and 'out' points.

Bash commands below serve only to demonstrate the general idea. No error handling whatsoever.
#!/bin/bash
# Extract section of video using time-codes taken from MPV screen-shots
# Requires specific MPV screen-shot naming scheme: screenshot-template="%f__%P"
# N.B. Skeleton script demonstrating basic operation

filename="$(ls -1 *.jpg | head -1)"
startTime="$(cut -d. -f-2 <<< "${filename#*__}")"
filename="${filename%__*}"
endTime="$(cut -d_ -f3 <<<"$(ls -1 *.jpg | tail -1)" | cut -d. -f-2)"
ffmpeg \
   -i "$filename" \
   -ss "$startTime" \
   -to "$endTime" \
   "EDIT__${filename}__${startTime}-${endTime}.${filename#*.}"
Another approach to this (and perhaps a more sensible one) is to script it all through MPV itself. However, that ties the technique down to MPV, whereas this 'screen-shot' idea allows it to be used with other media players offering timestamps in the filename. Also, it's a little more tangible: you can create a series of screen-shots and later decide which ones are timed better.

video shown in demo: "The Magic of Ballet With Alan and Monica Loughman" DVD (2005)

UPDATE: September 11, 2018



I recently happened upon an MPV Lua script created for practical video extraction.


It really works well, and I now find myself using it every time I need to clip a section of video.
link: https://github.com/ekisu/mpv-webm

FFmpeg: Simple video editor with Zenity front-end


A proof-of-concept implementation of a simple, but extensible, video editor, based on FFmpeg with a Zenity interface. Presented as a proof-of-concept as bugs still remain and the project is abandoned.

The goal was to create a video editor with the simplest of requirements: tools found on most popular GNU/Linux distributions, plus a standard installation of FFmpeg (+FFplay). The original idea was to create just a video clipping tool; however, the facility to add functionality was included. Extending the functionality involved creating separate scripts with Zenity dialogs relating to the feature added.

While the effectiveness of the implementation is questionable, some interesting concepts remain, such as: a scrub-bar for FFplay created from just a set of filterchains, a novel approach to referencing time-stamps from FFplay, and the correct FFmpeg switches to force edit video clips on non-keyframes (the results of which are demonstrated in the video above).

download: ZenityVideoEditor_0.1.tar.gz

original version: 2016, 15th February https://twitter.com/oioiiooixiii/status/699239047806500864

FFmpeg: 144 (16x9) grid of random Limmy Vine videos


It would have been nice to complete this all in one FFmpeg command (building such a command is a relatively trivial 'for loop' affair¹) but the level of file IO made this impossible (for my setup at least). Perhaps with smaller file sizes and fewer videos, it would be less impractical.

# Some basic Bash/FFmpeg notes on the procedures involved: 

# Select 144 random videos from the current folder ('sort -R' or 'shuf')
find ./ -name "*.mp4" | sort -R | head -n 144

# Generate 144 '-i' input text for FFmpeg (files being Bash function parameters)
echo '-i "${'{1..144}'}"'
# Or use 'eval' for run-time creation of FFmpeg command
eval "ffmpeg $(echo '-i "${'{1..144}'}"')"

# VIDEO - 10 separate FFmpeg instances

# Create 9 rows of 16 videos with 'hstack', then use these as input for 'vstack'
[0:v][1:v]...[15:v]hstack=16[row1];
[row1][row2]...[row9]vstack=9
# [n:v] Input sources can be omitted from stack filters if all '-i' files used
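# A scaled-down but complete sketch of the same stacking idea, shown here as a
# 2x2 grid of four placeholder input files (audio dropped; it is handled separately)
ffmpeg -i a.mp4 -i b.mp4 -i c.mp4 -i d.mp4 \
   -filter_complex "[0:v][1:v]hstack=2[row1];
                    [2:v][3:v]hstack=2[row2];
                    [row1][row2]vstack=2" \
   -an grid2x2.mp4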

# AUDIO - 1 FFmpeg instance

# Mix 144 audio tracks into one output (truncate with ':duration=first' option)
amix=inputs=144

# If needed, normalise audio volume in two passes - first analyse audio
-af "volumedetect"
# Then increase volume based on 'max' value, such that 0dB not exceeded 
-af "volume=27dB"

# Mux video and audio into one file
ffmpeg -i video.file -i audio.file -map 0:0 -map 1:0 out.file

# Addendum: Some further thoughts on reflection: Perhaps piping the files to an FFmpeg instance with a 'grid' filter might simplify things, or loading the files one by one inside the filtergraph via 'movie=' might be worth investigating.
¹ See related: https://oioiiooixiii.blogspot.com/2017/01/ffmpeg-generate-image-of-tiled-results.html
context: https://en.wikipedia.org/wiki/Limmy
source videos: https://vine.co/Limmy

FFmpeg: Predator [1987 movie] "Adaptive Camouflage" chromakey effect


A simple Bash script invoking FFmpeg to create a "cloaking" effect similar to that seen in the 1987 film "Predator"¹. It needs a little more work to make it more accurate; perhaps adjusting curves or levels for each iteration to make them more defined, etc.

#!/bin/bash

# Create Predator [1987 movie] "Adaptive Camo" chromakey effect in FFmpeg
# - Takes arguments: filename, colour hex value (defaults to green).
# ver. 2017.06.25.16.29.43
# source: http://oioiiooixiii.blogspot.com

function setDimensionValues() # Sets global size variables based on file source 
{
   dimensions="$(\
      ffprobe \
      -v error \
      -show_entries stream=width,height \
      -of default=noprint_wrappers=1 \
      "$1"\
   )"
      
   # Create "$height" and "$width" var vals
   eval "$(head -1 <<<"$dimensions");$(tail -1 <<<"$dimensions")"
}

function buildFilter() # Builds filter using core filterchain inside for-loop
{
   # Set video dimensions and key colour
   setDimensionValues "$1"
   colour="0x${2:-00FF00}"
   oWidth="$width"
   oHeight="$height"
   
   # Arbitrary scaling values - adjust to preference
   for ((i=0;i<4;i++))
   {
      width="$((width-100))"
      height="$((height-50))"
      printf "split[a][b];
            [a]chromakey=$colour:0.3:0.06[keyed];
            [b]scale=$width:$height:force_original_aspect_ratio=decrease,
               pad=$oWidth:$oHeight:$((width/4)):$((height/4))[b];
            [b][keyed]overlay,"
   }
   printf "null" # Deals with hanging , character in filtergraph
}

# Generate output
ffplay -i "$1" -vf "$(buildFilter "$@")"
#ffmpeg -i "$1" -vf "$(buildFilter "$@")" -an "${1}_predator-fx.mkv"
video source: https://www.youtube.com/watch?v=7UdhuPnWpHA
¹ film: https://en.wikipedia.org/wiki/Predator_(film)
context: https://twitter.com/oioiiooixiii/status/868527906682789889
context: https://twitter.com/oioiiooixiii_/status/868614704394055680

Long-exposure photography compared to image-stacking video frames (ImageMagick/FFmpeg)



Pictured above: comparisons of images made from a segment on "Good Mythical Morning" involving "light painting". In the top-left, a 30-second exposure from a still-camera in the studio. Below it, an image made using ImageMagick's '-evaluate-sequence' function, on all frames taken from the 30 seconds of video. In this case, the 'max' setting was used, which stacks maximum pixel values. In the top-right, a single frame from the video, and below it, 100-frames stacked with FFmpeg using sequential 'tblend' filters.

# ImageMagick - Use with extracted frames or FFmpeg image pipe (limited to 4GB)
 convert -limit memory 4GB frames/*.png -evaluate-sequence max merged-frames.png

# FFmpeg - Chain of tblend filters (N.B. inefficient - better ways to do this)
ffmpeg -i video.mp4 -vf tblend=all_mode=lighten,tblend=all_mode=lighten,... 
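# One way to avoid typing the chain by hand is to generate it in Bash (a sketch;
# 'video.mp4' and the count of 99 blends are placeholders). Each output frame then
# carries the lighten-stack of up to 100 consecutive input frames.
filter="$(printf 'tblend=all_mode=lighten,%.0s' {1..99})"
ffmpeg -i video.mp4 -vf "${filter%,}" stacked.mkv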
As a comparison, here is an image made from the same frames but using 'mean' average with ImageMagick.



A video demo for the FFmpeg version


source video: https://www.youtube.com/watch?v=1tdKZYT4YLY&t=2m4s

FFmpeg: Generate an image of tiled results from all 'blend' filter types



Two rudimentary Bash scripts that take two file paths as input for an FFmpeg instance, which in turn uses the files as sources for 'blend' filters. Image dimensions are irrelevant, as both input images are scaled to '320x320' (for ease of formatting). Multiple for-loops generate the bulk of the filtergraph. There are more elegant ways of doing this, using multiple outputs and secondary applications, but the solutions here are based on a single instance of FFmpeg.

Note the "$format" variable in the scripts. Different pixel formats will produce different results, which is part of the reason why these scripts do not produce "all possible blend results".

There are two versions: one outputs "all_mode" only results [image above], and the other outputs the results of all blend modes. The "all_mode" only version is probably the more useful of the two. Even if neither script is used, the image examples included here could be useful as references, as they give a general idea of the effect of each 'blend' type.
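A heavily cut-down sketch of the approach is shown below (only four blend modes, no labels, and 'xstack' used for the tiling, which may differ from how the downloadable scripts arrange their output):

# "$1" and "$2" are the two input image paths
modes=(addition multiply screen difference)
filter="[0:v]scale=320:320,format=gbrp,split=4[a1][a2][a3][a4];
        [1:v]scale=320:320,format=gbrp,split=4[b1][b2][b3][b4];"
for i in "${!modes[@]}"; do
   filter+="[a$((i+1))][b$((i+1))]blend=all_mode=${modes[i]}[o$((i+1))];"
done
filter+="[o1][o2][o3][o4]xstack=inputs=4:layout=0_0|320_0|0_320|320_320"
ffmpeg -i "$1" -i "$2" -filter_complex "$filter" -frames:v 1 blends.png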

script: ffmpeg-tiled-blend-results-all_mode-only.sh
script: ffmpeg-tiled-blend-results.sh

The scripts were written to aid in choosing the most suitable blend filter for a given task or project, something that can be hard to judge beforehand. The images produced allow for a quick assessment of the possibilities between two test frames.



Input A source: https://www.flickr.com/photos/38983646@N06/15545146285/
Input B source: https://www.flickr.com/photos/archer10/4244381931/

FFmpeg: Extract foreground [moving] objects from video


This is a somewhat crude implementation, but given the right source material, an acceptable result can be generated. It is based on FFmpeg's 'maskedmerge' filter, which takes three input streams: a background, an overlay, and a mask (which is used to manipulate the pixels of the overlay layer).

ffmpeg \
   -i background.png \
   -i video.mkv \
   -filter_complex \
   "
      color=#00ff00:size=1280x720 [matte];
      [1:0] format=rgb24, split[mask][video];
      [0:0][mask] blend=all_mode=difference, 
         curves=m='0/0 .1/0 .2/1 1/1', 
         format=gray,
         smartblur=1,
         eq=brightness=30:contrast=3, 
         eq=brightness=50:contrast=2, 
         eq=brightness=-10:contrast=50,
         smartblur=3,
         format=rgb24 [mask];
      [matte][video][mask] maskedmerge,format=rgb24
   " \
   -shortest \
   -pix_fmt yuv422p \
   result.mkv

For this process, a still background image is needed. An extracted frame from the video will do, or if the background is constantly obscured, it may be necessary to manually create a clean image from multiple frames (stacking multiple frames may produce better results too).

The background image is 'difference' blended with the video, to produce the mask which will be used with the 'maskedmerge' filter. This video stream is then converted to grayscale and adjusted to maximise the contrast levels. [N.B. The video format changes multiple times with different filter effects, and so 'format=rgb24' is set in each filterchain for colour compatibility.]

The curves and equalisation filtering is a bit hard to explain and, due to the lack of a real-time preview, somewhat "hit and miss". Basically, a 'threshold' filter is being built, where just black and white areas are created. The eq/curve filters here progressively squeeze the tones together in such a way that only the wanted areas are solid white. This will change for each project, and the filter chain shown has been progressively "hacked together" for this specific video. [N.B. 'maskedmerge' interprets tonality as levels of pixel opacity in the overlay layer]



The first 'smartblur' filter fills out (dilates) the areas to create more solid structures in the mask. The second 'smartblur' filter blends the edges of the mask to create a softer cutout. Additional 'smartblur' filters can be used on the background and on the video stream it is blended with, which will act as a noise filter to cull stray momentary differences.

The final element is a new background for the extracted elements to sit upon. In this example, a simple green matte is generated. This, along with the created mask and the original video, is provided as input for the 'maskedmerge' filter.

There are many ways this can be implemented, adjusted, and improved. In the example above, everything is done within one filtergraph, but it can be separated out into multiple passes (which would be useful for manually fixing errors in the mask). [N.B. Timing can be an issue when running this all in a single filtergraph (where the mask layer didn't match up with the overlay). 29.97fps videos proved particularly troublesome. Repeated use of 'setpts=PTS' in the filtergraph might help, but in this case it was fixed by converting the video to 25fps beforehand.]
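A sketch of that multi-pass idea (the mask chain here is simplified and the filenames are placeholders; it is not the exact chain used above):

# Pass 1: write the mask to its own file so it can be inspected or fixed by hand
ffmpeg -i background.png -i video.mkv -filter_complex \
   "[0:0][1:0]blend=all_mode=difference,format=gray,
    eq=contrast=10,smartblur=3,format=rgb24" mask.mkv

# Pass 2: overlay the original video onto a green matte through the saved mask
ffmpeg -i video.mkv -i mask.mkv -filter_complex \
   "color=#00ff00:size=1280x720,format=rgb24[matte];
    [0:0]format=rgb24[video]; [1:0]format=rgb24[mask];
    [matte][video][mask]maskedmerge,format=rgb24" \
   -shortest result.mkv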

UPDATE: 2020-05-05

There is some recurring confusion over what I wrote about stacking multiple frames for the background image. It's really not that important; it's just something to help create a more general/average background image by image stacking.





# Image stacking with FFmpeg using the 'tmix' filter.
# More info on 'tmix' filter: https://ffmpeg.org/ffmpeg-filters.html#tmix
ffmpeg -i background-frame%d.png -vf tmix=frames=3 stacked.png
 

# Image stacking is also possible with ImageMagick
convert *.png -evaluate-sequence mean stacked.png

ffmpeg maskedmerge: https://ffmpeg.org/ffmpeg-filters.html#maskedmerge
source video: ぷに (Puni) https://www.youtube.com/watch?v=B0o8cQa-Kd8
Discussion of technique on twitter: https://twitter.com/alihaydarglc/status/982950986175209472

FFmpeg: Create a video composite of colourised macroblock motion-vectors


# Generate video motion vectors, in various colours, and merge together
# NB: Includes fixed 'curve' filters for issue outlined in blog post

ffplay \
   -flags2 +export_mvs \
   -i video.mkv \
   -vf \
      "
         split=3 [original][original1][vectors];
         [vectors] codecview=mv=pf+bf+bb [vectors];
         [vectors][original] blend=all_mode=difference128,
            eq=contrast=7:brightness=-0.3,
            split=3 [yellow][pink][black];
         [yellow] curves=r='0/0 0.1/0.5 1/1':
                         g='0/0 0.1/0.5 1/1':
                         b='0/0 0.4/0.5 1/1' [yellow];
         [pink] curves=r='0/0 0.1/0.5 1/1':
                       g='0/0 0.1/0.3 1/1':
                       b='0/0 0.1/0.3 1/1' [pink];
         [original1][yellow] blend=all_expr=if(gt(X\,Y*(W/H))\,A\,B) [yellorig];
         [pink][black] blend=all_expr=if(gt(X\,Y*(W/H))\,A\,B) [pinkblack];
         [pinkblack][yellorig]blend=all_expr=if(gt(X\,W-Y*(W/H))\,A\,B)
      "

# Process:
# 1: Three copies of input video are made
# 2: Motion vectors are applied to one stream
# 3: The result of #2 is 'difference128' blended with an original video stream
#    The brightness and contrast are adjusted to improve clarity
#    Three copies of this vectors result are made
# 4: Curves are applied to one vectors stream to create yellow colour
# 5: Curves are applied to another vectors stream to create pink colour
# 6: Original video stream and yellow vectors are combined diagonally
# 7: Pink vectors stream and original vectors stream are combined diagonally
# 8: The results of #6 and #7 are combined diagonally (opposite direction)

NB: At time of writing, the latest version of FFmpeg (N-81396-g0d8b6a1) has a bug (feature?) where upper and lower bounds of 'curves' filter must be set for accurate results. This is contrary to what's written in official documentation.

alternate version:


see related: http://oioiiooixiii.blogspot.com/2016/04/ffmpeg-display-and-isolate-macroblock.html
source video: 足太ぺんた (Asibuto Penta) https://www.youtube.com/watch?v=Djdm7NaQheU

FFmpeg: Video Stabilisation using 'libvidstab'


It is possible to stabilise video with standard FFmpeg using the 'deshake' filter, which can produce satisfactory results¹. Another option is to use FFmpeg with the 'vid.stab' library.

In the video above, a side-by-side comparison is made between the original video and the 'vid.stab' stabilised version. The subject matter remains still, while the video content floats around the frame. This is achieved by setting 'zoom' to a negative value, 'optzoom' to 0, and 'relative' to 1. This is not typically desired, as it creates unusual framing; however, it does mean that no picture information is lost in the process. Note also how missing information is replaced by the content of previous frames². The other option is to leave these areas black. Further settings information can be found on Georg Martius's website.

I've created a Bash script to aid in setting the values for video stabilisation (link at end of post). It was intended as a way of getting to grips with the different settings, rather than as a final application. It initialises a crude interface using Zenity; however, all options can be set quickly with it, and it will build complete filters for the first and second passes. It also creates a video using FFmpeg's default values for MKV files. It produces a rudimentary log file, as follows:

**** Wed Jul  6 17:02:07 IST 2016 ****
ARRAY VALUES: |10||||||||200|||0|||1|-50|0||||
vidstabdetect=result=transforms.trf:shakiness=10:accuracy=15:stepsize=6:mincontrast=0.3:tripod=0:show=0
vidstabtransform=input=transforms.trf:smoothing=200:optalgo=gauss:maxshift=-1:maxangle=0:crop=keep:invert=0:relative=1:zoom=-50:optzoom=0:zoomspeed=0.25:interpol=bilinear:tripod=0:debug=0

# Info:
# 1: Time and date of the specific filtering run. The filter choices of each run on a video get added to the same log file.
# 2: Clearly shows the user specified values for filtering (blanks between '|' symbols indicate default value used)
# 3: Filtergraph used for first pass
# 4: Filtergraph used for second pass
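
# For reference, the two filtergraphs recorded in such a log are applied in two
# separate FFmpeg passes, roughly as follows (default settings; placeholder filenames):

# Pass 1: analyse motion and write the transforms file
ffmpeg -i input.mkv -vf vidstabdetect=result=transforms.trf -f null -

# Pass 2: read the transforms file and produce the stabilised output
ffmpeg -i input.mkv -vf vidstabtransform=input=transforms.trf stabilised.mkv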

To use 'vid.stab' features in FFmpeg, FFmpeg must be compiled using the following procedure (correct as of this post's date):

# Using the FFmpeg compilation method for GNU/Linux, found here
# https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu

# Add (or complete) the following to pre-compile procedures
# ---------------------------------------------------------
cd ~/ffmpeg_sources
wget -O vid-stab-master.tar.gz https://github.com/georgmartius/vid.stab/tarball/master
tar xzvf vid-stab-master.tar.gz
cd *vid.stab*
cmake .
make
sudo make install
# ---------------------------------------------------------

# When compiling FFmpeg, include '--enable-libvidstab' in './configure PATH'

# Create necessary symlinks to 'libvidstab.so' automatically by running
sudo ldconfig

On a final note, vid.stab refuses to work with videos of certain pixel formats, so I encoded all test videos as 'yuv420p', which worked without a problem.
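Converting beforehand is a one-liner (filenames are placeholders):

# Re-encode to the widely supported 'yuv420p' pixel format before stabilising
ffmpeg -i input.mkv -pix_fmt yuv420p -c:a copy input_yuv420p.mkv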


¹ Deshake has an advantage over vid.stab, in that it allows setting a region for motion search.
² In some other systems (like 'Deshaker' for VirtualDub) missing frame information can be interpolated from bi-directional frame analysis.

bash script: http://oioiiooixiii.blogspot.com/p/context-download-binbashset-e-x-script.html

vid.stab home: http://public.hronopik.de/vid.stab/features.php
vid.stab github: https://github.com/georgmartius/vid.stab
initial reading: https://www.epifocal.net/blog/video-stabilization-with-ffmpeg
source video: えんそく (Ensoku) https://www.youtube.com/watch?v=jWdQMgBlXEo

TOIlet (toilet) & FFmpeg: Capturing (formatted) terminal text output as video



for i in {0..9}; do echo "WONDERFUL $i"; done \
| toilet --gay \
| ffmpeg -f tty -i - tty-out.gif

# — oioiiooixiii {gifs} (@oioiiooixiii_) June 15, 2016

To use TOIlet formatted text on a webpage:

                                                      ▝       ▝   ▝           ▝       ▝   ▝   ▝    ▄▖ ▗▄   ▄▖ ▗▄  ▗▄   ▄▖  ▄▖ ▗▄  ▗ ▗ ▗▄  ▗▄  ▗▄   ▐▘▜  ▐  ▐▘▜  ▐   ▐  ▐▘▜ ▐▘▜  ▐   ▙▌  ▐   ▐   ▐   ▐ ▐  ▐  ▐ ▐  ▐   ▐  ▐ ▐ ▐ ▐  ▐   ▟▖  ▐   ▐   ▐   ▝▙▛ ▗▟▄ ▝▙▛ ▗▟▄ ▗▟▄ ▝▙▛ ▝▙▛ ▗▟▄ ▗▘▚ ▗▟▄ ▗▟▄ ▗▟▄                                                                                                   
# Basic syntax for html output
toilet -f smmono9 --html oioiiooixiii
                                                                                             ▗▄    ▗▄  ▗▄      ▗▄    ▗▄  ▗▄  ▗▄   ▐▘    ▐▘       ▐▘ ▐▘                                                ▝▙ ▗▟ ▝▙ ▗▟ ▗▟ ▝▙ ▝▙ ▗▟ ▗▘ ▗▟ ▗▟ ▗▟                                                                                                   
# Strips out <br /> tags to improve formatting in blogger
htmlText="$(toilet -f smmono9 --gay --html oioiiooixiii)"
echo "${htmlText//"<br />"/""}"
more info: http://libcaca.zoy.org/toilet.html