Feature request #45
I have never written in Python (I do mostly web design and PHP), but I am willing to give this a try; pointers/hand-holding would be appreciated.

import bpy
import numpy as np

# *** a bezier object/function needs to be defined based on the argument parameter values

def main():
    '''Here's an example usage of these functions'''
    x = 5.0  # *** this value is the frame number variable
    bez_obj = bpy.data.objects['BezierCurve']
    print('The y-value is', getYfromXforBezierObject(bez_obj, x))
def getYfromXforBezierObject(bez_obj, x):
    '''Given a Bezier object which, when projected on to the xy-plane, is a
    well-behaved function (i.e. each x-value has only one associated y-value),
    this returns the y-value for a given x-value.
    The higher resolution your curve is, the more accurate the resultant
    y-value will be.'''
    # find the appropriate segment
    segment_i = determineBezSegment(bez_obj, x)
    if len(bez_obj.data.splines) > 1:
        print("WARNING: your Bezier object has multiple splines!")
    spline = bez_obj.data.splines[0]
    # get the four points that control the cubic bezier segment
    P0 = spline.bezier_points[segment_i].co[:2]
    P1 = spline.bezier_points[segment_i].handle_right[:2]
    P2 = spline.bezier_points[segment_i+1].handle_left[:2]
    P3 = spline.bezier_points[segment_i+1].co[:2]
    y = getYfromXforBezSegment(P0, P1, P2, P3, x)
    return y
def determineBezSegment(bez_obj, x):
    '''Given a Bezier object and an x-value, determine which of the cubic
    segments corresponds to that x-value.
    A return value of 0 would represent the first segment.'''
    print(bez_obj.data)
    bez_points = bez_obj.data.splines[0].bezier_points
    if len(bez_obj.data.splines) > 1:
        print("WARNING: your Bezier object has multiple splines!")
    # loop through all segments and find which one contains x
    for segment_i in range(len(bez_points)-1):
        start_point = bez_points[segment_i]
        end_point = bez_points[segment_i+1]
        # break if this segment contains x
        if start_point.co[0] <= x < end_point.co[0]:
            break
    # check if no segment was found (for-else: runs only without a break)
    else:
        print("WARNING: no segment found for given x-value and Bezier object")
        return None
    return segment_i
def getYfromXforBezSegment(P0, P1, P2, P3, x):
    '''For a cubic Bezier segment described by the 2-tuples P0, ..., P3, return
    the y-value associated with the given x-value.
    Ex: getYfromXforBezSegment((0,0), (1,1), (2,1), (2,2), 3.2)'''
    # First, get the t-value associated with the x-value, where t is the
    # parameterization of the Bezier curve and ranges from 0 to 1.
    # We need the coefficients of the polynomial describing the cubic Bezier
    # (a cubic polynomial in t).
    coefficients = [-P0[0] + 3*P1[0] - 3*P2[0] + P3[0],
                    3*P0[0] - 6*P1[0] + 3*P2[0],
                    -3*P0[0] + 3*P1[0],
                    P0[0] - x]
    # find the roots of this polynomial to determine the parameter t
    roots = np.roots(coefficients)
    # find the root which is between 0 and 1, and is also real
    correct_root = None
    for root in roots:
        if np.isreal(root) and 0 <= root <= 1:
            correct_root = root
    # check to make sure a valid root was found
    if correct_root is None:
        print('Error, no valid root found. Are you sure your Bezier curve '
              'represents a valid function when projected into the xy-plane?')
    param_t = correct_root
    # from our value for the t parameter, find the corresponding y-value
    # using the formula for cubic Bezier curves
    y = (1-param_t)**3*P0[1] + 3*(1-param_t)**2*param_t*P1[1] + 3*(1-param_t)*param_t**2*P2[1] + param_t**3*P3[1]
    assert np.isreal(y)
    # typecast y from np.complex128 to float64
    y = y.real
    return y
if __name__ == '__main__':
    main()

If I understand correctly, the x-axis variable in the bezier would be the frame number, and the y-axis range would be the difference between the beginning and ending argument values. To implement this, the following arguments would need to be added (the start values could just be the original arguments). Or maybe the better way would be to implement parameters in the existing arguments, so for instance octaves would have the optional additional parameters: value1 (already implemented), value2 (ending value), x1, x2, y1, y2 (these are the parameters that define the bezier curve, and maybe should be p0, p1, p2, p3 to match the above code). We already have the x value, which is the frame number, and the y range will be the difference between value1 and value2. The above function returns y, which is the new argument value for whatever frame we are on. Please advise on the next best step for implementing this in your code :)
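For illustration, here is a minimal sketch of how getYfromXforBezSegment above could be driven per frame. The control points are made-up values (x spans a hypothetical 0-250 frame range, y ramps an argument from 4 to 8 with an ease-in/ease-out shape), not anything from the actual tool:

import numpy as np

P0 = (0.0, 4.0)     # start frame, start value (e.g. octaves = 4)
P1 = (62.0, 4.0)    # right handle of the start point: flat start, so ease in
P2 = (188.0, 8.0)   # left handle of the end point: flat finish, so ease out
P3 = (250.0, 8.0)   # end frame, end value (e.g. octaves = 8)

for frame_i in range(0, 251, 50):
    y = getYfromXforBezSegment(P0, P1, P2, P3, frame_i)
    print(frame_i, round(y, 2))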
So, first step: add nargs=6 to the above-mentioned arguments. Second step: in the following code I need to add a function, and modify the code below to pass octave_n = octaves_new instead of octaves.

for i in xrange(frame_i, nrframes):
    print('Processing frame #{}'.format(frame_i))
    # Choosing Layer
    if layers == 'customloop':  # loop over layers as set in layersloop array
        endparam = layersloop[frame_i % len(layersloop)]
    else:  # loop through layers one at a time until this specific layer
        endparam = layers[frame_i % len(layers)]
    # Choosing between normal dreaming and guided dreaming
    if guide_image is None:
        frame = deepdream(net, frame, image_type=image_type, verbose=verbose, iter_n=iterations, step_size=stepsize, octave_n=octaves, octave_scale=octave_scale, jitter=jitter, end=endparam)
    else:
        guide = np.float32(PIL.Image.open(guide_image))
        print('Setting up Guide with selected image')
        guide_features = prepare_guide(net, PIL.Image.open(guide_image), end=endparam)
        frame = deepdream_guided(net, frame, image_type=image_type, verbose=verbose, iter_n=iterations, step_size=stepsize, octave_n=octaves, octave_scale=octave_scale, jitter=jitter, end=endparam, objective_fn=objective_guide, guide_features=guide_features)
    saveframe = output + "/%08d.%s" % (frame_i, image_type)

Am I missing anything? Will this work? For a visualization of what the cubic bezier will do, see http://cubic-bezier.com/#0,.75,.43,1
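As a rough sketch of that first step (my guess at the wiring, untested): nargs=6 makes the option always demand six values, so one way to also keep the existing single-value usage working is nargs='*' plus a length check:

import argparse

parser = argparse.ArgumentParser()
# accept either one value (constant octaves) or six values
# (ending value plus the bezier control-point parameters)
parser.add_argument('-oct', '--octaves', nargs='*', type=int, required=False,
                    help='Octaves: one constant value, or six bezier parameters')
args = parser.parse_args(['-oct', '4'])

if args.octaves is not None and len(args.octaves) == 6:
    print('animating octaves along a bezier curve:', args.octaves)
else:
    octave_n = args.octaves[0] if args.octaves else 4  # existing default
    print('constant octaves:', octave_n)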
@jeremiahlamontagne yes, seems good to me and worth trying :) What would be nice is to have a choice between transitions: linear or bezier, and if bezier, possibly what kind of bezier? Or a separate "momentum" value: positive speeds up and then slows, negative starts slow and then speeds up. Kinda like the ease, linear, ease-in, ease-out, ease-in-out params in the link you sent.
Ok, cool. I am going to work on just getting it working next week. For now my goal is to get it to a point where I can make my own bezier curve (probably from that link) and manually put in the arg parameters on the command line. Once that is working I can consider putting in presets, like the standard CSS ease-in/out. Thanks for the quick response; I will update my progress next week.
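For reference, those CSS presets correspond to fixed cubic-bezier handle coordinates (the standard values from the CSS spec), so a preset lookup could be as simple as:

# CSS timing-function presets as (x1, y1, x2, y2) handle pairs,
# with the curve endpoints fixed at (0,0) and (1,1)
EASING_PRESETS = {
    'linear':      (0.0,  0.0,  1.0,  1.0),
    'ease':        (0.25, 0.1,  0.25, 1.0),
    'ease-in':     (0.42, 0.0,  1.0,  1.0),
    'ease-out':    (0.0,  0.0,  0.58, 1.0),
    'ease-in-out': (0.42, 0.0,  0.58, 1.0),
}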
@jeremiahlamontagne k, good stuff! Looking forward to seeing what you come up with :)
Still messy, but proof of concept. It does not work as well with octaves, since they move in whole numbers, so I am having to round, and every x frames it will bump the octave up by 1. I am considering rethinking this, maybe by coupling octaves and iterations.

Edit: sorry, I also added guided images based on my original frames, so that would break it for anyone who is not me (change the guide image code back to the original); I also added iteration to the code below. Also sorry for sucking at GitHub and pasting so much redundant code below; I haven't taken the time to figure out how to do the diff thing.

Edit 2: I also have a coordinate pair screwed up in the below code. Running tests now.

#!/usr/bin/python
__author__ = 'graphific'
import argparse
import os, os.path
import errno
import sys
import time
from random import randint
from cStringIO import StringIO
import numpy as np
import scipy.ndimage as nd
import PIL.Image
from google.protobuf import text_format
import caffe
def round_to(n, precision):
correction = 0.5 if n >= 0 else -0.5
return int( n/precision+correction ) * precision
def getYfromXforBezSegment(P0, P1, P2, P3, x):
'''For a cubic Bezier segment described by the 2-tuples P0, ..., P3, return
the y-value associated with the given x-value.
Ex: getYfromXforBezSegment((0,0), (1,1), (2,1), (2,2), 3.2)'''
#First, get the t-value associated with x-value, where t is the
#parameterization of the Bezier curve and ranges from 0 to 1.
#We need the coefficients of the polynomial describing cubic Bezier
#(cubic polynomial in t)
coefficients = [-P0[0] + 3*P1[0] - 3*P2[0] + P3[0],
3*P0[0] - 6*P1[0] + 3*P2[0],
-3*P0[0] + 3*P1[0],
P0[0] - x]
print(coefficients)
#find roots of this polynomial to determine the parameter t
roots = np.roots(coefficients)
#find the root which is between 0 and 1, and is also real
correct_root = None
for root in roots:
if np.isreal(root) and 0 <= root <= x:  # NB: t should range 0 to 1; the Blender snippet above used 0 <= root <= 1
correct_root = root
#check to make sure a valid root was found
if correct_root is None:
print('Error, no valid root found. Are you sure your Bezier curve '
'represents a valid function when projected into the xy-plane?')
param_t = correct_root
#from our value for the t parameter, find the corresponding y-value using formula for
#cubic Bezier curves
y = (1-param_t)**3*P0[1] + 3*(1-param_t)**2*param_t*P1[1] + 3*(1-param_t)*param_t**2*P2[1] + param_t**3*P3[1]
assert np.isreal(y)
# typecast y from np.complex128 to float64
y = y.real
return y
def showarray(a, fmt='jpeg'):
a = np.uint8(np.clip(a, 0, 255))
f = StringIO()
PIL.Image.fromarray(a).save(f, fmt)
display(Image(data=f.getvalue()))
def showarrayHQ(a, fmt='png'):
a = np.uint8(np.clip(a, 0, 255))
f = StringIO()
PIL.Image.fromarray(a).save(f, fmt)
display(Image(data=f.getvalue()))
# a couple of utility functions for converting to and from Caffe's input image layout
def preprocess(net, img):
#print np.float32(img).shape
return np.float32(np.rollaxis(img, 2)[::-1]) - net.transformer.mean['data']
def deprocess(net, img):
return np.dstack((img + net.transformer.mean['data'])[::-1])
def objective_L2(dst):
dst.diff[:] = dst.data
#objective for guided dreaming
def objective_guide(dst,guide_features):
x = dst.data[0].copy()
y = guide_features
ch = x.shape[0]
x = x.reshape(ch,-1)
y = y.reshape(ch,-1)
A = x.T.dot(y) # compute the matrix of dot-products with guide features
dst.diff[0].reshape(ch,-1)[:] = y[:,A.argmax(1)] # select ones that match best
#from https://github.com/jrosebr1/bat-country/blob/master/batcountry/batcountry.py
def prepare_guide(net, image, end="inception_4c/output", maxW=224, maxH=224):
# grab dimensions of input image
(w, h) = image.size
# GoogLeNet was trained on images with maximum width and heights
# of 224 pixels -- if either dimension is larger than 224 pixels,
# then we'll need to do some resizing
if h > maxH or w > maxW:
# resize based on width
if w > h:
r = maxW / float(w)
# resize based on height
else:
r = maxH / float(h)
# resize the image
(nW, nH) = (int(r * w), int(r * h))
image = np.float32(image.resize((nW, nH), PIL.Image.BILINEAR))
(src, dst) = (net.blobs["data"], net.blobs[end])
src.reshape(1, 3, nH, nW)
src.data[0] = preprocess(net, image)
net.forward(end=end)
guide_features = dst.data[0].copy()
return guide_features
# -------
# Make dreams
# -------
def make_step(net, step_size=1.5, end='inception_4c/output', jitter=32, clip=True):
'''Basic gradient ascent step.'''
src = net.blobs['data'] # input image is stored in Net's 'data' blob
dst = net.blobs[end]
ox, oy = np.random.randint(-jitter, jitter + 1, 2)
src.data[0] = np.roll(np.roll(src.data[0], ox, -1), oy, -2) # apply jitter shift
net.forward(end=end)
dst.diff[:] = dst.data # specify the optimization objective
net.backward(start=end)
g = src.diff[0]
# apply normalized ascent step to the input image
src.data[:] += step_size / np.abs(g).mean() * g
src.data[0] = np.roll(np.roll(src.data[0], -ox, -1), -oy, -2) # unshift image
if clip:
bias = net.transformer.mean['data']
src.data[:] = np.clip(src.data, -bias, 255-bias)
def deepdream(net, base_img, image_type, iter_n=10, octave_n=4, octave_scale=1.4, end='inception_4c/output', verbose = 1, clip=True, **step_params):
# prepare base images for all octaves
octaves = [preprocess(net, base_img)]
for i in xrange(octave_n - 1):
octaves.append(nd.zoom(octaves[-1], (1, 1.0 / octave_scale, 1.0 / octave_scale), order=1))
src = net.blobs['data']
detail = np.zeros_like(octaves[-1]) # allocate image for network-produced details
for octave, octave_base in enumerate(octaves[::-1]):
h, w = octave_base.shape[-2:]
if octave > 0:
# upscale details from the previous octave
h1, w1 = detail.shape[-2:]
detail = nd.zoom(detail, (1, 1.0 * h / h1, 1.0 * w / w1), order=1)
src.reshape(1,3,h,w) # resize the network's input image size
src.data[0] = octave_base+detail
for i in xrange(iter_n):
make_step(net, end=end, clip=clip, **step_params)
# visualization
vis = deprocess(net, src.data[0])
if not clip: # adjust image contrast if clipping is disabled
vis = vis * (255.0 / np.percentile(vis, 99.98))
if verbose == 3:
if image_type == "png":
showarrayHQ(vis)
elif image_type == "jpg":
showarray(vis)
print(octave, i, end, vis.shape)
clear_output(wait=True)
elif verbose == 2:
print(octave, i, end, vis.shape)
# extract details produced on the current octave
detail = src.data[0]-octave_base
# returning the resulting image
return deprocess(net, src.data[0])
# --------------
# Guided Dreaming
# --------------
def make_step_guided(net, step_size=1.5, end='inception_4c/output',
jitter=32, clip=True, objective_fn=objective_guide, **objective_params):
'''Basic gradient ascent step.'''
#if objective_fn is None:
# objective_fn = objective_L2
src = net.blobs['data'] # input image is stored in Net's 'data' blob
dst = net.blobs[end]
ox, oy = np.random.randint(-jitter, jitter+1, 2)
src.data[0] = np.roll(np.roll(src.data[0], ox, -1), oy, -2) # apply jitter shift
net.forward(end=end)
objective_fn(dst, **objective_params) # specify the optimization objective
net.backward(start=end)
g = src.diff[0]
# apply normalized ascent step to the input image
src.data[:] += step_size/np.abs(g).mean() * g
src.data[0] = np.roll(np.roll(src.data[0], -ox, -1), -oy, -2) # unshift image
if clip:
bias = net.transformer.mean['data']
src.data[:] = np.clip(src.data, -bias, 255-bias)
def deepdream_guided(net, base_img, image_type, iter_n=10, octave_n=4, octave_scale=1.4, end='inception_4c/output', clip=True, verbose=1, objective_fn=objective_guide, **step_params):
#if objective_fn is None:
# objective_fn = objective_L2
# prepare base images for all octaves
octaves = [preprocess(net, base_img)]
for i in xrange(octave_n-1):
octaves.append(nd.zoom(octaves[-1], (1, 1.0/octave_scale,1.0/octave_scale), order=1))
src = net.blobs['data']
detail = np.zeros_like(octaves[-1]) # allocate image for network-produced details
for octave, octave_base in enumerate(octaves[::-1]):
h, w = octave_base.shape[-2:]
if octave > 0:
# upscale details from the previous octave
h1, w1 = detail.shape[-2:]
detail = nd.zoom(detail, (1, 1.0*h/h1,1.0*w/w1), order=1)
src.reshape(1,3,h,w) # resize the network's input image size
src.data[0] = octave_base+detail
for i in xrange(iter_n):
make_step_guided(net, end=end, clip=clip, objective_fn=objective_fn, **step_params)
# visualization
vis = deprocess(net, src.data[0])
if not clip: # adjust image contrast if clipping is disabled
vis = vis*(255.0/np.percentile(vis, 99.98))
if verbose == 3:
if image_type == "png":
showarrayHQ(vis)
elif image_type == "jpg":
showarray(vis)
print octave, i, end, vis.shape
clear_output(wait=True)
elif verbose == 2:
print octave, i, end, vis.shape
# extract details produced on the current octave
detail = src.data[0]-octave_base
# returning the resulting image
return deprocess(net, src.data[0])
def resizePicture(image,width):
img = PIL.Image.open(image)
basewidth = width
wpercent = (basewidth/float(img.size[0]))
hsize = int((float(img.size[1])*float(wpercent)))
return img.resize((basewidth,hsize), PIL.Image.ANTIALIAS)
def morphPicture(filename1,filename2,blend,width):
img1 = PIL.Image.open(filename1)
img2 = PIL.Image.open(filename2)
if width is not 0:
img2 = resizePicture(filename2,width)
return PIL.Image.blend(img1, img2, blend)
def make_sure_path_exists(path):
'''
make sure input and output directory exist, if not create them.
If another error (permission denied) throw an error.
'''
try:
os.makedirs(path)
except OSError as exception:
if exception.errno != errno.EEXIST:
raise
layersloop = ['conv2/norm2', 'inception_3a/3x3_reduce']
# layersloop = ['inception_4c/output', 'inception_4d/output',
# 'inception_4e/output', 'inception_5a/output',
# 'inception_5b/output', 'inception_5a/output',
# 'inception_4e/output', 'inception_4d/output',
# 'inception_4c/output']
def main(input, output, image_type, gpu, model_path, model_name, preview, octaves, octave_scale, iterations, jitter, zoom, stepsize, blend, layers, guide_image, start_frame, end_frame, verbose):
make_sure_path_exists(input)
make_sure_path_exists(output)
# get max nr of frames
nrframes =len([name for name in os.listdir(input) if os.path.isfile(os.path.join(input, name))])
if nrframes == 0:
print("no frames to process found")
sys.exit(0)
if preview is None: preview = 0
if octaves is None: octaves = 4
if octave_scale is None: octave_scale = 1.5
if iterations is None: iterations = 5
if jitter is None: jitter = 32
if zoom is None: zoom = 1
if stepsize is None: stepsize = 1.5
if blend is None: blend = 0.5 #can be nr (constant), random, or loop
if verbose is None: verbose = 1
if layers is None: layers = 'customloop' #['inception_4c/output']
if start_frame is None:
frame_i = 1
else:
frame_i = int(start_frame)
if not end_frame is None:
nrframes = int(end_frame)+1
else:
nrframes = nrframes+1
#Load DNN
net_fn = model_path + 'deploy.prototxt'
param_fn = model_path + model_name #'bvlc_googlenet.caffemodel'
# Patching model to be able to compute gradients.
# Note that you can also manually add "force_backward: true" line to "deploy.prototxt".
model = caffe.io.caffe_pb2.NetParameter()
text_format.Merge(open(net_fn).read(), model)
model.force_backward = True
open('tmp.prototxt', 'w').write(str(model))
net = caffe.Classifier('tmp.prototxt', param_fn,
mean = np.float32([104.0, 116.0, 122.0]), # ImageNet mean, training set dependent
channel_swap = (2,1,0)) # the reference model has channels in BGR order instead of RGB
if gpu is None:
print("SHITTTTTTTTTTTTTT You're running CPU man =D")
else:
caffe.set_mode_gpu()
caffe.set_device(int(args.gpu))
print("GPU mode [device id: %s]" % args.gpu)
print("using GPU, but you'd still better make a cup of coffee")
if verbose == 3:
from IPython.display import clear_output, Image, display
print("display turned on")
frame = np.float32(PIL.Image.open(input + '/%08d.%s' % (frame_i, image_type) ))
if preview is not 0:
frame = np.float32(resizePicture(input + '/%08d.%s' % (frame_i, image_type), preview))
now = time.time()
if blend == 'loop':
blend_forward = True
blend_at = 0.4
blend_step = 0.1
for i in xrange(frame_i, nrframes):
print('Processing frame #{}'.format(frame_i))
#Choosing Layer
if layers == 'customloop': #loop over layers as set in layersloop array
endparam = layersloop[frame_i % len(layersloop)]
else: #loop through layers one at a time until this specific layer
endparam = layers[frame_i % len(layers)]
#Look for Bezier curves
if len(octaves) == 6:  # NB: len() will raise TypeError when octaves is left at its integer default of 4
print('octaves is 6')
print(octaves)
x = frame_i
print(x)
P0x = int(0)
P0y = int(0)
P1x = int(nrframes-1)
print(P1x)
P1y = int(octaves[1])
P0 = (P0x, P0y)
P1 = (P1x, P1y)
P2 = (int(octaves[2]), int(octaves[3]))
P3 = (int(octaves[4]), int(octaves[5]))
octaves_new = round_to(getYfromXforBezSegment(P0, P1, P2, P3, x), 1)
print('octn:')
print(getYfromXforBezSegment(P0, P1, P2, P3, x))
print(octaves_new)
else: print('fail')
if len(iterations) == 6:
print('iterations is 6')
print(iterations)
x = frame_i
print(x)
P0x = int(0)
P0y = int(0)
P1x = int(nrframes-1)
print(P1x)
P1y = int(iterations[1])
P0 = (P0x, P0y)
P1 = (P1x, P1y)
P2 = (int(iterations[2]), int(iterations[3]))
P3 = (int(iterations[4]), int(iterations[5]))
iterations_new = round_to(getYfromXforBezSegment(P0, P1, P2, P3, x), 1)
print('itrn:')
print(getYfromXforBezSegment(P0, P1, P2, P3, x))
print(iterations_new)
else: print('fail')
#Choosing between normal dreaming, and guided dreaming
if guide_image is None:
frame = deepdream(net, frame, image_type=image_type, verbose=verbose, iter_n = iterations, step_size = stepsize, octave_n = octaves_new, octave_scale = octave_scale, jitter=jitter, end = endparam)
else:
guide = np.float32(PIL.Image.open(input + '/resize/tr_%01d.%s' % (frame_i, image_type) ))
print('Setting up Guide with selected image')
guide_name = input + '/resize/tr_%01d.%s' % (frame_i, image_type)
print(guide_name)
guide_features = np.float32(PIL.Image.open(input + '/resize/tr_%01d.%s' % (frame_i, image_type) ), end=endparam)
frame = deepdream_guided(net, frame, image_type=image_type, verbose=verbose, iter_n = iterations, step_size = stepsize, octave_n = octaves_new, octave_scale = octave_scale, jitter=jitter, end = endparam, objective_fn=objective_guide, guide_features=guide_features,)
saveframe = output + "/%08d.%s" % (frame_i, image_type)
later = time.time()
difference = int(later - now)
# Stats (stolen + adapted from Samim: https://github.com/samim23/DeepDreamAnim/blob/master/dreamer.py)
print '***************************************'
print 'Saving Image As: ' + saveframe
print 'Frame ' + str(i) + ' of ' + str(nrframes-1)
print 'Frame Time: ' + str(difference) + 's'
timeleft = difference * (nrframes - frame_i)
m, s = divmod(timeleft, 60)
h, m = divmod(m, 60)
print 'Estimated Total Time Remaining: ' + str(timeleft) + 's (' + "%d:%02d:%02d" % (h, m, s) + ')'
print '***************************************'
PIL.Image.fromarray(np.uint8(frame)).save(saveframe)
newframe = input + "/%08d.%s" % (frame_i,image_type)
if blend == 0:
newimg = PIL.Image.open(newframe)
if preview is not 0:
newimg = resizePicture(newframe,preview)
frame = newimg
else:
if blend == 'random':
blendval=randint(5,10)/10.
elif blend == 'loop':
if blend_at > 1 - blend_step: blend_forward = False
elif blend_at <= 0.5: blend_forward = True
if blend_forward: blend_at += blend_step
else: blend_at -= blend_step
blendval = blend_at
else: blendval = float(blend)
frame = morphPicture(saveframe,newframe,blendval,preview)
frame = np.float32(frame)
now = time.time()
frame_i += 1
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='Dreaming in videos.')
parser.add_argument(
'-i','--input',
help='Input directory where extracted frames are stored',
required=True)
parser.add_argument(
'-o','--output',
help='Output directory where processed frames are to be stored',
required=True)
parser.add_argument(
'-it','--image_type',
help='Specify whether jpg or png ',
required=True)
parser.add_argument(
"--gpu",
default= None,
help="Switch for gpu computation."
) #int can chose index of gpu, if there are multiple gpu's to chose from
parser.add_argument(
'-t', '--model_path',
dest='model_path',
default='../caffe/models/bvlc_googlenet/',
help='Model directory to use')
parser.add_argument(
'-m', '--model_name',
dest='model_name',
default='bvlc_googlenet.caffemodel',
help='Caffe Model name to use')
parser.add_argument(
'-p','--preview',
type=int,
required=False,
help='Preview image width. Default: 0')
parser.add_argument(
'-oct','--octaves',
nargs=6,
type=int,
required=False,
help='Octaves. Default: 4')
parser.add_argument(
'-octs','--octavescale',
type=float,
required=False,
help='Octave Scale. Default: 1.4',)
parser.add_argument(
'-itr','--iterations',
type=int,
required=False,
help='Iterations. Default: 10')
parser.add_argument(
'-j','--jitter',
type=int,
required=False,
help='Jitter. Default: 32')
parser.add_argument(
'-z','--zoom',
type=int,
required=False,
help='Zoom in Amount. Default: 1')
parser.add_argument(
'-s','--stepsize',
type=float,
required=False,
help='Step Size. Default: 1.5')
parser.add_argument(
'-b','--blend',
type=str,
required=False,
help='Blend Amount. Default: "0.5" (constant), or "loop" (0.5-1.0), or "random"')
parser.add_argument(
'-l','--layers',
nargs="+",
type=str,
required=False,
help='Array of Layers to loop through. Default: [customloop] \
- or choose ie [inception_4c/output] for that single layer')
parser.add_argument(
'-v', '--verbose',
type=int,
required=False,
help="verbosity [0-3]")
parser.add_argument(
'-gi', '--guide_image',
required=False,
help="path to guide image")
parser.add_argument(
'-sf', '--start_frame',
type=int,
required=False,
help="starting frame nr")
parser.add_argument(
'-ef', '--end_frame',
type=int,
required=False,
help="end frame nr")
args = parser.parse_args()
if not args.model_path[-1] == '/':
args.model_path = args.model_path + '/'
if not os.path.exists(args.model_path):
print("Model directory not found")
print("Please set the model_path to a correct caffe model directory")
sys.exit(0)
model = os.path.join(args.model_path, args.model_name)
if not os.path.exists(model):
print("Model not found")
print("Please set the model_name to a correct caffe model")
print("or download one with ./caffe_dir/scripts/download_model_binary.py caffe_dir/models/bvlc_googlenet")
sys.exit(0)
main(args.input, args.output, args.image_type, args.gpu, args.model_path, args.model_name, args.preview, args.octaves, args.octavescale, args.iterations, args.jitter, args.zoom, args.stepsize, args.blend, args.layers, args.guide_image, args.start_frame, args.end_frame, args.verbose)
@jeremiahlamontagne looking nice! Would love to see an example or something, and feel free to make a pull request when you're up for releasing a stable enough version :)
It is working. The difference is not as pronounced as I might have thought in some cases; I think it works very well with scaling iterations, though. I will upload two example videos with the new command-line parameters. As you can see, my code is pretty ugly since this is my first time working in Python, so we will see about the pull request.
@jeremiahlamontagne awesome! Looking forward to it.
Gah, it is not working. For some reason the function is returning values outside the range. I don't know quite enough about cubic polynomials to fix the math from the Blender example; it may work, but import bpy is only available from inside Blender's interpreter. So I am trying a different method, using the following code as the base for calculating y = f(x).

import numpy as np
from scipy.misc import comb  # note: comb moved to scipy.special in later SciPy versions

def bernstein_poly(i, n, t):
    """
    The Bernstein polynomial of n, i as a function of t
    """
    return comb(n, i) * ( t**(n-i) ) * (1 - t)**i

def bezier_curve(points, nTimes=2675):
    """
    Given a set of control points, return the
    bezier curve defined by the control points.
    points should be a list of lists, or list of tuples
    such as [ [1,1],
              [2,3],
              [4,5], ..[Xn, Yn] ]
    nTimes is the number of time steps, defaults to 1000
    See http://processingjs.nihongoresources.com/bezierinfo/
    """
    nPoints = len(points)
    xPoints = np.array([p[0] for p in points])
    yPoints = np.array([p[1] for p in points])
    t = np.linspace(0.0, 1.0, nTimes)
    polynomial_array = np.array([ bernstein_poly(i, nPoints-1, t) for i in range(0, nPoints) ])
    xvals = np.dot(xPoints, polynomial_array)
    yvals = np.dot(yPoints, polynomial_array)
    return xvals, yvals

if __name__ == "__main__":
    from matplotlib import pyplot as plt
    points = [(0,0), (0,4), (2675,4)]
    xpoints = [p[0] for p in points]
    ypoints = [p[1] for p in points]
    xvals, yvals = bezier_curve(points, nTimes=2675)
    # plt.plot(xvals, yvals)
    # plt.plot(xpoints, ypoints, "ro")
    # for nr in range(len(points)):
    #     plt.text(points[nr][0], points[nr][1], nr)
    # plt.show()
    print(xvals)
    print(yvals)

There is only one easing factor here, but if I want to have ease-in and ease-out I could divide the movie in half and add another set of coordinates. Better to have limited functionality that functions properly.
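One thing worth checking with this approach: bezier_curve samples the curve at evenly spaced t, not evenly spaced x, so xvals is generally not uniform in frame numbers and indexing yvals directly by frame number is only an approximation. np.interp can recover the y for an exact frame (a sketch, assuming the bernstein_poly/bezier_curve definitions above):

import numpy as np

xvals, yvals = bezier_curve([(0, 0), (0, 4), (2675, 4)], nTimes=2675)
# with this bernstein_poly the curve runs from the last control point
# to the first as t goes 0 to 1, so flip to make xvals ascending
xs, ys = xvals[::-1], yvals[::-1]
frame_i = 1000
print(np.interp(frame_i, xs, ys))  # y-value at that exact frame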
#!/usr/bin/python
__author__ = 'graphific'
import argparse
import os, os.path
import errno
import sys
import time
from random import randint
from cStringIO import StringIO
import numpy as np
import scipy.ndimage as nd
import PIL.Image
from google.protobuf import text_format
from scipy.misc import comb
import caffe
def round_to(n, precision):
correction = 0.5 if n >= 0 else -0.5
return int( n/precision+correction ) * precision
def fullprint(*args, **kwargs):
from pprint import pprint
import numpy
opt = numpy.get_printoptions()
numpy.set_printoptions(threshold='nan')
pprint(*args, **kwargs)
numpy.set_printoptions(**opt)
def bernstein_poly(i, n, t):
"""
The Bernstein polynomial of n, i as a function of t
"""
return comb(n, i) * ( t**(n-i) ) * (1 - t)**i
def bezier_curve(points, nTimes=2675):
# """
# Given a set of control points, return the
# bezier curve defined by the control points.
# points should be a list of lists, or list of tuples
# such as [ [1,1],
# [2,3],
# [4,5], ..[Xn, Yn] ]
# nTimes is the number of time steps, defaults to 1000
# See http://processingjs.nihongoresources.com/bezierinfo/
# """
nPoints = len(points)
xPoints = np.array([p[0] for p in points])
yPoints = np.array([p[1] for p in points])
t = np.linspace(0.0, 1.0, nTimes)
polynomial_array = np.array([ bernstein_poly(i, nPoints-1, t) for i in range(0, nPoints) ])
xvals = np.dot(xPoints, polynomial_array)
yvals = np.dot(yPoints, polynomial_array)
return xvals, yvals
def showarray(a, fmt='jpeg'):
a = np.uint8(np.clip(a, 0, 255))
f = StringIO()
PIL.Image.fromarray(a).save(f, fmt)
display(Image(data=f.getvalue()))
def showarrayHQ(a, fmt='png'):
a = np.uint8(np.clip(a, 0, 255))
f = StringIO()
PIL.Image.fromarray(a).save(f, fmt)
display(Image(data=f.getvalue()))
# a couple of utility functions for converting to and from Caffe's input image layout
def preprocess(net, img):
#print np.float32(img).shape
return np.float32(np.rollaxis(img, 2)[::-1]) - net.transformer.mean['data']
def deprocess(net, img):
return np.dstack((img + net.transformer.mean['data'])[::-1])
def objective_L2(dst):
dst.diff[:] = dst.data
#objective for guided dreaming
def objective_guide(dst,guide_features):
x = dst.data[0].copy()
y = guide_features
ch = x.shape[0]
x = x.reshape(ch,-1)
y = y.reshape(ch,-1)
A = x.T.dot(y) # compute the matrix of dot-products with guide features
dst.diff[0].reshape(ch,-1)[:] = y[:,A.argmax(1)] # select ones that match best
#from https://github.com/jrosebr1/bat-country/blob/master/batcountry/batcountry.py
def prepare_guide(net, image, end="inception_4c/output", maxW=224, maxH=224):
# grab dimensions of input image
(w, h) = image.size
# GoogLeNet was trained on images with maximum width and heights
# of 224 pixels -- if either dimension is larger than 224 pixels,
# then we'll need to do some resizing
if h > maxH or w > maxW:
# resize based on width
if w > h:
r = maxW / float(w)
# resize based on height
else:
r = maxH / float(h)
# resize the image
(nW, nH) = (int(r * w), int(r * h))
image = np.float32(image.resize((nW, nH), PIL.Image.BILINEAR))
(src, dst) = (net.blobs["data"], net.blobs[end])
src.reshape(1, 3, nH, nW)
src.data[0] = preprocess(net, image)
net.forward(end=end)
guide_features = dst.data[0].copy()
return guide_features
# -------
# Make dreams
# -------
def make_step(net, step_size=1.5, end='inception_4c/output', jitter=32, clip=True):
'''Basic gradient ascent step.'''
src = net.blobs['data'] # input image is stored in Net's 'data' blob
dst = net.blobs[end]
ox, oy = np.random.randint(-jitter, jitter + 1, 2)
src.data[0] = np.roll(np.roll(src.data[0], ox, -1), oy, -2) # apply jitter shift
net.forward(end=end)
dst.diff[:] = dst.data # specify the optimization objective
net.backward(start=end)
g = src.diff[0]
# apply normalized ascent step to the input image
src.data[:] += step_size / np.abs(g).mean() * g
src.data[0] = np.roll(np.roll(src.data[0], -ox, -1), -oy, -2) # unshift image
if clip:
bias = net.transformer.mean['data']
src.data[:] = np.clip(src.data, -bias, 255-bias)
def deepdream(net, base_img, image_type, iter_n=10, octave_n=4, octave_scale=1.4, end='inception_4c/output', verbose = 1, clip=True, **step_params):
# prepare base images for all octaves
octaves = [preprocess(net, base_img)]
for i in xrange(octave_n - 1):
octaves.append(nd.zoom(octaves[-1], (1, 1.0 / octave_scale, 1.0 / octave_scale), order=1))
src = net.blobs['data']
detail = np.zeros_like(octaves[-1]) # allocate image for network-produced details
for octave, octave_base in enumerate(octaves[::-1]):
h, w = octave_base.shape[-2:]
if octave > 0:
# upscale details from the previous octave
h1, w1 = detail.shape[-2:]
detail = nd.zoom(detail, (1, 1.0 * h / h1, 1.0 * w / w1), order=1)
src.reshape(1,3,h,w) # resize the network's input image size
src.data[0] = octave_base+detail
print(iter_n)
for i in xrange(iter_n):
make_step(net, end=end, clip=clip, **step_params)
# visualization
vis = deprocess(net, src.data[0])
if not clip: # adjust image contrast if clipping is disabled
vis = vis * (255.0 / np.percentile(vis, 99.98))
if verbose == 3:
if image_type == "png":
showarrayHQ(vis)
elif image_type == "jpg":
showarray(vis)
print(octave, i, end, vis.shape)
clear_output(wait=True)
elif verbose == 2:
print(octave, i, end, vis.shape)
# extract details produced on the current octave
detail = src.data[0]-octave_base
# returning the resulting image
return deprocess(net, src.data[0])
# --------------
# Guided Dreaming
# --------------
def make_step_guided(net, step_size=1.5, end='inception_4c/output',
jitter=32, clip=True, objective_fn=objective_guide, **objective_params):
'''Basic gradient ascent step.'''
#if objective_fn is None:
# objective_fn = objective_L2
src = net.blobs['data'] # input image is stored in Net's 'data' blob
dst = net.blobs[end]
ox, oy = np.random.randint(-jitter, jitter+1, 2)
src.data[0] = np.roll(np.roll(src.data[0], ox, -1), oy, -2) # apply jitter shift
net.forward(end=end)
objective_fn(dst, **objective_params) # specify the optimization objective
net.backward(start=end)
g = src.diff[0]
# apply normalized ascent step to the input image
src.data[:] += step_size/np.abs(g).mean() * g
src.data[0] = np.roll(np.roll(src.data[0], -ox, -1), -oy, -2) # unshift image
if clip:
bias = net.transformer.mean['data']
src.data[:] = np.clip(src.data, -bias, 255-bias)
def deepdream_guided(net, base_img, image_type, iter_n=10, octave_n=4, octave_scale=1.4, end='inception_4c/output', clip=True, verbose=1, objective_fn=objective_guide, **step_params):
#if objective_fn is None:
# objective_fn = objective_L2
# prepare base images for all octaves
octaves = [preprocess(net, base_img)]
for i in xrange(octave_n-1):
octaves.append(nd.zoom(octaves[-1], (1, 1.0/octave_scale,1.0/octave_scale), order=1))
src = net.blobs['data']
detail = np.zeros_like(octaves[-1]) # allocate image for network-produced details
for octave, octave_base in enumerate(octaves[::-1]):
h, w = octave_base.shape[-2:]
if octave > 0:
# upscale details from the previous octave
h1, w1 = detail.shape[-2:]
detail = nd.zoom(detail, (1, 1.0*h/h1,1.0*w/w1), order=1)
src.reshape(1,3,h,w) # resize the network's input image size
src.data[0] = octave_base+detail
for i in xrange(iter_n):
make_step_guided(net, end=end, clip=clip, objective_fn=objective_fn, **step_params)
# visualization
vis = deprocess(net, src.data[0])
if not clip: # adjust image contrast if clipping is disabled
vis = vis*(255.0/np.percentile(vis, 99.98))
if verbose == 3:
if image_type == "png":
showarrayHQ(vis)
elif image_type == "jpg":
showarray(vis)
print octave, i, end, vis.shape
clear_output(wait=True)
elif verbose == 2:
print octave, i, end, vis.shape
# extract details produced on the current octave
detail = src.data[0]-octave_base
# returning the resulting image
return deprocess(net, src.data[0])
def resizePicture(image,width):
img = PIL.Image.open(image)
basewidth = width
wpercent = (basewidth/float(img.size[0]))
hsize = int((float(img.size[1])*float(wpercent)))
return img.resize((basewidth,hsize), PIL.Image.ANTIALIAS)
def morphPicture(filename1,filename2,blend,width):
img1 = PIL.Image.open(filename1)
img2 = PIL.Image.open(filename2)
if width is not 0:
img2 = resizePicture(filename2,width)
return PIL.Image.blend(img1, img2, blend)
def make_sure_path_exists(path):
'''
make sure input and output directory exist, if not create them.
If another error (permission denied) throw an error.
'''
try:
os.makedirs(path)
except OSError as exception:
if exception.errno != errno.EEXIST:
raise
# layersloop = ['conv2/norm2', 'inception_3a/3x3_reduce']
layersloop = ['inception_4c/output', 'inception_4d/output',
'inception_4e/output', 'inception_5a/output',
'inception_5b/output', 'inception_5a/output',
'inception_4e/output', 'inception_4d/output',
'inception_4c/output']
def main(input, output, image_type, gpu, model_path, model_name, preview, octaves, octave_scale, iterations, jitter, zoom, stepsize, blend, layers, guide_image, start_frame, end_frame, verbose):
make_sure_path_exists(input)
make_sure_path_exists(output)
# get max nr of frames
nrframes =len([name for name in os.listdir(input) if os.path.isfile(os.path.join(input, name))])
if nrframes == 0:
print("no frames to process found")
sys.exit(0)
if preview is None: preview = 0
if octaves is None: octaves = 4
if octave_scale is None: octave_scale = 1.5
if iterations is None: iterations = 5
if jitter is None: jitter = 32
if zoom is None: zoom = 1
if stepsize is None: stepsize = 1.5
if blend is None: blend = 0.5 #can be nr (constant), random, or loop
if verbose is None: verbose = 1
if layers is None: layers = 'customloop' #['inception_4c/output']
if start_frame is None:
frame_i = 1
else:
frame_i = int(start_frame)
if not end_frame is None:
nrframes = int(end_frame)+1
else:
nrframes = nrframes+1
#Load DNN
net_fn = model_path + 'deploy.prototxt'
param_fn = model_path + model_name #'bvlc_googlenet.caffemodel'
# Patching model to be able to compute gradients.
# Note that you can also manually add "force_backward: true" line to "deploy.prototxt".
model = caffe.io.caffe_pb2.NetParameter()
text_format.Merge(open(net_fn).read(), model)
model.force_backward = True
open('tmp.prototxt', 'w').write(str(model))
net = caffe.Classifier('tmp.prototxt', param_fn,
mean = np.float32([104.0, 116.0, 122.0]), # ImageNet mean, training set dependent
channel_swap = (2,1,0)) # the reference model has channels in BGR order instead of RGB
if gpu is None:
print("SHITTTTTTTTTTTTTT You're running CPU man =D")
else:
caffe.set_mode_gpu()
caffe.set_device(int(args.gpu))
print("GPU mode [device id: %s]" % args.gpu)
print("using GPU, but you'd still better make a cup of coffee")
if verbose == 3:
from IPython.display import clear_output, Image, display
print("display turned on")
frame = np.float32(PIL.Image.open(input + '/%08d.%s' % (frame_i, image_type) ))
if preview is not 0:
frame = np.float32(resizePicture(input + '/%08d.%s' % (frame_i, image_type), preview))
now = time.time()
if blend == 'loop':
blend_forward = True
blend_at = 0.4
blend_step = 0.1
for i in xrange(frame_i, nrframes):
print('Processing frame #{}'.format(frame_i))
#Choosing Layer
if layers == 'customloop': #loop over layers as set in layersloop array
endparam = layersloop[frame_i % len(layersloop)]
else: #loop through layers one at a time until this specific layer
endparam = layers[frame_i % len(layers)]
#Look for Bezier curves
if len(octaves) == 6:  # NB: len() will raise TypeError when octaves is left at its integer default of 4
# print('octaves is 6')
# print(octaves)
# print(frame_i)
x = frame_i
print('x')
print(x)
P0x = 0
P0y = 0
P1x = 2675
# print(P1x)
P1y = octaves[1]
P0 = (P0x, P0y)
P3 = (P1x, P1y)
P1 = (octaves[2], octaves[3])
P2 = (octaves[5], octaves[4])
b_pts = [P0, P1, P2, P3]
print(b_pts)
xvals1, yvals1 = bezier_curve(b_pts, nTimes=P1x)
print('print(yvals[2675-x-1])')
print(yvals1[2675-x-1])
octaves_new = round_to(yvals1[(2675-x-1)], 1)
print('octn:')
print(octaves_new)
else: print('fail')
if len(iterations) == 6:
# print('iterations is 6')
# print(iterations)
x = frame_i
print('x')
print(x)
P0x = 0
P0y = 0
P1x = 2675
# print(P1x)
P1y = iterations[1]
# print(P1y)
P0 = (P0x, P0y)
P3 = (P1x, P1y)
P1 = (iterations[2], iterations[3])
P2 = (iterations[5], iterations[4])
b_pts = [P0, P1, P2, P3]
print(b_pts)
xvals, yvals = bezier_curve(b_pts, nTimes=P1x)
print('print(yvals[2675-x-1])')
print(yvals[2675-x-1])
# print(P2)
# print(P3)
iterations_new = round_to(yvals[(2675-x-1)], 1)
print('itrn:')
print(iterations_new)
else: print('fail')
#Choosing between normal dreaming, and guided dreaming
if guide_image is None:
frame = deepdream(net, frame, image_type=image_type, verbose=verbose, iter_n = iterations_new, step_size = stepsize, octave_n = octaves_new, octave_scale = octave_scale, jitter=jitter, end = endparam)
else:
guide = np.float32(PIL.Image.open(input + '/resize/tr_%01d.%s' % (frame_i, image_type) ))
print('Setting up Guide with selected image')
guide_name = input + '/resize/tr_%01d.%s' % (frame_i, image_type)
print(guide_name)
guide_features = np.float32(PIL.Image.open(input + '/resize/tr_%01d.%s' % (frame_i, image_type) ), end=endparam)
frame = deepdream_guided(net, frame, image_type=image_type, verbose=verbose, iter_n = iterations_new, step_size = stepsize, octave_n = octaves_new, octave_scale = octave_scale, jitter=jitter, end = endparam, objective_fn=objective_guide, guide_features=guide_features,)
saveframe = output + "/%08d.%s" % (frame_i, image_type)
later = time.time()
difference = int(later - now)
# Stats (stolen + adapted from Samim: https://github.com/samim23/DeepDreamAnim/blob/master/dreamer.py)
print '***************************************'
print 'Saving Image As: ' + saveframe
print 'Frame ' + str(i) + ' of ' + str(nrframes-1)
print 'Frame Time: ' + str(difference) + 's'
timeleft = difference * (nrframes - frame_i)
m, s = divmod(timeleft, 60)
h, m = divmod(m, 60)
print 'Estimated Total Time Remaining: ' + str(timeleft) + 's (' + "%d:%02d:%02d" % (h, m, s) + ')'
print '***************************************'
PIL.Image.fromarray(np.uint8(frame)).save(saveframe)
newframe = input + "/%08d.%s" % (frame_i,image_type)
if blend == 0:
newimg = PIL.Image.open(newframe)
if preview is not 0:
newimg = resizePicture(newframe,preview)
frame = newimg
else:
if blend == 'random':
blendval=randint(5,10)/10.
elif blend == 'loop':
if blend_at > 1 - blend_step: blend_forward = False
elif blend_at <= 0.5: blend_forward = True
if blend_forward: blend_at += blend_step
else: blend_at -= blend_step
blendval = blend_at
else: blendval = float(blend)
frame = morphPicture(saveframe,newframe,blendval,preview)
frame = np.float32(frame)
now = time.time()
frame_i += 1
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='Dreaming in videos.')
parser.add_argument(
'-i','--input',
help='Input directory where extracted frames are stored',
required=True)
parser.add_argument(
'-o','--output',
help='Output directory where processed frames are to be stored',
required=True)
parser.add_argument(
'-it','--image_type',
help='Specify whether jpg or png ',
required=True)
parser.add_argument(
"--gpu",
default= None,
help="Switch for gpu computation."
) #int can chose index of gpu, if there are multiple gpu's to chose from
parser.add_argument(
'-t', '--model_path',
dest='model_path',
default='../caffe/models/bvlc_googlenet/',
help='Model directory to use')
parser.add_argument(
'-m', '--model_name',
dest='model_name',
default='bvlc_googlenet.caffemodel',
help='Caffe Model name to use')
parser.add_argument(
'-p','--preview',
type=int,
required=False,
help='Preview image width. Default: 0')
parser.add_argument(
'-oct','--octaves',
nargs=6,
type=int,
required=False,
help='Octaves. Default: 4')
parser.add_argument(
'-octs','--octavescale',
type=float,
required=False,
help='Octave Scale. Default: 1.4',)
parser.add_argument(
'-itr','--iterations',
nargs=6,
type=int,
required=False,
help='Iterations. Default: 10')
parser.add_argument(
'-j','--jitter',
type=int,
required=False,
help='Jitter. Default: 32')
parser.add_argument(
'-z','--zoom',
type=int,
required=False,
help='Zoom in Amount. Default: 1')
parser.add_argument(
'-s','--stepsize',
type=float,
required=False,
help='Step Size. Default: 1.5')
parser.add_argument(
'-b','--blend',
type=str,
required=False,
help='Blend Amount. Default: "0.5" (constant), or "loop" (0.5-1.0), or "random"')
parser.add_argument(
'-l','--layers',
nargs="+",
type=str,
required=False,
help='Array of Layers to loop through. Default: [customloop] \
- or choose ie [inception_4c/output] for that single layer')
parser.add_argument(
'-v', '--verbose',
type=int,
required=False,
help="verbosity [0-3]")
parser.add_argument(
'-gi', '--guide_image',
required=False,
help="path to guide image")
parser.add_argument(
'-sf', '--start_frame',
type=int,
required=False,
help="starting frame nr")
parser.add_argument(
'-ef', '--end_frame',
type=int,
required=False,
help="end frame nr")
args = parser.parse_args()
if not args.model_path[-1] == '/':
args.model_path = args.model_path + '/'
if not os.path.exists(args.model_path):
print("Model directory not found")
print("Please set the model_path to a correct caffe model directory")
sys.exit(0)
model = os.path.join(args.model_path, args.model_name)
if not os.path.exists(model):
print("Model not found")
print("Please set the model_name to a correct caffe model")
print("or download one with ./caffe_dir/scripts/download_model_binary.py caffe_dir/models/bvlc_googlenet")
sys.exit(0)
main(args.input, args.output, args.image_type, args.gpu, args.model_path, args.model_name, args.preview, args.octaves, args.octavescale, args.iterations, args.jitter, args.zoom, args.stepsize, args.blend, args.layers, args.guide_image, args.start_frame, args.end_frame, args.verbose)

New code; should be working. Running the first test.
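One generalization that might be worth folding in (a suggestion, assuming the definitions in the script above): the curve's x extent is hard-coded to 2675 frames for this particular clip, but deriving it from nrframes would let the easing scale to any video:

# hypothetical refactor of the octaves block, names as in the script above
last_frame = nrframes - 1                  # instead of the hard-coded 2675
P0 = (0, 0)
P3 = (last_frame, octaves[1])
P1 = (octaves[2], octaves[3])
P2 = (octaves[4], octaves[5])              # index order as in the earlier paste
xvals, yvals = bezier_curve([P0, P1, P2, P3], nTimes=last_frame)
octaves_new = round_to(yvals[last_frame - frame_i - 1], 1)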
https://www.youtube.com/watch?v=Wt8mWiKO0UU Sorry, that is the best compression I can get. The source video was 15 GB uncompressed. Sigh, YouTube.
@jeremiahlamontagne nice video!
https://www.reddit.com/r/deepdream/comments/3fvf70/halleys_fractal_final_cubic_bezier_tests/ Here is all the work I have done. It works well, but it is too sloppy and poorly implemented for mass consumption. At this point I am too busy to clean it up and change it around to make it easier and more streamlined for people, and I don't know yet whether it offers enough of a benefit to justify that work. It may be better to animate octaves and iterations with a looping function like blend. I will post the latest version of my source in a bit.
Oh, also, I forgot to mention: I came across this fork of the deepdream ipy notebook, and it looks like it may have some useful features that would be fun to try in video. There may be more bang for the buck in merging those two projects :)
Excellent. It seems a continuation of the work by Kyle McDonald, which I've also used in my own work: https://github.com/kylemcdonald/deepdream/blob/master/dream.ipynb
First, thank you for this useful tool! Just a request: to be able to have beginning and ending parameters for the following arguments:
[-oct OCTAVES] [-octs OCTAVESCALE] [-itr ITERATIONS] [-j JITTER] [-z ZOOM] [-s STEPSIZE] [-b BLEND]
A linear transition would be excellent; a transition defined by a cubic bezier would be even better. Thank you for your consideration!