# Posts tagged ‘Algorithms’

I’ve just started watching the Gilbert Strang MIT OpenCourseWare lectures on iTunes, and highly recommend them – after watching, a light bulb went off in my head. Daniel Pook-Kolb’s blendshape system is very, very impressive – but I didn’t, and still don’t, understand it fully.

Thinking out loud here, I thought it might essentially be a matter of finding the ‘unknowns’ in a linear combination in an n-dimensional space, with the corrective as the combination of the shapes. For example, the unknowns of:

x[2,3] + y[4,5] = [6,7] would be [x, y] = [-1, 2] (it’s hard to show as I’m not using LaTeX at the moment)

Before any corrective is used the ‘unknowns’ are [0,0]; this would be synonymous with adding two blendshapes together without any corrective.
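As a minimal sketch of that example (plain Python with a hypothetical helper name, solving the 2×2 case by Cramer’s rule – not how any particular blendshape system actually does it):

```python
def solve_2x2(a, b, target):
    """Solve x*a + y*b = target for the scalars x and y (Cramer's rule).
    a, b, target are 2D vectors given as (x, y) tuples."""
    det = a[0] * b[1] - a[1] * b[0]
    if det == 0:
        raise ValueError("shape vectors are parallel - no unique solution")
    x = (target[0] * b[1] - target[1] * b[0]) / det
    y = (a[0] * target[1] - a[1] * target[0]) / det
    return x, y

# The example from the text: x[2,3] + y[4,5] = [6,7]
print(solve_2x2((2, 3), (4, 5), (6, 7)))  # -> (-1.0, 2.0)
```

Plugging the result back in: -1·[2,3] + 2·[4,5] = [-2+8, -3+10] = [6,7].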

I tend to think in small chunks – I break down an idea, work out each part and then, hopefully, put it back together. I’m trying to use this approach with dynamics – I’m looking into a simple system to handle a variety of situations. Currently I’m thinking of simple spherical detection. This method uses just a radius from a point – it’s a simple system, but it might be scalable to more complexity.
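Spherical detection really is that simple – two spheres overlap when the distance between their centres is no more than the sum of their radii. A quick sketch:

```python
import math

def spheres_intersect(center_a, radius_a, center_b, radius_b):
    """Two spheres overlap when the distance between their centres
    is no more than the sum of their radii."""
    dist = math.dist(center_a, center_b)
    return dist <= radius_a + radius_b

print(spheres_intersect((0, 0, 0), 1.0, (1.5, 0, 0), 1.0))  # True
print(spheres_intersect((0, 0, 0), 1.0, (3.0, 0, 0), 1.0))  # False
```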

Dynamics I find very hard to get to grips with; I have to take it very, very slowly. Just understanding derivatives is hard, as a derivative is itself a function of the equation. It’s also very fragile as a system – tiny tweaks make big changes, especially in complex systems. My aim is to build simple systems that can be ‘bolted’ together right across the board, from dynamics to transformation stuff. It’s sort of the middle man of rigging: I’m not the string or the parts of the puppet, I’m the knots that tie the string to the parts.

I’ve been looking into curves for quite a while now, along with waves and dynamics, eventually hoping to combine all three. Along with these I’ve been trying to understand the rules of rigging, especially layered and hierarchical rigging. A lot of riggers I know don’t understand the idea of ‘layers’ in a rig. In simple terms it’s like a layer in Photoshop, but in rigs it frees up a lot of issues if you keep aspects of a rig to a layer – so for example your base skeleton could be your first layer, then basic setup, then twist, then deformation. So it’s more like layered relationships – deformation is a good example. If we can modularise deformation in a simple system we can use it all over the place.

Major deformations like skin simulation are outside of this, but twist, stretch, compression and bulge could be driven by one system. If we treat this system as a curve, the issue that arises is that it’s not uniform, so control objects along it would bunch up. We need:

• A simplified curve, possibly using Horner’s rule (for speed)
• Uniformity across the curve (important if the tangent vectors are straight)
• The ability to overshoot the curve at both ends* (-0.5, 1.5)

*Why do we need this? Basically to allow the length between the points along the curve to be maintained – for example, if we don’t want the curve to compress, the points along it need to overshoot the curve. This can be pretty simply achieved using a subdivision method. To keep a value at the same length, i.e. a length of 10 along the curve, all we do is divide this length by the curve’s length, e.g. 10/100 = 0.1, 10/200 = 0.05. The problem comes in if the length of the curve is shorter than the defined length: the ‘bucket’ in which t resides wouldn’t exist, so you need to do some fiddling around. I’ll post some links accompanying this post.
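The divide-by-curve-length idea, and the missing-bucket edge case, can be sketched in a few lines (hypothetical function names; the knot values are just an illustration):

```python
def length_to_param(fixed_length, curve_length):
    # 10/100 -> 0.1, 10/200 -> 0.05; a curve shorter than the fixed
    # length gives a parameter past 1.0, i.e. an overshoot.
    return fixed_length / curve_length

def find_bucket(t, knots):
    # Return the index of the segment ('bucket') t falls in, or None
    # when t overshoots the knot range - the case needing fiddling.
    for i in range(len(knots) - 1):
        if knots[i] <= t <= knots[i + 1]:
            return i
    return None

knots = [0.0, 0.25, 0.5, 0.75, 1.0]
print(length_to_param(10, 100))                    # 0.1
print(find_bucket(0.1, knots))                     # 0
print(find_bucket(length_to_param(10, 8), knots))  # None - curve too short
```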

I posted this at CGTalk:

Has anyone thought of using VCK with a transposed arc length method to get rotations about the elbow? Basically VCK (‘vector coupled IK’) is an IK chain driven by the vector magnitude positioning the goal, whilst having standard rotation for general FK. It basically allows you to break the IK and add nice arcs to the animation. The problem I see with it is you can’t drive rotation about the elbow. But if we use a transposed arc length method, which would drive on top of the general rotation and the magnitude, we could get rotation about the elbow. This could even have its own FK controller driving additively over the top, I think – you need just a few variables, such as bone length, as you drive it as an additive to the main control, i.e. a layer over the top.

I now must go to bed.

Probably the most important aspect of rigging – in fact what we can sum rigging up as – is relativity: everything relies on it. The mesh is relative to a skin, the skin to the bones, and the bones to a rig. Even at the finest level the controls of the rig are relative to other controls – they exist in a space of their own but are relative to something else, even if that is the world.

Rigging is relativity and reference – it’s a bold statement, but it’s the basis for everything needed. Every time you parent or constrain an object to another you set its relativity and its reference. The key to rigging is a system where both don’t fight but work hand in hand with one another. A good example is the spine – the animator wants control of the hip, chest and head, but also wants control of the torso (everything) – they also don’t want counter-rotation, and they want the ability to hold a pose.

It’s a lot of systems, but if we boil it down to relativity and reference it’s relatively (pardon the pun) straightforward. The hips are parented to the torso – so we have defined a reference (the torso) and a relativity (torso–hip) to work in. The chest is parented to the torso; the same applies here. But the head is different: the neck is really a part of the spine and really moves with the chest, but the problem comes in that we want it to move with the head when needed.

So we define two references – we set the head’s position relative to the chest, but its rotation relative to the torso. This means when we rotate the chest the head moves with it but, crucially, stays pointing at a target. Additionally, if we move the head the neck will follow – this is via an IK system or look-at/pole vector – simple stuff.
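A toy sketch of the two-reference idea, in 2D with angles in degrees (every name here is made up for illustration; a real rig would do this with constraint spaces and matrices):

```python
def head_transform(chest_pos, head_offset, torso_rot, head_local_rot):
    """The head takes its POSITION from the chest space but its
    ROTATION from the torso space, so moving the chest carries the
    head along without re-aiming it."""
    position = (chest_pos[0] + head_offset[0],
                chest_pos[1] + head_offset[1])
    rotation = torso_rot + head_local_rot  # additive layers, degrees
    return position, rotation

# Move the chest up: the head translates with it, but because its
# rotation reference is the torso (still at 0) its aim is unchanged.
print(head_transform(chest_pos=(0, 7), head_offset=(0, 2),
                     torso_rot=0, head_local_rot=0))  # ((0, 9), 0)
```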

So when building a rig, really understand what’s relative to what, and understand the methods and math of space.

Is it possible to get the length of a curve without walking along it? Standard methods essentially split it into chunks and measure their total – the more chunks, the better the accuracy. I’ll look into arc length and least squares methods.
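The walking method is easy to sketch – sample the curve, sum the chord lengths, and watch the accuracy improve with more chunks (here against a quarter circle, since its exact length π/2 is known; the function names are mine):

```python
import math

def approx_arc_length(curve_fn, samples):
    """Approximate arc length by walking the curve: sample points,
    then sum the straight-line chord distances between neighbours."""
    points = [curve_fn(i / samples) for i in range(samples + 1)]
    return sum(math.dist(points[i], points[i + 1])
               for i in range(samples))

# Quarter circle of radius 1; the exact length is pi/2 ~ 1.5708.
quarter = lambda t: (math.cos(t * math.pi / 2), math.sin(t * math.pi / 2))
print(approx_arc_length(quarter, 4))    # ~1.5607 - rough
print(approx_arc_length(quarter, 100))  # ~1.5708 - more chunks, better
```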

So currently I’m looking into a scripting language so I can develop my math stuff outside of the software and keep it as data-driven as possible – currently it’s between JavaScript and Python. All I really care about is class definitions and handling functions.

I’m also looking into Fourier synthesis, harmonic oscillation and analog synthesis, as a friend and I have a plan for an interesting tool! 🙂 Should be fun. I’m also looking into oscillation and phase shift as a tool for animators – something along the lines of Pixar’s ‘Wiggly Splines’ paper. I understand the basics of it, but it gets into territory dealing with basis functions and new curve generalisations, which in theory is possible, but for Max I’d either have to write SDK code or something else.

As to Fourier synthesis, it’s really very, very powerful, as it allows parameterisation of the waveform – such as quantisation, rectification and full-wave rectification – with relative ease. Phase shift, frequency, amplitude and magnitude (the terms of cosine and sine) can be handled in a deterministic fashion. Functions such as high-pass and reverb are a little more complex, but if in effect you’re controlling just the terms of the synthesis it might be easier.

High/low pass deal with frequency cut-off using two variables: the frequency and the attenuation (a range of Q1–Q10). I’m still a little confused as to how to affect the waveform as a process of the Fourier terms – but I’m looking into it.
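The “controlling just the terms” idea can be sketched as plain additive synthesis – each harmonic is a (frequency, amplitude, phase) triple, and operations like full-wave rectification act on the summed result (names and the square-wave recipe are mine, not from any particular synth):

```python
import math

def additive_wave(t, harmonics):
    """Sum sine harmonics; each is a (frequency, amplitude, phase)
    triple, so frequency/amplitude/phase are handled term by term."""
    return sum(amp * math.sin(2 * math.pi * freq * t + phase)
               for freq, amp, phase in harmonics)

def full_rectify(value):
    # Full-wave rectification: flip the negative half of the wave.
    return abs(value)

# Classic square-wave approximation: odd harmonics at 1/n amplitude.
square_ish = [(n, 1.0 / n, 0.0) for n in (1, 3, 5, 7)]
samples = [additive_wave(i / 100, square_ish) for i in range(100)]
rectified = [full_rectify(s) for s in samples]
```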

I’ve included what would be needed for a non-breakable system for the animators; it includes (in green) some 10 additional targets needed. In total there are some 30 targets for 6 base shapes. What’s key with a combination network system is:

• How many connections a corrective has.
• And how these are connected.

If we split the system down into essentially ‘stacks’ then it can become easier to understand. The first stack is of order 1 – this is the base stack – then 2, then 3, and so on; each corrective in its appropriate stack has the same number of inputs as the order of the stack. What becomes interesting is the connections: in most cases one stack drives the next, but we also have unique connections, such as a 2-stack corrective (6 + 9) driving a 5th-stack corrective, jumping 3 stacks.

What’s also important in this system is compiling the network when you change a corrective in a stack. On each stack all you need to be doing is generating a corrective and the inputs it requires – compiling this network would go through the connections and build the correctives. What the user sees/modifies is the final corrective – which is in fact a mathematically ‘messed up’ target working behind the scenes. Whether this is a realtime generated thing or a procedural ‘compiling’ step depends. A later step then takes your ‘aesthetic’ target, looks at its inputs and generates the real corrective.

The network is one-directional in terms of process – it’s a left-to-right system, with all the correctives looking only at their inputs, with a simple sorting method.
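A tiny sketch of the stacks-plus-sorting idea (the corrective names and inputs are invented; a stack’s order is just its input count, and sorting by order gives the left-to-right build sequence because lower stacks always feed higher ones):

```python
# Each corrective lists the shapes/correctives it combines; its stack
# order is simply how many inputs it has, so a 2-stack corrective can
# feed a 5-stack corrective several stacks to the right.
network = {
    "A": {"inputs": ["s1", "s2"]},                   # order 2
    "B": {"inputs": ["s3", "s4", "A"]},              # order 3, reads A
    "C": {"inputs": ["s1", "s2", "s3", "A", "B"]},   # order 5
}

def evaluation_order(network):
    """Left-to-right: sort correctives by stack order, so every
    corrective is compiled after the correctives it depends on."""
    return sorted(network, key=lambda name: len(network[name]["inputs"]))

print(evaluation_order(network))  # ['A', 'B', 'C']
```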

OK, so I’m looking into more curve math – yes, I’m nuts; no, I’m not going to show you. Most of it is over my head but I’m slowly understanding it, like NURBS, where I finally worked out the algorithm! Which is kinda cool. But now for the life of me I can’t get my head around the recursive ‘N’ function of B-splines, i.e. the basis function Ni,p. So:

You have p = #(1,2,3,4,5), which are your control points, and t, which is basically a segment in this array – in fact either non-uniform or uniform, e.g. t = #(0, .25, .5, .75, 1). Now what I don’t get is Ni,p, because basically if t is within one of your segments – e.g. if t = .12 then it’s within segment [0, .25], right – then it equals 1, and if not then 0. But it’s always 1 unless you’re at p[0] or p[p.count], when it equals 0. It’s basically a step function but it’s always one – I dunno, there must be more to it!

Ni,0(u) = if u[i] <= u < u[i+1] then 1 else 0 – so if u (t) is within its segment it equals 1, else 0.

Edit: I think I get it.
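For anyone else stuck here, the missing piece is the recursion: a single Ni,0 is 1 only on its own span (for any given u exactly one of them fires), and the higher degrees blend neighbouring spans together. A sketch of the Cox–de Boor recursion, using the same uniform knots as the text:

```python
def basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis N_{i,p}(u).
    Degree 0 is the step function from the text: 1 inside the i-th
    knot span, 0 everywhere else - so for a given u only ONE N_{i,0}
    is 1, and the recursion blends neighbouring spans together."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * basis(i, p - 1, u, knots))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * basis(i + 1, p - 1, u, knots))
    return left + right

knots = [0, 0.25, 0.5, 0.75, 1.0]
# Degree 0 at u = .12: only the first span fires.
print([basis(i, 0, 0.12, knots) for i in range(4)])  # [1.0, 0.0, 0.0, 0.0]
# Degree 1 ramps up across the span instead of jumping to 1.
print(round(basis(0, 1, 0.12, knots), 2))            # 0.48
```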

So I’m back in the UK spending Christmas with family, but I’ve had some time to think about the relationship between datasets and n-spaces. An n-space of 2 is pretty easy to work out, as a dataset existing in that space can be at (1), (2) or (1,2), and has a direct relationship to the vector existing in the space the datasets reside in. What gets tricky is when the dimensions get bigger:

For instance, a 3-space consists of 3 weights, and datasets exist at 7 corner positions of value 1 – [1], [2], [3], [1,2], [1,3], [2,3], [1,2,3] – plus infinite in-between positions. What gets tricky is this: if a dataset exists at [1,2,3] (a value of 1.0 in each dimension) and there’s a vector in this space of [.5, .5, 0], how does the dataset know about the 0?

Well, it shouldn’t – in fact this is what I’m doing with the dataset: if a dataset is at [1,1,0] then really it’s at [1,2], with a value along each of these dimensions. So this is theoretically what I’m doing with the n-space – essentially it’s dynamic. If I have 3 weights but only 2 are greater than 0, then I only use those in my n-space, like so:

[1,0,1] = [1,3]

then I’ll do a cross-check against these. Don’t know if this is correct, but I’m getting there slowly.
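The dynamic n-space collapse is a one-liner in practice – keep only the dimensions whose weight is above zero (a hypothetical helper, using 1-based indices as in the text):

```python
def active_dimensions(weights):
    """Collapse an n-space weight vector down to only the dimensions
    whose weight is greater than zero (1-based, as in the text)."""
    return [i + 1 for i, w in enumerate(weights) if w > 0]

print(active_dimensions([1, 0, 1]))      # [1, 3] - the example above
print(active_dimensions([0.5, 0.5, 0]))  # [1, 2]
```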