July 7, 2013
Back from a quick break and back to matrix math. As I showed with a 2×2 matrix, we get its determinant by multiplying the first element of the first row by the last element of the last row, then subtracting from this the first element of the last row multiplied by the last element of the first row.
This, if you will, is our ‘atomic’ function that we’ll always use to find the determinant of a square matrix. Crucially, when we deal with matrices larger than 2×2, we recursively break the matrix down into smaller matrices until we can apply our atomic function to each piece. Finally, we combine the pieces using a similar approach to the cross product: successive +, -, +.. signs. Let’s get started. Given a 3×3 matrix:
1 5 3
2 4 7
4 6 2
First, we use the first row to break the matrix into 3 successive 2×2 matrices – let’s create a 2×2 matrix for the first element, 1, by discarding the row and column it sits in:
4 7
6 2
Its determinant: -34 = (4*2) – (6*7). The second matrix, for the second element of the first row:
2 7
4 2
Its determinant: -24 = (2*2) – (4*7). And finally the third matrix, for the last element of the first row:
2 4
4 6
Its determinant: -4 = (2*6) – (4*4). Back to our initial row 1 5 3 – we multiply each element by the determinant we found for it above, giving us:
(1 * -34) = -34
(5 * -24) = -120
(3 * -4) = -12
Finally, using our sign rule (+ – + – …), we add/subtract the parts: -34 – (-120) + (-12), giving us 74, the determinant of the 3×3 matrix.
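The whole recursion above can be sketched in a few lines of Python. This is a minimal sketch; `det` and `minor` are names I’ve picked for illustration, not from the post:

```python
def minor(m, col):
    # Drop the first row and the given column to build the sub-matrix.
    return [row[:col] + row[col + 1:] for row in m[1:]]

def det(m):
    # Atomic 2x2 case: (a*d) - (c*b).
    if len(m) == 2:
        return m[0][0] * m[1][1] - m[1][0] * m[0][1]
    # Expand along the first row with the alternating +, -, + sign rule.
    total = 0
    for col, value in enumerate(m[0]):
        total += (-1) ** col * value * det(minor(m, col))
    return total

print(det([[1, 5, 3], [2, 4, 7], [4, 6, 2]]))  # → 74
```

Running it on the worked example reproduces the three minors (-34, -24, -4) and the final determinant of 74.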
June 9, 2013
Ok so we’ve covered matrix transpose and multiplication; we’re now going to get into determinants. I’ll spread this over multiple posts as we’ll eventually be dealing with recursive functions. Determinants are a crucial glue in matrix math which allow you to find the inverse, which is akin to the reciprocal.
With a 2 x 2 matrix, the determinant is a single function – once we deal with 3 x 3 and greater sized matrices we essentially recursively break them down to 2 x 2 and apply the base function. For a 2 x 2 matrix:
A B
C D
All we need to do is multiply the first element of the first row by the last element of the last row (A*D), and the first element of the last row by the last element of the first row (C*B). Then subtract the second from the first, (AD) – (CB), to give us our determinant. For the matrix:
1 2
4 1
(1*1) = 1
(4*2) = 8
(1-8) = -7
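The arithmetic above fits in a one-line function (a sketch; `det2` is my name for it):

```python
def det2(m):
    # (A*D) - (C*B)
    return m[0][0] * m[1][1] - m[1][0] * m[0][1]

print(det2([[1, 2], [4, 1]]))  # → -7
```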
We’ll dig into 3 x 3 matrices next and then onto recursive methods…
May 25, 2013
I’ve started so I’ll finish – we’re going to dig into matrix multiplication; once I’ve covered this we’ll dive into determinants and back to matrix inversion. There’s a reason for this which we’ll understand along the way – I’ll throw in the identity matrix and mean, which are great for validation and dimensional understanding. Onto matrix multiplication then:
With matrix multiplication, we’re on face value doing a sort of addition but internally multiplication – you’ll see this more when working with the inverse. Crucially there are two things that matter in matrix multiplication: firstly the multiplication rule, which is dependent on the second thing – a matrix can ONLY be multiplied by another matrix if it has the same amount of columns as the other has rows. Given two matrices:
A B C
D E F
and:
X I
Y K
Z J
We start by taking the first value of the first row of the first matrix (A) and multiplying it by the first value of the first column of the second matrix, like so: (A * X). Now instead of following the row pattern, e.g. (B * I), we move down the column of the second matrix, so (B * Y). We can see now why the second matrix needs the same amount of rows as the first has columns. Colour coding the first row of the first matrix and the first column of the second, we can see the rule in action:
A B C     X I
D E F     Y K
          Z J
We take the sum of these multiplications and treat it as the first value of the first row of the new matrix:
(AX) + (BY) + (CZ)
The entire matrix looking like so:
(AX) + (BY) + (CZ) , (AI) + (BK) + (CJ)
(DX) + (EY) + (FZ) , (DI) + (EK) + (FJ)
Crucially we move along the first matrix’s rows as we do with the second matrix’s columns. Pseudo code could look something like this:
for i in matrix A's rows:
    for j in matrix B's columns:
        C[i][j] = sum over k of (A[i][k] * B[k][j])
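The pseudo code above translates to runnable Python like this (a sketch; `matmul` is my own name):

```python
def matmul(a, b):
    # Each result cell is the sum of row-of-a times column-of-b products.
    rows, inner, cols = len(a), len(b), len(b[0])
    assert len(a[0]) == inner, "a's columns must equal b's rows"
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

a = [[1, 2, 3],
     [4, 5, 6]]
b = [[7, 8],
     [9, 10],
     [11, 12]]
print(matmul(a, b))  # → [[58, 64], [139, 154]]
```

Note the result is 2 x 2: the outer dimensions (rows of the first, columns of the second) decide the shape of the product.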
Now if you wanted to multiply a matrix by itself that wasn’t square, i.e. it didn’t have the same amount of columns as rows, this is where you’d create a transposed version of the matrix and multiply it by that. E.g. to multiply:
A B C
D E F
by itself, you’d create a new matrix transposing the original like so:
A D
B E
C F
And multiply these two using the rule stated above:
(AA) + (BB) + (CC) , (AD) + (BE) + (CF)
(DA) + (EB) + (FC) , (DD) + (EE) + (FF)
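Multiplying a non-square matrix by its own transpose, sketched in Python (compact comprehension versions of the same transpose and multiply ideas; all names are mine):

```python
def transpose(m):
    # zip(*m) walks the columns of m, turning them into rows.
    return [list(col) for col in zip(*m)]

def matmul(a, b):
    # Dot each row of a with each column of b.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

m = [[1, 2, 3],
     [4, 5, 6]]
print(matmul(m, transpose(m)))  # → [[14, 32], [32, 77]]
```

The 2 x 3 matrix times its 3 x 2 transpose gives a square, symmetric 2 x 2 result.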
Next we’ll break into determinants, for which we’ll try to formulate a recursive function for any n x n square matrix.
May 21, 2013
I’m going to cover broad topics in vectors, matrices and the like, so it may appear that I’m skipping specifics at times. But hopefully I’ll give a good enough understanding and be able to add specifics in due course. So onto matrices:
A matrix is an object made up of rows and columns – an n x m matrix is a row-by-column matrix of n rows and m columns.
Another way to look at it is that each column represents a dimension. If we have, say, a 3 x 3 matrix, we can represent an object’s orientation, with each row representing an axis:
X0 Y0 Z0
X1 Y1 Z1
X2 Y2 Z2
In this example each row is a vector (direction) with an x, y and z component, and is therefore three-dimensional. This vector is also known as a ‘row vector’; similarly, if we take the first component of each row we can call that a ‘column vector’. Matrices have no known bounds – there’s no limit to their size, which makes them important in n-dimensional workflows (stuff that I dig).
There are 4 main functions of matrices – transpose, multiplication, determinant and inverse. These 4 are the grease that allows powerful manipulation of matrices. We’ll throw mean (average) in there too because it’s important in data analysis.
We’ll start with something relatively simple but that’ll make a difference when we get to multiplication. All that transpose does is swap rows for columns and vice versa. So a matrix that looked like this:
1 2 3
4 5 6
becomes this after transposing:
1 4
2 5
3 6
Doing this in pseudo code we can do something like this:
for i in rows:
    for j in columns:
        T[j][i] = M[i][j]
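The same loop in runnable Python (a sketch; `transpose` is a name I’ve chosen):

```python
def transpose(m):
    rows, cols = len(m), len(m[0])
    # Build a new cols-by-rows matrix where t[j][i] = m[i][j].
    t = [[0] * rows for _ in range(cols)]
    for i in range(rows):
        for j in range(cols):
            t[j][i] = m[i][j]
    return t

print(transpose([[1, 2, 3], [4, 5, 6]]))  # → [[1, 4], [2, 5], [3, 6]]
```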
So we’ve transposed the matrix – why is this important? Well, to multiply a matrix with another matrix it needs to have the same amount of columns as the other has rows! We’ll discuss square and identity matrices next…
May 20, 2013
So the cross product, along with the dot product, is the bread and butter of vector math – but I’d never really known what’s happening internally. Essentially the cross product of two vectors (two directions) creates a new vector that’s orthogonal to them, meaning it’s 90 degrees perpendicular to both of the original vectors.
So if we have two vectors [1,0,0] and [0,1,0], the cross product will be [0,0,1] – likewise if we swap these vectors the cross product will be [0,0,-1]. Even though we’re doing multiplication internally, we’re also doing subtraction, and vector order matters here. We can get the cross product via cofactor expansion, just like finding a determinant – which I’ll discuss in matrix math:
If we have two vectors, [1, 0, 0] and [0, 1, 0] we can put them into a 3 by 3 matrix, with an imaginary row at the top:
I J K
1 0 0
0 1 0
We’ll get the determinant for each part of the imaginary row. Starting with the I, we’ll disregard the row and column it crosses and keep the rest. So for I it’ll become:
0 0
1 0
Next we’ll multiply the first value of the first row by the last value of the last row – in this case 0 * 0 – and subtract from it the first value of the last row multiplied by the last value of the first row:
I = 1 * ((0 * 0) – (1 * 0)) = 0
This is the first part of a new vector – Lets see how this looks for the whole thing: cross ([1, 0, 0], [0,1,0]) =
I = 1 * ((0 * 0) – (1 * 0)) = 0
J = 1 * ((1 * 0) – (0 * 0)) = 0
K = 1 * ((1 * 1) – (0 * 0)) = 1
We can see that the last part makes a difference: we’re doing (1 * 1) – (0 * 0), so 1 – 0. If we’d swapped the initial vectors around we’d have (0 * 0) – (1 * 1) = -1. Next up we’ll break into matrices..
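The full expansion, with the alternating-sign rule applied to the middle (J) component, can be sketched in Python (`cross` is my name for it):

```python
def cross(a, b):
    # Cofactor expansion along the imaginary I J K row; note the
    # sign flip on the J component (the +, -, + rule).
    return [a[1] * b[2] - a[2] * b[1],
            -(a[0] * b[2] - a[2] * b[0]),
            a[0] * b[1] - a[1] * b[0]]

print(cross([1, 0, 0], [0, 1, 0]))  # → [0, 0, 1]
print(cross([0, 1, 0], [1, 0, 0]))  # → [0, 0, -1]
```

Swapping the argument order flips the sign of the result, as the post describes.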
May 20, 2013
A lot of the maths I use tends to be abstracted away, either in libraries I use or inside the application. I’m going to go back to basics, starting with vector maths and moving onto matrices – these, in my opinion, are the backbone of doing what we do. I’ll cover things from the ground up and then go into some more complex areas: determinants, inverse multiplication, decomposition etc. I’ll be learning a bunch of this stuff along the way. Let’s get started:
So a vector is basically a direction from the origin. [1, 2, 3] means we have a point that’s moved 1 in the X direction, 2 in the Y and 3 in the Z direction.
Vectors can be added together simply by adding the parts of each together. [1, 2, 3] + [4, 5, 6] = [(1+4), (2+5), (3+6)]. Subtraction follows a similar process.
Vectors can be multiplied against a scalar (float) value by multiplying each part by it: [1, 2, 3] * 5 = [(1*5), (2*5), (3*5)].
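Both element-wise rules look like this in Python (a sketch; the function names are mine):

```python
def vec_add(a, b):
    # Add the matching parts of each vector.
    return [x + y for x, y in zip(a, b)]

def vec_scale(v, s):
    # Multiply every part by the scalar.
    return [x * s for x in v]

print(vec_add([1, 2, 3], [4, 5, 6]))  # → [5, 7, 9]
print(vec_scale([1, 2, 3], 5))        # → [5, 10, 15]
```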
We can get the length of a vector by first squaring each part, then summing these parts, and finally taking the square root of the total: len([1, 2, 3]) = sqrt((1^2) + (2^2) + (3^2)).
Using this length we can normalize the vector. Normalizing a vector keeps its direction, but its length becomes 1.0. This is important in finding angles, unit vectors and matrix scale. To do this we first get the vector’s length, and then divide each part of the vector by it:
normal([1, 2, 3]) =
length = sqrt((1^2) + (2^2) + (3^2))
normal = [1/length, 2/length, 3/length]
The dot product of two 3d vectors (x, y, z) basically returns the magnitude of one vector projected onto another. If we have two vectors [1, 0, 0] and [1, 1, 0], when we project the latter onto the former, the length along the former is the dot product. To get the dot product of two vectors we simply multiply the parts together and sum them:
[1, 2, 3] . [4, 5, 6] = (1*4) + (2 * 5) + (3 * 6)
We can use the dot product to get the angle between two vectors too. If we first normalize each vector, we can get the angle by taking the inverse cos (acos) of their dot product. This returns radians, which we can convert to degrees by multiplying by (180 / pi):
acos(norm([1, 2, 3]) . norm([4, 5, 6])) * (180/pi)
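All of the vector operations above, sketched together in Python (function names are my own):

```python
import math

def length(v):
    # Square each part, sum, then square root.
    return math.sqrt(sum(x * x for x in v))

def normalize(v):
    # Divide each part by the vector's length.
    l = length(v)
    return [x / l for x in v]

def dot(a, b):
    # Multiply matching parts and sum.
    return sum(x * y for x, y in zip(a, b))

def angle_deg(a, b):
    # acos of the dot of the normalized vectors, converted to degrees.
    return math.acos(dot(normalize(a), normalize(b))) * (180 / math.pi)

print(angle_deg([1, 0, 0], [0, 1, 0]))  # → 90.0
```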
Next cross products..
December 16, 2012
I really like Disney’s approach to rigging – with their latest paper, instead of building modular units such as ‘arm’ or ‘hand’, they’ve devised a method to make the code itself humanly readable and easy to understand.
Variables & Operators
I’ve been thinking about this myself and essentially it breaks down into two things: 1) you need to be able to assign variables and 2) pass operators.
If we take a simple class based function:
create.box(length=1.0, height=1.0, width=1.0)
We have some base fields – the class, its method, and some arguments passed as assigned types. So what does a field need if it’s to be humanly readable?
- Firstly it would need a value for the fields name.
- It would need to know if it can pass multiple values.
- It would need to know if it passes values or fields.
- It would need some sort of rule on its syntax and definition.
So for the above method we’d have something like this:
CLS create DEF box ARG 3 FLOAT length 1.0 FLOAT height 1.0 FLOAT width 1.0
So pretty straightforward – each of these (CLS, DEF etc.) is a field type, with its own rules that govern what the next input should be. An example with a list field could be as follows:
LIST none 2 FLOAT none 1.0 VECTOR none [1,0,1]
So this is a list field type, which has no assignment (that’s why it’s none), and has 2 as the next input because it allows for multiple inputs. The reason the next input is FLOAT is because, crucially, fields need to be able to pass other fields as input. So the code would compile to something like this:
#(1.0, [1,0,1])
The ‘#()’, is a syntax definition for the field, along with the ‘,’ – values/fields passed to a field could and probably should have a syntax definition too.
With something like this we could code a framework pretty easily, and because these are just one-liners they are pretty easily modded. Here’s an example of, say, an ik chain:
CLS ik_chain create METHOD none chain ARGS none 3 VECTOR none [0,0,0] VECTOR none [10, 0, 10] VECTOR none [0,0,20]
This would evaluate to something like this:
ik_chain=create.chain([0,0,0], [10,0,10], [0,0,20])
What I don’t like about this is that assignment uses none when it’s not really needed. Crucially, I think the string you pass should only contain what you need. E.g.
CLS ik_chain METHOD chain ARGS 3 VECTOR [0,0,0] VECTOR [10,0,10] VECTOR [0,0,20]
And even the field types themselves could go possibly?
ik_chain create chain 3 [0,0,0] [10,0,10] [0,0,20]
But now we have a problem – by stripping field types etc. we start to lean towards a structured string. The 3, for example, can’t be placed just anywhere – in fact, what does it mean without some association?
So defining fields allows for customization, but we have to be careful to only give enough information for the data to be compiled. My hiccup is that anything can be assigned e.g.
So do we type ‘FLOAT var 1.0′ for it to be assigned and ‘FLOAT none 1.0′ for it not to be? It seems the none is just extraneous info we don’t really need. Could the value that gets passed to the rule allow for assignment? E.g.
FLOAT ‘var=1.0′ – we’d split this string on the ‘=’ and then pass the second half to the rule.
CLS ‘ik_chain=create’ METHOD ‘chain’ ARGS 3 [0,0,0] [10,0,10] [0,0,20]
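The fuller triple format (TYPE assignment value, with none as a placeholder) is regular enough to compile with a few lines. This is a rough sketch of the idea only – `compile_fields` and the token layout choices are my own invention, not from the paper or the post:

```python
def compile_fields(s):
    tokens = s.split()
    # Tokens come in (TYPE, assignment, value) triples; 'none' means
    # the field carries no assignment.
    triples = [tokens[i:i + 3] for i in range(0, len(tokens), 3)]
    info = {}
    args = []
    for ftype, assign, value in triples:
        if ftype == 'CLS':
            info['assign'], info['cls'] = assign, value
        elif ftype == 'METHOD':
            info['method'] = value
        elif ftype == 'ARGS':
            info['argc'] = int(value)
        else:
            args.append(value)  # argument fields (VECTOR, FLOAT, ...)
    assert len(args) == info['argc'], 'wrong number of arguments'
    return '{}={}.{}({})'.format(
        info['assign'], info['cls'], info['method'], ', '.join(args))

line = ('CLS ik_chain create METHOD none chain ARGS none 3 '
        'VECTOR none [0,0,0] VECTOR none [10,0,10] VECTOR none [0,0,20]')
print(compile_fields(line))
# → ik_chain=create.chain([0,0,0], [10,0,10], [0,0,20])
```

Which reproduces the evaluated form from the ik chain example above.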
December 8, 2012
So as I previously mentioned, multi-directional constraints can be achieved by invalidating the transforms. What I didn’t mention is that it’s pretty damn hard mixing a group constraint mechanic with a non-group mechanic.
I’m resolving myself to have either one or the other, i.e. a single object that can switch between multiple parents, or a group of objects that can share/switch to a single parent. With the group method I need to tweak some things and wrap the switching into a simple function. The structure basically looks/acts like this:
We have a switch attribute that stores its targets and the index of the target to switch to. The last target is deemed the world or invalidation target (unless I expose it for change). The reason for this is that if we have two objects A and B, with both in each other’s targets, we need to invalidate one when a switch would create a cycle – and we need to do that invalidation first. E.g.
- A and B each have the other in their targets; both are set to world.
- B switches to A; A is still set to world – the link is fine.
- A switches to B – this causes a circular dependency.
So to fix this we walk the controllers and determine which will be invalidated – in this case B, which would close the loop, is invalidated automatically back to the world, allowing A to happily connect to it.
Why is this nice? Well, the core doesn’t change at all. All we need to build is a check for invalidation based on the index, then invalidate and switch in the correct order.
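The ordering rule can be sketched in a few lines of Python. This is a toy model of the idea only – the real system constrains transforms, and every name here (`switch`, `WORLD`, the dict of parents) is illustrative:

```python
WORLD = 'world'

def switch(parents, controller, new_parent):
    # parents maps each controller to its current parent target.
    if parents.get(new_parent) == controller:
        # The new parent currently targets us: invalidate it to the
        # world target first, so the switch can't close a loop.
        parents[new_parent] = WORLD
    parents[controller] = new_parent
    return parents

state = {'A': WORLD, 'B': WORLD}
switch(state, 'B', 'A')   # B -> A; A still on world, the link is fine
switch(state, 'A', 'B')   # would be circular, so B is invalidated first
print(state)              # → {'A': 'B', 'B': 'world'}
```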
Essentially a switch-box for switches.
September 8, 2012
I’ll be on sabbatical for the next 6 weeks – it’s going to be great, as it’ll be the first time I’ve truly taken a long break from work in the past 11 years! I’ll have time to rest and reset.
I should have some time now to post more – I’ve been deep in the computational maths world for the past 5 weeks or so, turning my brain into a bit of a math soup, which will definitely spill out here, bar any NDA-specific stuff.
June 1, 2012
For the life of me I can’t figure this out – if I have a null and a cube at the origin, with the cube rotated and the null placed at [0,20,0], setting the null’s transform by multiplying it by that of the cube’s should put it into the coordinate space of the cube (essentially orbiting it about the cube).
This doesn’t appear to be the case though – it appears to be doing the transformation in place, i.e. doing the rotation with the first 3 rows and then adding the position part (4th row).
from pyfbsdk import *
box = FBFindModelByName('Cube')
null = FBFindModelByName('Null')
k = FBMatrix()
j = FBMatrix()
box.GetMatrix(k)   # cube's global transform
null.GetMatrix(j)  # null's global transform
m = FBMatrix(j * k)  # this doesn't give the expected result
null.SetMatrix(m)
Just found out about FBMatrixMult – it seems to do the job.