
Posts tagged ‘Matrix’

Maths 101: 3×3 Determinants

July 7, 2013

Charles

Back from a quick break and back to matrix math. As I showed with a 2×2 matrix, we get its determinant by multiplying the first element of the first row by the last element of the last row, then subtracting from this the first element of the last row multiplied by the last element of the first row.

This, if you will, is our ‘atomic’ function that we’ll always use to find the determinant of a square matrix. Crucially, when we deal with matrices larger than 2×2 we recursively break the matrix down into smaller matrices until we can apply our atomic function to each piece. Finally we use a similar sign pattern to the one we used when finding the cross product: successive +, -, +, etc. Let’s get started. Given a 3×3 matrix:

1 5 3
2 4 7
4 6 2

First, we use the first row to break the matrix into 3 successive 2×2 matrices. Let’s create the 2×2 matrix for the first element by striking out the row and column it sits in:

4 7
6 2

Its determinant is -34 = (4*2) – (6*7). The second matrix, for the second element of the first row, strikes out row 1 and column 2:

2 7
4 2

Its determinant is -24 = (2*2) – (4*7). And finally the third matrix, for the last element of the first row, strikes out row 1 and column 3:

2 4
4 6

Its determinant is -4 = (2*6) – (4*4). Back to our initial row 1 5 3: we multiply each element by the determinant we found for it above, giving us:

(1 * -34) = -34
(5 * -24) = -120
(3 * -4) = -12

Finally, using our sign rule (+ – + – …) we add/subtract the parts: -34 – (-120) + (-12), giving us 74, the determinant of the 3×3 matrix.
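To make the recursion concrete, here’s a minimal Python sketch of this cofactor expansion along the first row (the function name det and the list-of-rows matrix layout are just my own illustrative choices):

def det(m):
    # Determinant of a square matrix (list of row lists), expanding along the first row.
    if len(m) == 2:
        # Our 'atomic' 2x2 case: ad - cb
        return m[0][0] * m[1][1] - m[1][0] * m[0][1]
    total = 0
    for col, element in enumerate(m[0]):
        # Build the minor: drop the first row and the current column
        minor = [[value for c, value in enumerate(row) if c != col] for row in m[1:]]
        # Alternate the sign: +, -, +, ...
        sign = 1 if col % 2 == 0 else -1
        total += sign * element * det(minor)
    return total

print(det([[1, 5, 3], [2, 4, 7], [4, 6, 2]]))   # 74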

Maths 101: Determinants

June 9, 2013

Charles

Ok so we’ve covered matrix transpose and multiplication; we’re now going to get into determinants. I’ll spread this over multiple posts as we’ll eventually be dealing with recursive functions. Determinants are a crucial glue in matrix math which allow you to find the inverse, which is akin to the reciprocal.

With a 2 x 2 matrix, the determinant is a single function – once we deal with 3 x 3 and larger matrices we essentially break them down recursively to 2 x 2 and apply the base function. For a 2 x 2 matrix:

A B
C D

All we need to do is multiply the first element of the first row by the last element of the last row (A*D), and the first element of the last row by the last element of the first row (C*B). Then take one away from the other, (AD) – (CB), to give us our determinant:

1 2
4 1

(1*1) = 1
(4*2) = 8
(1 - 8) = -7
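As a quick sketch, the 2 x 2 case is a one-liner in Python (det2x2 is just an illustrative name):

def det2x2(m):
    # [[A, B], [C, D]] -> (A*D) - (C*B)
    return m[0][0] * m[1][1] - m[1][0] * m[0][1]

print(det2x2([[1, 2], [4, 1]]))   # -7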

We’ll dig into 3 x 3 matrices next and then on to recursive methods…

Maths 101: Back to Basics – Cross Product

May 20, 2013

Charles

So the cross product, along with the dot product, is the bread and butter of vector math – but I’d never really known what’s happening internally. Essentially the cross product of two vectors (two directions) creates a new vector that’s orthogonal to them. This means it’s a vector that’s at 90 degrees to both of the other two.

So if we have two vectors [1,0,0] and [0,1,0], the cross product will be [0,0,1] – likewise if we swap these vectors the cross product will be [0, 0, -1]. Even though we’re doing multiplication internally, we’re also doing subtraction, and vector order matters here. We can use Sarrus’ rule to get the cross product, which is like finding a determinant – which I’ll discuss in matrix math:

If we have two vectors, [1, 0, 0] and [0, 1, 0], we can put them into a 3 by 3 matrix with an imaginary row at the top:

I J K
1 0 0
0 1 0

We’ll get a determinant for each part of the imaginary row. Starting with I, we disregard the row and column it crosses and keep the rest. So for I it becomes:

0 0
1 0

Next we multiply the first value of the first row by the last value of the last row – in this case 0 and 0 – and subtract from it the first value of the last row multiplied by the last value of the first row:

I = 1 * ((0 * 0) – (1 * 0)) = 0

This is the first part of a new vector. Let’s see how this looks for the whole thing: cross([1, 0, 0], [0, 1, 0]) =

I = 1 * ((0 * 0) – (1 * 0)) = 0
J = 1 * ((1 * 0) – (0 * 0)) = 0
K = 1 * ((1 * 1) – (0 * 0)) = 1

We can see that the last part makes the difference: we’re doing (1*1) – (0*0), so 1 – 0. If we’d swapped the initial vectors around we’d have (0*0) – (1*1) = -1. Next up we’ll break into matrices…
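As a sanity check, here’s a small Python sketch of that expansion (the function name is mine, and the minus sign the + – + rule puts on the J term is folded into its formula):

def cross(a, b):
    # Cross product of two 3d vectors via the I, J, K parts above
    return [
        a[1] * b[2] - a[2] * b[1],   # I
        a[2] * b[0] - a[0] * b[2],   # J (sign flip from the + - + rule folded in)
        a[0] * b[1] - a[1] * b[0],   # K
    ]

print(cross([1, 0, 0], [0, 1, 0]))   # [0, 0, 1]
print(cross([0, 1, 0], [1, 0, 0]))   # [0, 0, -1]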

Maths 101: Back to basics

May 20, 2013

Charles

A lot of the maths I use tends to be abstracted away, either in libraries I use or inside the application. I’m going to go back to basics, starting with vector maths and moving on to matrices – these, in my opinion, are the backbone of doing what we do. I’ll cover it from the ground up and then go into some more complex areas: determinants, inverse multiplication, decomposition etc. I’ll be learning a bunch of this stuff along the way. Let’s get started:

Vectors

So a vector is basically a direction from the origin. [1, 2, 3] means we have a point that’s moved 1 in the X direction, 2 in the Y and 3 in the Z direction.

Vectors can be added together simply by adding the parts of each together. [1, 2, 3] + [4, 5, 6] = [(1+4), (2+5), (3+6)]. Subtraction follows a similar process.

Vectors can be multiplied against a scalar (float) value by multiplying each part by it: [1, 2, 3] * 5 = [(1*5), (2*5), (3*5)].
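As a tiny Python sketch of those two operations (the function names are just illustrative):

def vec_add(a, b):
    # Add the matching parts together
    return [x + y for x, y in zip(a, b)]

def vec_scale(v, s):
    # Multiply each part by the scalar
    return [x * s for x in v]

print(vec_add([1, 2, 3], [4, 5, 6]))   # [5, 7, 9]
print(vec_scale([1, 2, 3], 5))         # [5, 10, 15]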

We can get the length of a vector by first squaring each part, then summing (adding up) these squares, and finally taking the square root of the total. It looks like this: len([1, 2, 3]) = sqrt((1^2) + (2^2) + (3^2)).

Using this length we can get the normal of the vector. Normalizing a vector keeps its direction, but its length becomes 1.0. This is important for finding angles, unit vectors and matrix scale. To do this we first get the vector’s length, and then divide each part of the vector by it:

normal([1, 2, 3]) =

length = sqrt((1^2) + (2^2) + (3^2))

normal = [1/length, 2/length, 3/length]
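Here’s a short Python sketch of the length and normalize steps (names are illustrative):

import math

def vec_length(v):
    # Square each part, sum them, then take the square root
    return math.sqrt(sum(x * x for x in v))

def vec_normalize(v):
    # Divide each part by the length so the result has a length of 1.0
    length = vec_length(v)
    return [x / length for x in v]

print(vec_length([1, 2, 3]))      # ~3.742
print(vec_normalize([1, 2, 3]))   # [~0.267, ~0.535, ~0.802]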

The dot product of two 3d vectors (x, y, z) basically returns the magnitude of one vector projected onto another. If we have two vectors [1, 0, 0] and [1, 1, 0], when we project the latter onto the former, the value along the former’s length is the dot product. To get the dot product of two vectors we multiply the corresponding parts together and sum them:

[1, 2, 3] . [4, 5, 6] = (1*4) + (2 * 5) + (3 * 6)

We can use the dot product to get the angle between two vectors too. If we first normalize each vector, we can get the angle by taking the inverse cos (or acos) of their dot product. This returns the angle in radians, so we can convert it into degrees by multiplying it by (180 / pi):

acos( norm([1, 2, 3]) . norm([4, 5, 6]) ) * (180 / pi)
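A small Python sketch of the dot product and the angle between two vectors (the names are illustrative, and normalization is inlined via the dot of a vector with itself):

import math

def vec_dot(a, b):
    # Multiply the matching parts and sum them
    return sum(x * y for x, y in zip(a, b))

def vec_angle_degrees(a, b):
    # Normalize both vectors (divide by their lengths) so the dot gives the cosine of the angle
    na = [x / math.sqrt(vec_dot(a, a)) for x in a]
    nb = [x / math.sqrt(vec_dot(b, b)) for x in b]
    return math.acos(vec_dot(na, nb)) * (180.0 / math.pi)

print(vec_dot([1, 2, 3], [4, 5, 6]))            # 32
print(vec_angle_degrees([1, 0, 0], [1, 1, 0]))  # ~45.0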

Next cross products..

Multiplying matrices in MotionBuilder?

June 1, 2012

Charles

For the life of me I can’t figure this out – if I have a null and a cube at the origin, with the cube rotated and the null placed at [0,20,0], setting the null’s transform by multiplying it by that of the cube should put it into the coordinate space of the cube (essentially orbiting it about the cube).

This doesn’t appear to be the case though, as it appears to be doing the transformation in place, i.e. doing the transform with the first 3 rows and then adding the position part (the 4th row).

from pyfbsdk import *

# Grab the two models from the scene
box = FBFindModelByName('Cube')
null = FBFindModelByName('Null')

# Cube's global transform
k = FBMatrix()
box.GetMatrix(k)

# Null's global transform
j = FBMatrix()
null.GetMatrix(j)

# Multiply the null's matrix by the cube's and set the result back
m = FBMatrix(j * k)
null.SetMatrix(m)

EDIT:

Just found out about FBMatrixMult – it seems to do the job.
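For completeness, a hedged sketch of how that edit might look. I’m assuming FBMatrixMult takes the destination matrix first followed by the two operands, as in the C++ SDK; the operand order (cube first or null first) is worth checking against the pyfbsdk docs rather than taking from here.

from pyfbsdk import *

box = FBFindModelByName('Cube')
null = FBFindModelByName('Null')

k = FBMatrix()
box.GetMatrix(k)

j = FBMatrix()
null.GetMatrix(j)

# Assumption: FBMatrixMult(dst, a, b) writes a * b into dst.
# Swap j and k if the null ends up on the wrong side of the cube.
m = FBMatrix()
FBMatrixMult(m, k, j)
null.SetMatrix(m)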

Is orthogonality found in nature?

October 12, 2009

Charles

I wonder whether orthogonality is found in nature? We can dictate a direction with one axis and its spin about that axis with a second axis – the third axis is really a product for keeping the second orthogonal to the first. I wonder whether this is needed in nature, or whether it copes quite well with skew? Skew itself seems quite common in nature, and the very fact that skin, muscles, etc. skew is important for flexibility, movement and kinesis.

I wonder if allowing a transform to skew, without affecting its scale, has benefits for rigging. Tensegrity and its biological form certainly allow for skew against multiple planes, and the fact that they work under tension allows them to always find a resolution.

A really, really indispensable PDF on BVH and motion capture formats.

September 16, 2009

Charles

I found this PDF on motion capture formats – funnily enough it’s called “Motion Capture File Formats Explained” and it is really essential if you’re trying to figure out why everything appears to work but doesn’t.

Motion Capture Formats Explained

Read on from page 16 if, like me, having built a correct matrix from the global data and offset you find that inaccuracies get passed down the chain because of discrepancies in this very global matrix (due to the non-commutative nature of matrices). I will add this to my research page, and possibly keep a copy on my server for backup.

Matrices: Rules I try to remember, but tend to forget!

June 2, 2009

Charles

1. In 3d the product of matrix multiplication is always in world space.
2. An object’s transform is a product of its target space multiplied by its reference space.

This second part is really vital to understand (I don’t think I’ve phrased it well enough here even), but basically it means that if you’re transforming an object by the world, but the object is parented, the difference between the object and its target space needs to be multiplied by the object’s local space.

I’ll try to describe this more with pics.

Matrices: Reference Coordinate System

May 30, 2009

Charles

All object transforms have a reference coordinate system, i.e. the space they exist in, whether they’re parented to an object or not. For example if I’m driving a vehicle, I’m relative to the vehicle, which in turn is relative to the earth, and in turn the sun.

When we want to find the difference or offset of an object’s transform relative to its space – be it its parent, the world or another object – we use what’s called the inverse. Now this is where it gets a little tricky so I’ll go slowly. To transform an object by another object, or get its transform relative to that object, we use matrix multiplication. Matrix multiplication in layman’s terms is a bit like simple arithmetic, but most importantly it is non-commutative.

This simply means that order matters: if we treat one matrix as 20 and another as 30, combining them one way round doesn’t give the same result as the other way round. Or simply put, it’s like subtraction – 20 – 30 isn’t the same as 30 – 20. This is due to how matrices are multiplied together.

If we go back to our starting matrix – [1,0,0] [0,1,0] [0,0,1] [0,0,0] – we can classify each vector as a ‘row’, i.e.

[a,b,c] – row 1

[d,e,f] – row 2

[h,i,j] – row 3

[k,l,m] – row 4

With the values running down the matrix, such as a, d, h, k, being ‘columns’. In our example above we have 4 rows by 3 columns, or a 4×3 matrix. The crucial rule you have to keep in mind when multiplying matrices is that the initial matrix must have the same number of columns as the matrix you’re multiplying it against has rows. For example if our initial matrix looks like this:

[1,2,3]

[4,5,6]

Our multiplying matrix must have the same number of rows (three), like so:

[a,d] 

[b,e] 

[c,f]

We multiply matrices by combining rows with columns: each element of the result is a row of the first matrix multiplied into a column of the second and summed, e.g. the first element here is (1 × a) + (2 × b) + (3 × c), and so on and so forth…
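To make that concrete, here’s a minimal Python sketch of the row-times-column rule (the function name is mine):

def mat_mult(a, b):
    # Each result element is a row of `a` multiplied into a column of `b` and summed
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

# A 2x3 matrix times a 3x2 matrix gives a 2x2 result
print(mat_mult([[1, 2, 3], [4, 5, 6]],
               [[7, 8], [9, 10], [11, 12]]))
# [[58, 64], [139, 154]]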

When we get the relative transform of one object to another, we multiply its transform by the inverse of our target object, parent or space. Now this is quite a bit more complex, so I’ll discuss it very simply.

If we treat two matrices as single values, for example 10 and 20, when we get the relative space of 10 to 20 what we do is 10 + (-20), which gives us -10. In other words we’re finding the difference we need to go from our target object, parent or space to our base object’s transform. We’re getting the transform ‘offset’ we need to apply to our target object to get our base object’s transform. This offset is always in world space – because it’s the difference that’s needed.
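Here’s a minimal numpy sketch of the same idea, assuming the row-vector convention used elsewhere in these posts (the position lives in the fourth row, and a child’s world transform is its local transform multiplied by its parent’s world transform); the example matrices are made up:

import numpy as np

# Hypothetical world-space transforms
parent_world = np.array([[1, 0, 0, 0],
                         [0, 1, 0, 0],
                         [0, 0, 1, 0],
                         [10, 0, 0, 1]], dtype=float)   # parent sits at x = 10

child_world = np.array([[1, 0, 0, 0],
                        [0, 1, 0, 0],
                        [0, 0, 1, 0],
                        [10, 20, 0, 1]], dtype=float)   # child sits at x = 10, y = 20

# The 'offset' of the child relative to the parent: multiply by the inverse
offset = child_world @ np.linalg.inv(parent_world)
print(offset[3, :3])   # [ 0. 20.  0.] -- the child sits 20 units above its parent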

More on matrices: basics

May 30, 2009

Charles

Most object transforms in 3d software are matrices; here’s a rough breakdown of what they are.

A matrix in 3d is an axis system defined by three vectors, X, Y and Z, with a fourth being its positional offset from the origin. The length of each axis vector defines the scale of that axis, 1.0 being 100%. The ‘identity’ matrix is an object’s base transform, e.g.

matrix3 [1,0,0] [0,1,0] [0,0,1] [0,0,0] – for the X, Y, Z axes and the positional offset from the origin.

So for instance if we wanted to scale an object by 200% along its X axis our matrix transform would look like this – matrix3 [2,0,0] [0,1,0] [0,0,1] [0,0,0]
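As a tiny Python sketch of that layout (representing the matrix as a list of four row vectors is just illustrative):

import math

# X, Y, Z axis rows plus the position row
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]]

# Scale 200% along X: the X axis row is twice as long
scaled_x = [[2, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]]

# The length of an axis row is the scale of that axis
print(math.sqrt(sum(v * v for v in scaled_x[0])))   # 2.0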

Notice also that each axis is perpendicular to each other axis (90 degrees) – this is important as, if it wasn’t, we would get skewing. Each axis can point in any direction as long as the other two are perpendicular to it.