Many of the underlying mathematical operations used in the 3D rendering process extend effortlessly to four dimensions. The rotation and cross-product operations, however, do not extend easily or intuitively; these are presented here before continuing with the rest of this paper.
For the most part, vector operations in four space are simple extensions of their three-space counterparts. For example, the sum of two four-vectors is formed by adding the corresponding coordinates of the two operand vectors. In the same fashion, subtraction, scaling, and dot products are all simple extensions of their more common three-vector counterparts.

Likewise, operations between four-space points and vectors are simple extensions of those between three-space points and vectors. For example, the four-vector difference of two four-space points is found by subtracting the corresponding coordinates of the two points to yield the four coordinates of the resulting four-vector.
For completeness, the equations of the more common four-space vector operations follow. In these equations, U = <U0, U1, U2, U3> and V = <V0, V1, V2, V3> are two source four-vectors and k is a scalar value:
    U + V = <U0 + V0, U1 + V1, U2 + V2, U3 + V3>
    U - V = <U0 - V0, U1 - V1, U2 - V2, U3 - V3>
       kV = <kV0, kV1, kV2, kV3>
      U·V = U0V0 + U1V1 + U2V2 + U3V3
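For illustration, these operations might be coded as follows. This is only a minimal sketch: the Vector4 representation (an array of four doubles) and the routine names Add4, Sub4, Scale4 and Dot4 are assumptions made for this example, and each routine writes its result through a caller-supplied vector in the style of the Cross4 routine given later.

    typedef double Vector4[4];      // Assumed representation of a four-vector.

    // Add4 computes the pairwise sum of two four-vectors: result = U + V.
    void Add4 (Vector4 result, Vector4 U, Vector4 V)
    {
        for (int i = 0;  i < 4;  ++i)
            result[i] = U[i] + V[i];
    }

    // Sub4 computes the pairwise difference of two four-vectors: result = U - V.
    void Sub4 (Vector4 result, Vector4 U, Vector4 V)
    {
        for (int i = 0;  i < 4;  ++i)
            result[i] = U[i] - V[i];
    }

    // Scale4 scales a four-vector by the scalar k: result = kV.
    void Scale4 (Vector4 result, double k, Vector4 V)
    {
        for (int i = 0;  i < 4;  ++i)
            result[i] = k * V[i];
    }

    // Dot4 returns the four-dimensional dot product of U and V.
    double Dot4 (Vector4 U, Vector4 V)
    {
        return (U[0]*V[0]) + (U[1]*V[1]) + (U[2]*V[2]) + (U[3]*V[3]);
    }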
The main vector operation that does not extend trivially to four-space is the cross product. A three-dimensional space is spanned by three basis vectors, so the cross-product in three-space computes an orthogonal three-vector from two linearly independent three-vectors. Hence, the three-space cross product is a binary operation.
In N-space, the cross product takes N-1 linearly independent vectors and yields a vector orthogonal to each of them. Since a four-dimensional space has four basis vectors, the four-space cross product requires three linearly independent four-vectors to determine the remaining orthogonal direction. Hence, the four-space cross product is a ternary operation: it takes three operand vectors and yields a single resultant vector. In the remainder of this paper, the four-dimensional cross product will be written in the form X4(U,V,W).
To find the equation of the four-dimensional cross product, we must first establish the criteria that it must meet. These are as follows:

    1.  If the three operand vectors are linearly dependent, the resultant vector must be the zero vector.
    2.  The resultant vector must be orthogonal to each of the three operand vectors.
    3.  Scaling one of the operand vectors by a scalar must scale the resultant vector by the same factor.
    4.  Exchanging two of the operand vectors must change only the sign of the resultant vector.
It turns out that a somewhat simple-minded approach to computing the four-dimensional cross product is the correct one. To motivate this idea, we first consider the three-dimensional cross product. The 3D cross product can be thought of as the determinant of a 3x3 matrix whose entries are as follows:
              +-          -+
              | i   j   k  |
    X3(U,V) = | U0  U1  U2 |
              | V0  V1  V2 |
              +-          -+
where U and V are the operand vectors, and i, j & k represent the unit components of the resultant vector. The determinant of this matrix is
i (U1V2 - U2V1) - j (U0V2 - U2V0) + k (U0V1 - U1V0)
which is the three-dimensional cross product. Using this idea, we'll form the analogous 4x4 matrix, and see if it meets the four cross product properties listed above:
                              +-              -+
                              | i   j   k   l  |
    [2.1a]        X4(U,V,W) = | U0  U1  U2  U3 |
                              | V0  V1  V2  V3 |
                              | W0  W1  W2  W3 |
                              +-              -+
The determinant of this matrix is
    [2.1b]
              |U1 U2 U3|     |U0 U2 U3|     |U0 U1 U3|     |U0 U1 U2|
            i |V1 V2 V3| - j |V0 V2 V3| + k |V0 V1 V3| - l |V0 V1 V2|
              |W1 W2 W3|     |W0 W2 W3|     |W0 W1 W3|     |W0 W1 W2|
If the operand vectors are linearly dependent, then the vector rows of the 4x4 matrix are linearly dependent and the determinant of the matrix is zero, so the first condition is satisfied. The third condition is also met: multiplying one of the operand vectors by a scalar multiplies one row of the matrix by that scalar, which scales the determinant, and hence the resultant vector, by the same factor.

The fourth condition follows from a basic property of determinants: interchanging two rows of a matrix changes only the sign of its determinant. Hence, exchanging two operand vectors changes only the sign of the resultant vector, and the fourth condition is met.
The second condition is proven by calculating the dot product of the resultant vector with each of the operand vectors. These dot products will be zero if and only if the resultant vector is orthogonal to each of the operand vectors.
The dot product of the resultant vector X4(U,V,W) with the operand vector U is the following (refer to equation [2.1b]):
                         |U1 U2 U3|      |U0 U2 U3|      |U0 U1 U3|      |U0 U1 U2|
    U·X4(U,V,W)  =  U0 |V1 V2 V3| - U1 |V0 V2 V3| + U2 |V0 V1 V3| - U3 |V0 V1 V2|
                         |W1 W2 W3|      |W0 W2 W3|      |W0 W1 W3|      |W0 W1 W2|
This dot product can be rewritten as the determinant
    | U0  U1  U2  U3 |
    | U0  U1  U2  U3 |
    | V0  V1  V2  V3 |
    | W0  W1  W2  W3 |,
which is zero, since the first two rows are identical. Hence, the resultant vector X4(U,V,W) is orthogonal to the operand vector U. In the same way, the dot products of V·X4(U,V,W) and W·X4(U,V,W) are given by the determinants
    | V0  V1  V2  V3 |         | W0  W1  W2  W3 |
    | U0  U1  U2  U3 |         | U0  U1  U2  U3 |
    | V0  V1  V2  V3 |   and   | V0  V1  V2  V3 |
    | W0  W1  W2  W3 |         | W0  W1  W2  W3 |,
which are each zero, since each contains a repeated row.
Therefore, the second condition is also met, and equation [2.1a] meets all four of the criteria for the four-dimensional cross product.
Since the calculation of the four-dimensional cross product involves 2x2 determinants that are used more than once, it is best to store these values rather than re-calculate them. The following algorithm uses this idea.
    // Cross4 computes the four-dimensional cross product of the three vectors
    // U, V and W, in that order.  It returns the resulting four-vector.

    Vector4 *Cross4 (Vector4 *result, Vector4 U, Vector4 V, Vector4 W)
    {
        double A, B, C, D, E, F;    // Intermediate values

        // Calculate intermediate values.

        A = (V[0] * W[1]) - (V[1] * W[0]);
        B = (V[0] * W[2]) - (V[2] * W[0]);
        C = (V[0] * W[3]) - (V[3] * W[0]);
        D = (V[1] * W[2]) - (V[2] * W[1]);
        E = (V[1] * W[3]) - (V[3] * W[1]);
        F = (V[2] * W[3]) - (V[3] * W[2]);

        // Calculate the result-vector components.

        (*result)[0] =   (U[1] * F) - (U[2] * E) + (U[3] * D);
        (*result)[1] = - (U[0] * F) + (U[2] * C) - (U[3] * B);
        (*result)[2] =   (U[0] * E) - (U[1] * C) + (U[3] * A);
        (*result)[3] = - (U[0] * D) + (U[1] * B) - (U[2] * A);

        return result;
    }
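As a quick check (not part of the original algorithm), the routine can be exercised on the first three coordinate basis vectors. With the Dot4 routine from the earlier sketch, each dot product with an operand comes out to zero, and the resultant vector is <0, 0, 0, -1>. The driver below is hypothetical and assumes the Vector4 typedef and Dot4 routine sketched above.

    #include <stdio.h>

    // Hypothetical test driver for Cross4; assumes the Vector4 typedef and
    // the Dot4 routine from the earlier sketch.
    int main (void)
    {
        Vector4 e1 = { 1, 0, 0, 0 };
        Vector4 e2 = { 0, 1, 0, 0 };
        Vector4 e3 = { 0, 0, 1, 0 };
        Vector4 r;

        Cross4 (&r, e1, e2, e3);    // Yields <0, 0, 0, -1> for these operands.

        printf ("X4(e1,e2,e3) = <%g, %g, %g, %g>\n", r[0], r[1], r[2], r[3]);

        // Each of these dot products prints 0, confirming orthogonality.
        printf ("dot products with U, V, W: %g %g %g\n",
                Dot4 (r, e1), Dot4 (r, e2), Dot4 (r, e3));

        return 0;
    }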
Rotation in four space is initially difficult to conceive because the first impulse is to try to rotate about an axis in four space. Rotation about an axis is an idea fostered by our experience in three space, but it is only coincidence that any rotation in three-space can be determined by an axis in three-space.
For example, consider the idea of rotation in two space. The axis that we rotate ``about'' is perpendicular to this space; it isn't even contained in the two space. In addition, given a center of rotation and a point to be rotated in three space, the set of all rotated points for a given rotation matrix lies in a single plane, just as it does in two space.
Rotations in three-space are more properly thought of not as rotations about an axis, but as rotations parallel to a 2D plane. This way of thinking about rotations is consistent with both two space (where there is only one such plane) and three space (where each rotation ``axis'' defines the rotation plane by coinciding with the normal vector to that plane).
Once this idea is established, it is easy to construct the basis 4D rotation matrices, since only two coordinates will change for a given rotation. There are six 4D basis rotation matrices, corresponding to the XY, YZ, ZX, XW, YW and ZW planes. These are given by (using angle T):
  +-                       -+   +-                       -+   +-                       -+
  |  cosT  sinT     0     0 |   |     1     0     0     0 |   |  cosT     0 -sinT     0 |
  | -sinT  cosT     0     0 |   |     0  cosT  sinT     0 |   |     0     1     0     0 |
  |     0     0     1     0 |   |     0 -sinT  cosT     0 |   |  sinT     0  cosT     0 |
  |     0     0     0     1 |   |     0     0     0     1 |   |     0     0     0     1 |
  +-                       -+   +-                       -+   +-                       -+
            XY Plane                      YZ Plane                      ZX Plane

  +-                       -+   +-                       -+   +-                       -+
  |  cosT     0     0  sinT |   |     1     0     0     0 |   |     1     0     0     0 |
  |     0     1     0     0 |   |     0  cosT     0 -sinT |   |     0     1     0     0 |
  |     0     0     1     0 |   |     0     0     1     0 |   |     0     0  cosT -sinT |
  | -sinT     0     0  cosT |   |     0  sinT     0  cosT |   |     0     0  sinT  cosT |
  +-                       -+   +-                       -+   +-                       -+
            XW Plane                      YW Plane                      ZW Plane
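To make the use of these matrices concrete, the following sketch builds the XW-plane basis rotation shown above and shows how it would be applied to a four-vector. The Matrix4 type, the routine names RotXW4 and VecMat4, and the row-vector convention (result = V * M) are assumptions introduced for this example only; the opposite convention simply uses the transpose of the matrix, and the other five basis rotations are built the same way by filling in the corresponding rows and columns.

    #include <math.h>

    typedef double Matrix4[4][4];   // Assumed 4x4 matrix type; uses the Vector4
                                    // typedef from the earlier sketch.

    // RotXW4 builds the XW-plane basis rotation matrix for angle T (radians),
    // exactly as printed above; only the X and W coordinates of a rotated
    // point are changed.
    void RotXW4 (Matrix4 M, double T)
    {
        double c = cos(T);
        double s = sin(T);

        // Start from the identity matrix.
        for (int i = 0;  i < 4;  ++i)
            for (int j = 0;  j < 4;  ++j)
                M[i][j] = (i == j) ? 1.0 : 0.0;

        M[0][0] =  c;   M[0][3] = s;    // X row
        M[3][0] = -s;   M[3][3] = c;    // W row
    }

    // VecMat4 applies a rotation matrix to a four-vector, treating V as a
    // row vector (result = V * M).
    void VecMat4 (Vector4 result, Vector4 V, Matrix4 M)
    {
        for (int j = 0;  j < 4;  ++j)
            result[j] = (V[0] * M[0][j]) + (V[1] * M[1][j])
                      + (V[2] * M[2][j]) + (V[3] * M[3][j]);
    }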