Reputation: 58053
In the SCNMatrix4 struct, the last element, m44, is used for homogeneous coordinates.
I'd like to know what the m14, m24 and m34 elements in SCNMatrix4 are. And what exactly can I use these three elements for?
init(m11: Float, m12: Float, m13: Float, m14: Float,
m21: Float, m22: Float, m23: Float, m24: Float,
m31: Float, m32: Float, m33: Float, m34: Float,
m41: Float, m42: Float, m43: Float, m44: Float)
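For reference, printing SCNMatrix4Identity shows the m44 slot holding the homogeneous 1 while m14, m24 and m34 stay at 0:
import SceneKit

// The identity matrix keeps 1 in m44 (the homogeneous coordinate)
// and 0 in m14, m24 and m34.
let identity = SCNMatrix4Identity
print(identity.m14, identity.m24, identity.m34, identity.m44)  // 0.0 0.0 0.0 1.0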
Upvotes: 3
Views: 1017
Reputation: 111
The bottom row is used for translations, and also comes into play for projections, such as a perspective projection or a camera viewing frustum. Although not Apple code, the following source (licensed under the MIT license), taken from https://referencesource.microsoft.com/#System.Numerics/System/Numerics/Matrix4x4.cs,78a107de58a17946 (starting from line 864), shows "[how to create] a perspective projection matrix based on a field of view, aspect ratio, and near and far view plane distances."
/// <summary>
/// Creates a perspective projection matrix based on a field of view, aspect ratio, and near and far view plane distances.
/// </summary>
/// <param name="fieldOfView">Field of view in the y direction, in radians.</param>
/// <param name="aspectRatio">Aspect ratio, defined as view space width divided by height.</param>
/// <param name="nearPlaneDistance">Distance to the near view plane.</param>
/// <param name="farPlaneDistance">Distance to the far view plane.</param>
/// <returns>The perspective projection matrix.</returns>
public static Matrix4x4 CreatePerspectiveFieldOfView(float fieldOfView, float aspectRatio, float nearPlaneDistance, float farPlaneDistance)
{
    if (fieldOfView <= 0.0f || fieldOfView >= Math.PI)
        throw new ArgumentOutOfRangeException("fieldOfView");
    if (nearPlaneDistance <= 0.0f)
        throw new ArgumentOutOfRangeException("nearPlaneDistance");
    if (farPlaneDistance <= 0.0f)
        throw new ArgumentOutOfRangeException("farPlaneDistance");
    if (nearPlaneDistance >= farPlaneDistance)
        throw new ArgumentOutOfRangeException("nearPlaneDistance");

    float yScale = 1.0f / (float)Math.Tan(fieldOfView * 0.5f);
    float xScale = yScale / aspectRatio;

    Matrix4x4 result;
    result.M11 = xScale;
    result.M12 = result.M13 = result.M14 = 0.0f;
    result.M22 = yScale;
    result.M21 = result.M23 = result.M24 = 0.0f;
    result.M31 = result.M32 = 0.0f;
    result.M33 = farPlaneDistance / (nearPlaneDistance - farPlaneDistance);
    result.M34 = -1.0f;
    result.M41 = result.M42 = result.M44 = 0.0f;
    result.M43 = nearPlaneDistance * farPlaneDistance / (nearPlaneDistance - farPlaneDistance);

    return result;
}
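For comparison, here is a minimal Swift sketch of the same construction using SCNMatrix4's element-wise initializer, assuming the layout matches the C# code above (translation in the m41...m43 row, perspective term in m34). The function name and parameter names are illustrative, not part of SceneKit:
import SceneKit

// A sketch of the perspective matrix above, assuming SCNMatrix4 uses the same
// row layout (translation in m41...m43, perspective term in m34).
// makePerspective and its parameter names are illustrative only.
func makePerspective(fovyRadians: Float, aspect: Float, nearZ: Float, farZ: Float) -> SCNMatrix4 {
    let yScale = 1.0 / tanf(fovyRadians * 0.5)
    let xScale = yScale / aspect
    return SCNMatrix4(
        m11: xScale, m12: 0,      m13: 0,                             m14: 0,
        m21: 0,      m22: yScale, m23: 0,                             m24: 0,
        m31: 0,      m32: 0,      m33: farZ / (nearZ - farZ),         m34: -1,
        m41: 0,      m42: 0,      m43: nearZ * farZ / (nearZ - farZ), m44: 0)
}
In this layout the fourth column (m14, m24, m34, m44) is what produces the output w coordinate, so a standard perspective projection leaves m14 and m24 at 0 and uses m34 = -1 to map w to -z for the perspective divide.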
This ARKit snippet gets the position of the camera (taken from https://gist.github.com/fisherds/a2a75c914c293213f594daf5fd940d3d):
@IBAction func pressedAddEarth(_ sender: Any) {
    guard let pointOfView = sceneView.pointOfView else { return }
    let transform = pointOfView.transform

    // The third row (m31, m32, m33) is the camera's z-axis; negating it gives the forward direction.
    let orientation = SCNVector3(-transform.m31, -transform.m32, -transform.m33)

    // The fourth row (m41, m42, m43) is the translation, i.e. the camera's position.
    let location = SCNVector3(transform.m41, transform.m42, transform.m43)

    let currentPositionOfCamera = orientation + location
    // other stuff
}

func +(left: SCNVector3, right: SCNVector3) -> SCNVector3 {
    return SCNVector3Make(left.x + right.x, left.y + right.y, left.z + right.z)
}
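To see the "bottom row is translation" point on its own, SceneKit's own translation constructor puts the offsets straight into m41, m42 and m43 (a quick sanity check, nothing ARKit-specific):
import SceneKit

// SCNMatrix4MakeTranslation stores the offsets in the bottom row.
let translation = SCNMatrix4MakeTranslation(1, 2, 3)
print(translation.m41, translation.m42, translation.m43)  // 1.0 2.0 3.0
print(translation.m44)                                    // 1.0 (the homogeneous coordinate)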
Upvotes: 1
Reputation: 58053
Based on @BlackMirrorz's answer.
The obvious practical benefit of using the m41, m42 and m43 elements of SCNMatrix4 appears when employing SceneKit's camera projection.
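A short, self-contained sketch (plain SceneKit, not ARKit) illustrates both halves: the camera node's transform keeps its position in m41, m42, m43, while the camera's projectionTransform should carry the perspective term in m34, the slot the question asks about:
import SceneKit

// A small sketch: a camera node's transform stores its position in m41...m43,
// and (for a default perspective SCNCamera) the projection matrix is expected
// to have its perspective term in m34.
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3(1, 2, 3)

let t = cameraNode.transform
print(t.m41, t.m42, t.m43)  // 1.0 2.0 3.0 - the camera node's position

let p = cameraNode.camera!.projectionTransform
print(p.m14, p.m24, p.m34)  // expected: 0.0 0.0 -1.0 for a perspective projection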
Upvotes: 1
Reputation: 7385
I could be wrong, but I think m41, m42 and m43 can be used to get positional data, and are essentially the same as using result.worldTransform.columns.3 when performing a hitTest.
As such, when placing an SCNNode via an ARSCNHitTest you could use either:
let hitTestTransform = SCNMatrix4(result.worldTransform)
let positionFromMatrix4 = SCNVector3(hitTestTransform.m41, hitTestTransform.m42, hitTestTransform.m43)
let positionFromColumns = SCNVector3(result.worldTransform.columns.3.x, result.worldTransform.columns.3.y, result.worldTransform.columns.3.z)
The example below should help clarify things:
/// Places Our Model At The Position Of An Existing ARPlaneAnchor
///
/// - Parameter gesture: UITapGestureRecognizer
@IBAction func placeModel(_ gesture: UITapGestureRecognizer){

    //1. Get The Touch Location
    let touchLocation = gesture.location(in: self.augmentedRealityView)

    //2. Perform An ARSCNHitTest For Any Existing Planes
    guard let result = self.augmentedRealityView.hitTest(touchLocation, types: [.existingPlane, .existingPlaneUsingExtent]).first else { return }

    //3. Get The World Transform
    let hitTestTransform = SCNMatrix4(result.worldTransform)

    //4. Initialize Our Position Either From .m41, .m42, .m43 Or From Columns.3
    let positionFromMatrix4 = SCNVector3(hitTestTransform.m41, hitTestTransform.m42, hitTestTransform.m43)
    let positionFromColumns = SCNVector3(result.worldTransform.columns.3.x, result.worldTransform.columns.3.y, result.worldTransform.columns.3.z)

    //5. Log Them To Check I'm Not Being A Moron
    print(
        """
        Position From Matrix 4 == \(positionFromMatrix4)
        Position From Columns == \(positionFromColumns)
        """)

    /*
     Position From Matrix 4 == SCNVector3(x: -0.39050543, y: -0.004766479, z: 0.08107365)
     Position From Columns == SCNVector3(x: -0.39050543, y: -0.004766479, z: 0.08107365)
     */

    //6. Add A Node At The Position & Add It To The Hierarchy
    let boxNode = SCNNode(geometry: SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0))
    boxNode.geometry?.firstMaterial?.diffuse.contents = UIColor.cyan
    boxNode.position = positionFromMatrix4
    self.augmentedRealityView.scene.rootNode.addChildNode(boxNode)
}
Hope it helps...
Upvotes: 4