COOKIES

Reputation: 444

Clearing up some 3D OpenGL "Magic" functionality

I've been reading up on OpenGL ES 2.0, SpriteKit and GLKit, but I still have no clue how to solve the following problems.

Problems:

1) Create a cylindrical 360° space that holds an AVCaptureVideoPreviewLayer and rotates as the user turns their phone. Don't pay too much attention to the rotation part; it's the OpenGL part I'm concerned about. For example, how are the lines drawn? Once an image is captured, how is it placed where it was captured? Which components are being used here, SpriteKit or OpenGL?

[two screenshots of the desired cylindrical capture UI]

2) This basically relates to (1), but in more depth about how an image is placed. Say you have a quaternion: how would you place an image in that 3D space?

Questions/Concerns:

How would I even start? If anyone could give me a brief abstraction of what I should be looking to do, I'd greatly appreciate it. Even some code examples would be super helpful, as I understand things better when they're written in code.

The problem is that there's no real documentation covering any of this, so if you have any references (even books), I'd greatly appreciate it.

Thank you!

Upvotes: 0

Views: 114

Answers (1)

James Poag

Reputation: 2380

I'll try to answer from easiest to hardest and maybe point you in a direction.

  • glDrawElements has a mode parameter where you can specify GL_LINE_STRIP, GL_LINE_LOOP, GL_LINES, etc. When you draw in OpenGL you want to minimize the number of draw calls you perform, so once the draw state is set, submit as much geometry in a single call as you can. Basically, draw the entire cylinder's lines in one call. Fortunately, you only need to calculate them once, and you'll handle the rotation with matrices (there's a geometry sketch after this list).

  • Next, you'll need to learn matrices and shaders to draw your geometry. Read the NeHe tutorials; I think they've been updated for OpenGL ES 2. Basically, you have model, view, and projection matrices (MVP). The projection matrix projects the 3D geometry into 2D space (normalized to -1..1). The view matrix is where you will spin the geometry. My advice is to find an example on GitHub or in the Apple Developer docs and hack on it. Also, look for OpenGL-compatible matrix libraries (GLKit's GLKMatrix4 functions, for instance).

  • When you first start rotating, use drag/swipe events to test so you can figure out which axis to rotate about (the y-axis) and which way is left/right. Then hook it up to your motion events (accelerometer/gyroscope). The motion events feed your view rotation, and each frame the matrices change (and need to be remultiplied and re-uploaded to your shader); that's the second sketch below.
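Here's a rough sketch of the first two bullets, assuming GLKit for the matrix math and a shader program that's already compiled and bound. The uniform handle, field of view, and segment counts are placeholder choices of mine, not anything from your screenshots:

    import GLKit

    // Builds the wireframe of a cylinder once and draws all of its lines in a
    // single glDrawElements call. Assumes the vertex data has already been
    // uploaded to a bound VBO with the position attribute enabled.
    struct CylinderWireframe {
        var vertices: [GLfloat] = []   // x, y, z per vertex
        var indices: [GLushort] = []

        init(segments: Int = 64, rings: Int = 8, radius: Float = 1, height: Float = 1) {
            // One circle of vertices per horizontal line up the wall.
            for r in 0...rings {
                let y = height * (Float(r) / Float(rings) - 0.5)
                for s in 0..<segments {
                    let a = 2 * Float.pi * Float(s) / Float(segments)
                    vertices += [radius * cosf(a), y, radius * sinf(a)]
                }
            }
            // Index horizontal and vertical segments as GL_LINES.
            for r in 0...rings {
                for s in 0..<segments {
                    let a = GLushort(r * segments + s)
                    let b = GLushort(r * segments + (s + 1) % segments)
                    indices += [a, b]                                              // around the ring
                    if r < rings { indices += [a, GLushort((r + 1) * segments + s)] } // up the wall
                }
            }
        }

        func draw(mvpUniform: GLint, yaw: Float, aspect: Float) {
            // Projection squeezes the 3D scene into clip space (-1..1); the view
            // matrix is where the geometry gets spun about the y-axis.
            let projection = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65), aspect, 0.1, 100)
            var view = GLKMatrix4MakeLookAt(0, 0, 0,   0, 0, -1,   0, 1, 0)   // camera at the cylinder's center
            view = GLKMatrix4RotateY(view, yaw)
            var mvp = GLKMatrix4Multiply(projection, view)

            // Re-upload the combined matrix, then draw every line in one call.
            withUnsafePointer(to: &mvp.m) {
                $0.withMemoryRebound(to: GLfloat.self, capacity: 16) {
                    glUniformMatrix4fv(mvpUniform, 1, GLboolean(GL_FALSE), $0)
                }
            }
            indices.withUnsafeBytes {
                glDrawElements(GLenum(GL_LINES), GLsizei(indices.count),
                               GLenum(GL_UNSIGNED_SHORT), $0.baseAddress)
            }
        }
    }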
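And a sketch of hooking the rotation up to device motion. CMMotionManager is the usual route; using the attitude's yaw as the rotation angle is my assumption here, and you may need a different axis depending on device orientation:

    import CoreMotion

    // Feed device motion into the rotation used by the draw code above. Each new
    // sample changes the view matrix, which must be remultiplied with the
    // projection and re-uploaded to the shader before the next frame is drawn.
    let motionManager = CMMotionManager()
    var yaw: Float = 0   // read by the renderer every frame

    motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
    motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let attitude = motion?.attitude else { return }
        yaw = Float(attitude.yaw)
    }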

(This is when it starts to get hard, lol)

  • Lay your AVCaptureVideoPreviewLayer (the AVLayer) over your EAGLView's layer, and make it smaller, like in the example (there's a sketch after this list).

  • Next, you will need to perform raycasting to figure out where the AVLayer intersects the cylinder, using the MVP matrix. Well, actually it's slightly more complicated: you have to unproject the camera image into view space and then map it onto a texture for the cylinder.
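For the overlay bullet, something along these lines should do, assuming a GLKView (a modern stand-in for the old EAGLView) and a running AVCaptureSession set up elsewhere; the 40% sizing is just an illustrative guess at "smaller like in the example":

    import AVFoundation
    import GLKit

    // Lay the camera preview over the GL view, smaller than full screen, so the
    // cylinder stays visible around it.
    func addPreview(to glView: GLKView, session: AVCaptureSession) {
        let preview = AVCaptureVideoPreviewLayer(session: session)
        preview.videoGravity = .resizeAspectFill
        let w = glView.bounds.width * 0.4
        let h = glView.bounds.height * 0.4
        preview.frame = CGRect(x: (glView.bounds.width - w) / 2,
                               y: (glView.bounds.height - h) / 2,
                               width: w, height: h)
        glView.layer.addSublayer(preview)
    }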

Let me start over.

If you were to draw a line heading out from the view camera toward the AVLayer, it would intersect the cylinder at different points. Those intersection points are where you would stitch the image onto the cylinder. You know the cylinder's geometry (or its equation), you know the view's rotation, and you know the size and position of the AVLayer and the projection matrix.
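A minimal sketch of that intersection test, assuming the cylinder is centered on the y-axis and the ray (origin plus direction) has already been unprojected from a point on the AVLayer with the inverse MVP matrix (not shown); all names here are illustrative:

    import Foundation

    // Intersect a ray with the infinite cylinder x² + z² = r², then turn the hit
    // point into the U,V coordinate where that camera pixel lands on the texture.
    func intersectCylinder(origin o: SIMD3<Float>, direction d: SIMD3<Float>,
                           radius: Float) -> SIMD3<Float>? {
        // The y component is ignored: only the distance from the axis matters.
        let a = d.x * d.x + d.z * d.z
        let b = 2 * (o.x * d.x + o.z * d.z)
        let c = o.x * o.x + o.z * o.z - radius * radius
        let disc = b * b - 4 * a * c
        guard a > 0, disc >= 0 else { return nil }
        // The camera sits inside the cylinder, so the larger root is the wall in front.
        let t = (-b + disc.squareRoot()) / (2 * a)
        guard t > 0 else { return nil }
        return o + t * d
    }

    func textureCoordinate(for hit: SIMD3<Float>, height: Float) -> SIMD2<Float> {
        let u = (atan2f(hit.z, hit.x) + Float.pi) / (2 * Float.pi)   // angle around the axis
        let v = hit.y / height + 0.5                                 // fraction of the wall's height
        return SIMD2<Float>(u, v)
    }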

The problem is that this is slow. So maybe you think you can get away with projecting backward from the cylinder to the camera, mapping the cylinder's UV coordinates to their intersection points on the AVLayer. However, this causes a projection problem: the image looks skewed and strangely stretched, because you are linearly interpolating a non-linear projection.

Your next thought is to try the slow way and project every camera pixel onto the cylinder's texture. This looks way better, but now there are holes: neighboring camera pixels don't always land on neighboring texels, so some texels never get written.

Finally, you realize you can combine both methods: project backward from each texel of the cylinder's texture to the view and 'read' the intersecting pixel of the AVLayer. Also, because you are writing to a texture, you decide to use RTT (render to texture) and let the GPU do the heavy lifting (much faster). The AVLayer is now just a texture that you are rendering into the cylinder's texture.
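A rough sketch of the RTT setup, assuming an EAGL context is already current; the parameter choices are placeholders (note that GL_REPEAT in ES 2.0 needs power-of-two texture dimensions):

    import GLKit

    // Create the cylinder's target texture and a framebuffer that renders into it.
    func makeCylinderTarget(width: GLsizei, height: GLsizei) -> (fbo: GLuint, texture: GLuint) {
        var texture: GLuint = 0
        glGenTextures(1, &texture)
        glBindTexture(GLenum(GL_TEXTURE_2D), texture)
        glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, width, height, 0,
                     GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GL_REPEAT)       // wraps at the seam
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GL_CLAMP_TO_EDGE)

        var fbo: GLuint = 0
        glGenFramebuffers(1, &fbo)
        glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo)
        glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0),
                               GLenum(GL_TEXTURE_2D), texture, 0)
        return (fbo, texture)
    }

    // Per frame: bind `fbo`, draw the current camera frame (itself a texture)
    // through your projection shader, then bind the default framebuffer and draw
    // the cylinder with `texture` applied.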

The bad news is that while there is a projection you can use in your fragment shader to map the flat AVLayer onto the cylindrical walls, I don't know off-hand what it is. Also, you will probably need to render twice for anything that crosses the seam.

The cylinder has a transparent texture that you render into. Just add U,V coordinates to your cylinder based on the resolution of the 'final' image. Draw the cylinder twice: once textured (as quads) and again (as lines), as sketched below.
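To make the two-pass draw concrete (assuming the same vertex layout as the earlier wireframe sketch, with a U,V pair added per vertex: u is the fraction of the way around the circle, v is the fraction of the wall's height; the index arrays are hypothetical client-side lists):

    import GLKit

    // Draw the same cylinder twice: once textured with the render-target texture,
    // once as the wireframe on top.
    func drawCylinder(cylinderTexture: GLuint,
                      quadIndices: [GLushort], lineIndices: [GLushort]) {
        glBindTexture(GLenum(GL_TEXTURE_2D), cylinderTexture)
        quadIndices.withUnsafeBytes {
            glDrawElements(GLenum(GL_TRIANGLES), GLsizei(quadIndices.count),
                           GLenum(GL_UNSIGNED_SHORT), $0.baseAddress)
        }
        lineIndices.withUnsafeBytes {
            glDrawElements(GLenum(GL_LINES), GLsizei(lineIndices.count),
                           GLenum(GL_UNSIGNED_SHORT), $0.baseAddress)
        }
    }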

Look in the NeHe tutorials for FBO render targets (RTT).

Also, it's possible to render to the cylinder's texture without OpenGL, but you would need to do it on a separate thread and then re-upload the cylinder's texture. This lets you do the projection math and scan the texture yourself, and you can use the modulus operator to wrap scanlines automatically so you don't have to handle the seam twice (sketch below).
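A sketch of that off-GPU variant, assuming an RGBA byte buffer for the cylinder texture and some function of your own (not shown) that has already projected the current camera frame into per-column strips of pixels:

    import GLKit

    // Write camera columns into the cylinder's pixel buffer, using % to wrap past
    // the seam, then re-upload the texture. In practice the pixel loop can run on
    // a background thread; the glTexSubImage2D call must happen with the EAGL
    // context current.
    func updateCylinderTexture(texture: GLuint, pixels: inout [UInt8],
                               texWidth: Int, texHeight: Int,
                               startColumn: Int, cameraColumns: [[UInt8]]) {
        for (i, column) in cameraColumns.enumerated() {
            // The modulus wraps writes past the right edge back to column 0,
            // so the seam never needs a second pass.
            let x = (startColumn + i) % texWidth
            for y in 0..<min(texHeight, column.count / 4) {
                let src = y * 4
                let dst = (y * texWidth + x) * 4
                pixels[dst..<dst + 4] = column[src..<src + 4]
            }
        }
        glBindTexture(GLenum(GL_TEXTURE_2D), texture)
        glTexSubImage2D(GLenum(GL_TEXTURE_2D), 0, 0, 0, GLsizei(texWidth), GLsizei(texHeight),
                        GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), pixels)
    }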

Disclaimer: this is, off the top of my head, how I would satisfy your requirements. Personally, I would look on GitHub for a panoramic app and start hacking there to add in the other stuff (like the cylinder). YMMV

Upvotes: 1
