I want to be able to click the touch screen and use the point touched as the starting coordinate for a ray to be used for picking.
How do I convert the point returned from touching the screen into something I can use in the GL world coordinates?
A search brings up lots of confusing possibilities, including the use of gluUnProject with lots of reports about whether it is supported and how to port it.
Can someone lay it out straight for me please?
I'm using Objective-C and Xcode, and I'm compiling for iPhone.
Step 0: Get gluUnProject:
The reports of needing it are true. That function does all the heavy lifting for you. At one point the MESA project had an implementation that worked almost perfectly on iOS without modifications; I'm not sure if that's still available. Barring that, you'll just have to do some research on it and either roll your own or port someone else's. It's a bit heavy on the linear algebra, so good luck.
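To give a feel for what a port involves, here is the core of the unprojection math in plain C. This is a sketch, not the full MESA implementation: it assumes you have already computed the inverse of (projection × modelview) yourself (the real gluUnProject computes that inverse internally, which is where most of the linear algebra lives).

```c
/* Core of the unprojection math, assuming the caller supplies
 * invPM = inverse(projection * modelview), column-major as OpenGL
 * stores matrices. Returns 0 on failure (w == 0), 1 on success. */
static int unprojectWithInverse(float winX, float winY, float winZ,
                                const float invPM[16],
                                const int viewport[4],  /* x, y, width, height */
                                float out[3])
{
    /* Window coordinates -> normalized device coordinates in [-1, 1]. */
    float ndc[4] = {
        (winX - viewport[0]) / viewport[2] * 2.0f - 1.0f,
        (winY - viewport[1]) / viewport[3] * 2.0f - 1.0f,
        winZ * 2.0f - 1.0f,
        1.0f
    };

    /* Multiply by the inverse matrix (column-major: m[col*4 + row]). */
    float obj[4];
    for (int row = 0; row < 4; ++row) {
        obj[row] = invPM[0*4 + row] * ndc[0] + invPM[1*4 + row] * ndc[1]
                 + invPM[2*4 + row] * ndc[2] + invPM[3*4 + row] * ndc[3];
    }
    if (obj[3] == 0.0f) return 0;

    /* Perspective divide back to 3D object/world coordinates. */
    out[0] = obj[0] / obj[3];
    out[1] = obj[1] / obj[3];
    out[2] = obj[2] / obj[3];
    return 1;
}
```

A port like MESA's does the same thing but also inverts the matrix for you; if you roll your own, you'll need a 4x4 matrix inverse routine as well.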
Step 1: Convert from UIKit coordinates to OpenGL coordinates:
This normally involves two things:
Flip the Y-coordinate, because UIKit likes its origins in the top left, whereas OpenGL likes its origins in the bottom left.
Convert from "Screen Units" to pixels. This keeps things consistent across standard and retina display devices.
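Both conversions together look something like this (a sketch in plain C; in your view code, viewHeightPoints would come from the view's bounds.size.height and contentScale from its contentScaleFactor, which is 2.0 on retina devices):

```c
/* Convert a touch location from UIKit's coordinate system (origin at
 * top-left, units in points) to OpenGL's (origin at bottom-left, units
 * in pixels). */
typedef struct { float x, y; } GLPoint;

static GLPoint touchToGL(float touchX, float touchY,
                         float viewHeightPoints, float contentScale)
{
    GLPoint p;
    p.x = touchX * contentScale;                       /* points -> pixels */
    p.y = (viewHeightPoints - touchY) * contentScale;  /* flip Y, then scale */
    return p;
}
```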
Step 2: Use gluUnProject on your converted coordinates:
gluUnProject() technically converts a 3D point in window space to a 3D point in world space. So, to get a ray, you'll need to call it twice: once for the near clipping plane and once for the far clipping plane. That will give you two points, from which you can get a ray. To call gluUnProject(), you'll need access to your 2D view coordinate, the current OpenGL viewport, the current OpenGL modelview matrix, and the current OpenGL projection matrix. Pseudocode: