Hank

Reputation: 2646

Zooming in map with OpenGL-ES2 Android

I have implemented pinch zoom with a ScaleGestureDetector, which calls into the renderer below. The renderer zooms by adjusting the projection matrix and, when panning, scales the eye movement by the current zoom factor.

public class vboCustomGLRenderer  implements GLSurfaceView.Renderer {

    // Store the model matrix. This matrix is used to move models from object space (where each model can be thought
    // of being located at the center of the universe) to world space.

    private float[] mModelMatrix = new float[16];


    // Store the view matrix. This can be thought of as our camera. This matrix transforms world space to eye space;
    // it positions things relative to our eye.

    private float[] mViewMatrix = new float[16];

    // Store the projection matrix. This is used to project the scene onto a 2D viewport.
    private float[] mProjectionMatrix = new float[16];

    // Allocate storage for the final combined matrix. This will be passed into the shader program.
    private float[] mMVPMatrix = new float[16];

    // This will be used to pass in the transformation matrix.
    private int mMVPMatrixHandle;

    // This will be used to pass in model position information.
    private int mPositionHandle;

    // This will be used to pass in model color information.
    private int mColorUniformLocation;

    // How many bytes per float.
    private final int mBytesPerFloat = 4;   

    // Offset of the position data.
    private final int mPositionOffset = 0;

    // Size of the position data in elements.
    private final int mPositionDataSize = 3;

    // Stride of the position data, in bytes (elements per vertex * bytes per float).
    private final int mPositionFloatStrideBytes = mPositionDataSize * mBytesPerFloat;


    // Position the eye at the center of the map extents (MBR).
    public double eyeX = default_settings.mbrMinX + ((default_settings.mbrMaxX - default_settings.mbrMinX)/2);
    public double eyeY = default_settings.mbrMinY + ((default_settings.mbrMaxY - default_settings.mbrMinY)/2);

    // Height of the eye above the map plane.
    public float eyeZ = 1.5f;

    // We are looking toward the distance
    public double lookX = eyeX;
    public double lookY = eyeY;
    public float lookZ = 0.0f;

    // Set our up vector. This is where our head would be pointing were we holding the camera.
    public float upX = 0.0f;
    public float upY = 1.0f;
    public float upZ = 0.0f;

    public double mScaleFactor = 1;
    public double mScrnVsMapScaleFactor = 0;

    public vboCustomGLRenderer() {}

    public void setEye(double x, double y){

        eyeX -= (x / screen_vs_map_horz_ratio);
        lookX = eyeX;
        eyeY += (y / screen_vs_map_vert_ratio);
        lookY = eyeY;

        // Set the camera position (View matrix)
        Matrix.setLookAtM(mViewMatrix, 0, (float)eyeX, (float)eyeY, eyeZ, (float)lookX, (float)lookY, lookZ, upX, upY, upZ);

    }

    public void setScaleFactor(float scaleFactor, float gdx, float gdy){

        mScaleFactor *= scaleFactor;

        mRight = mRight / scaleFactor;
        mLeft = -mRight;
        mTop = mTop / scaleFactor;
        mBottom = -mTop;

        // Calculate the shift in the eye position when zooming on a particular spot:
        // take the distance between the gesture (zoom) point and the screen centre and
        // work out the new eye point from the fraction of that distance left after scaling.
        double eyeXShift = (((mWidth  / 2) - gdx) - (((mWidth  / 2) - gdx) / scaleFactor));
        double eyeYShift = (((mHeight / 2) - gdy) - (((mHeight / 2) - gdy) / scaleFactor));

        screen_vs_map_horz_ratio = (mWidth/(mRight-mLeft));
        screen_vs_map_vert_ratio = (mHeight/(mTop-mBottom));

        eyeX -= (eyeXShift / screen_vs_map_horz_ratio);
        lookX = eyeX;
        eyeY += (eyeYShift / screen_vs_map_vert_ratio);
        lookY = eyeY;

        // Set the scale (Projection matrix)
        Matrix.frustumM(mProjectionMatrix, 0, (float)mLeft, (float)mRight, (float)mBottom, (float)mTop, near, far);
    }

    @Override
    public void onSurfaceCreated(GL10 unused, EGLConfig config) {


        // Set the background frame color
        //White
        GLES20.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);

        // Set the view matrix. This matrix can be said to represent the camera position.
        // NOTE: In OpenGL 1, a ModelView matrix is used, which is a combination of a model and
        // view matrix. In OpenGL 2, we can keep track of these matrices separately if we choose.
        Matrix.setLookAtM(mViewMatrix, 0, (float)eyeX, (float)eyeY, eyeZ, (float)lookX, (float)lookY, lookZ, upX, upY, upZ);

        final String vertexShader =
            "uniform mat4 u_MVPMatrix;      \n"     // A constant representing the combined model/view/projection matrix.

          + "attribute vec4 a_Position;     \n"     // Per-vertex position information we will pass in.
          + "attribute vec4 a_Color;        \n"     // Per-vertex color information we will pass in.              

          + "varying vec4 v_Color;          \n"     // This will be passed into the fragment shader.

          + "void main()                    \n"     // The entry point for our vertex shader.
          + "{                              \n"
          + "   v_Color = a_Color;          \n"     // Pass the color through to the fragment shader. 
                                                    // It will be interpolated across the triangle.
          + "   gl_Position = u_MVPMatrix   \n"     // gl_Position is a special variable used to store the final position.
          + "               * a_Position;   \n"     // Multiply the vertex by the matrix to get the final point in                                                                   
          + "}                              \n";    // normalized screen coordinates.

        final String fragmentShader =
                "precision mediump float;       \n"     // Set the default precision to medium. We don't need as high of a 
                                                        // precision in the fragment shader.                
              + "uniform vec4 u_Color;          \n"     // This is the color from the vertex shader interpolated across the 
                                                        // triangle per fragment.             
              + "void main()                    \n"     // The entry point for our fragment shader.
              + "{                              \n"
              + "   gl_FragColor = u_Color;     \n"     // Pass the color directly through the pipeline.          
              + "}                              \n";                                                

        // Load in the vertex shader.
        int vertexShaderHandle = GLES20.glCreateShader(GLES20.GL_VERTEX_SHADER);

        if (vertexShaderHandle != 0) 
        {
            // Pass in the shader source.
            GLES20.glShaderSource(vertexShaderHandle, vertexShader);

            // Compile the shader.
            GLES20.glCompileShader(vertexShaderHandle);

            // Get the compilation status.
            final int[] compileStatus = new int[1];
            GLES20.glGetShaderiv(vertexShaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);

            // If the compilation failed, delete the shader.
            if (compileStatus[0] == 0) 
            {               
                GLES20.glDeleteShader(vertexShaderHandle);
                vertexShaderHandle = 0;
            }
        }

        if (vertexShaderHandle == 0)
        {
            throw new RuntimeException("Error creating vertex shader.");
        }

        // Load in the fragment shader.
        int fragmentShaderHandle = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);

        if (fragmentShaderHandle != 0) 
        {
            // Pass in the shader source.
            GLES20.glShaderSource(fragmentShaderHandle, fragmentShader);

            // Compile the shader.
            GLES20.glCompileShader(fragmentShaderHandle);

            // Get the compilation status.
            final int[] compileStatus = new int[1];
            GLES20.glGetShaderiv(fragmentShaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);

            // If the compilation failed, delete the shader.
            if (compileStatus[0] == 0) 
            {               
                GLES20.glDeleteShader(fragmentShaderHandle);
                fragmentShaderHandle = 0;
            }
        }

        if (fragmentShaderHandle == 0)
        {
            throw new RuntimeException("Error creating fragment shader.");
        }

        // Create a program object and store the handle to it.
        int programHandle = GLES20.glCreateProgram();

        if (programHandle != 0) 
        {
            // Bind the vertex shader to the program.
            GLES20.glAttachShader(programHandle, vertexShaderHandle);           

            // Bind the fragment shader to the program.
            GLES20.glAttachShader(programHandle, fragmentShaderHandle);

            // Bind attributes
            GLES20.glBindAttribLocation(programHandle, 0, "a_Position");
            GLES20.glBindAttribLocation(programHandle, 1, "a_Color");

            // Link the two shaders together into a program.
            GLES20.glLinkProgram(programHandle);

            // Get the link status.
            final int[] linkStatus = new int[1];
            GLES20.glGetProgramiv(programHandle, GLES20.GL_LINK_STATUS, linkStatus, 0);

            // If the link failed, delete the program.
            if (linkStatus[0] == 0) 
            {               
                GLES20.glDeleteProgram(programHandle);
                programHandle = 0;
            }
        }

        if (programHandle == 0)
        {
            throw new RuntimeException("Error creating program.");
        }

        // Set program handles. These will later be used to pass in values to the program.
        mMVPMatrixHandle = GLES20.glGetUniformLocation(programHandle, "u_MVPMatrix");
        mPositionHandle = GLES20.glGetAttribLocation(programHandle, "a_Position");
        mColorUniformLocation = GLES20.glGetUniformLocation(programHandle, "u_Color");

        // Tell OpenGL to use this program when rendering.
        GLES20.glUseProgram(programHandle);
    }

    static double mWidth = 0;
    static double mHeight = 0;
    static double mLeft = 0;
    static double mRight = 0;
    static double mTop = 0;
    static double mBottom = 0;
    static double mRatio = 0;
    double screen_width_height_ratio;
    double screen_height_width_ratio;
    final float near = 1.5f;
    final float far = 10.0f;

    double screen_vs_map_horz_ratio = 0;
    double screen_vs_map_vert_ratio = 0;

    @Override
    public void onSurfaceChanged(GL10 unused, int width, int height) {

        // Adjust the viewport based on geometry changes,
        // such as screen rotation
        // Set the OpenGL viewport to the same size as the surface.
        GLES20.glViewport(0, 0, width, height);

        screen_width_height_ratio = (double) width / height;
        screen_height_width_ratio = (double) height / width;

        //Initialize
        if (mRatio == 0){
            mWidth = (double) width;
            mHeight = (double) height;

            //map height to width ratio
            double map_extents_width = default_settings.mbrMaxX - default_settings.mbrMinX;
            double map_extents_height = default_settings.mbrMaxY - default_settings.mbrMinY;
            double map_width_height_ratio = map_extents_width/map_extents_height;
            if (screen_width_height_ratio > map_width_height_ratio){
                mRight = (screen_width_height_ratio * map_extents_height)/2;
                mLeft = -mRight;
                mTop = map_extents_height/2;
                mBottom = -mTop;
            }
            else{
                mRight = map_extents_width/2;
                mLeft = -mRight;
                mTop = (screen_height_width_ratio * map_extents_width)/2;
                mBottom = -mTop;
            }

            mRatio = screen_width_height_ratio;
        }

        if (screen_width_height_ratio != mRatio){
            final double wRatio = width/mWidth;
            final double oldWidth = mRight - mLeft;
            final double newWidth = wRatio * oldWidth;
            final double widthDiff = (newWidth - oldWidth)/2;
            mLeft = mLeft - widthDiff;
            mRight = mRight + widthDiff;

            final double hRatio = height/mHeight;
            final double oldHeight = mTop - mBottom;
            final double newHeight = hRatio * oldHeight;
            final double heightDiff = (newHeight - oldHeight)/2;
            mBottom = mBottom - heightDiff;
            mTop = mTop + heightDiff;

            mWidth = (double) width;
            mHeight = (double) height;

            mRatio = screen_width_height_ratio;
        }

        screen_vs_map_horz_ratio = (mWidth/(mRight-mLeft));
        screen_vs_map_vert_ratio = (mHeight/(mTop-mBottom));

        Matrix.frustumM(mProjectionMatrix, 0, (float)mLeft, (float)mRight, (float)mBottom, (float)mTop, near, far);

    }


    ListIterator<mapLayer> orgNonAssetCatLayersList_it;
    ListIterator<FloatBuffer> mapLayerObjectList_it;
    ListIterator<Byte> mapLayerObjectTypeList_it;
    mapLayer MapLayer;

    @Override
    public void onDrawFrame(GL10 unused) {

        GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);

        drawPreset();

        orgNonAssetCatLayersList_it = default_settings.orgNonAssetCatMappableLayers.listIterator();
        while (orgNonAssetCatLayersList_it.hasNext()) {
            MapLayer = orgNonAssetCatLayersList_it.next();
            if (MapLayer.BatchedPointVBO != null){
            }
            if (MapLayer.BatchedLineVBO != null){
                drawLineString(MapLayer.BatchedLineVBO, MapLayer.lineStringObjColor);
            }
            if (MapLayer.BatchedPolygonVBO != null){
                drawPolygon(MapLayer.BatchedPolygonVBO, MapLayer.polygonObjColor);
            }
        }
    }

    private void drawPreset()
    {
        Matrix.setIdentityM(mModelMatrix, 0);

        // This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
        // (which currently contains model * view).
        Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);

        // This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
        // (which now contains model * view * projection).
        Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);

        GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);
    }

    private void drawLineString(final FloatBuffer geometryBuffer, final float[] colorArray)
    {
        // Pass in the position information
        geometryBuffer.position(mPositionOffset);
        GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, mPositionFloatStrideBytes, geometryBuffer);

        GLES20.glEnableVertexAttribArray(mPositionHandle);

        GLES20.glUniform4f(mColorUniformLocation, colorArray[0], colorArray[1], colorArray[2], 1f);

        GLES20.glLineWidth(2.0f);
        GLES20.glDrawArrays(GLES20.GL_LINES, 0, geometryBuffer.capacity()/mPositionDataSize);
    }

    private void drawPolygon(final FloatBuffer geometryBuffer, final float[] colorArray)
    {
        // Pass in the position information
        geometryBuffer.position(mPositionOffset);
        GLES20.glVertexAttribPointer(mPositionHandle, mPositionDataSize, GLES20.GL_FLOAT, false, mPositionFloatStrideBytes, geometryBuffer);

        GLES20.glEnableVertexAttribArray(mPositionHandle);

        GLES20.glUniform4f(mColorUniformLocation, colorArray[0], colorArray[1], colorArray[2], 1f);

        GLES20.glLineWidth(1.0f);
        GLES20.glDrawArrays(GLES20.GL_LINES, 0, geometryBuffer.capacity()/mPositionDataSize);
    }
}
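
For completeness, the gesture-handling side looks roughly like this. This is a simplified sketch only; the view class, field names, and pan sign conventions here are illustrative, not the actual code:

    import android.content.Context;
    import android.opengl.GLSurfaceView;
    import android.view.MotionEvent;
    import android.view.ScaleGestureDetector;

    public class MapGLSurfaceView extends GLSurfaceView {

        private final vboCustomGLRenderer mRenderer;
        private final ScaleGestureDetector mScaleDetector;
        private float mLastX, mLastY;

        public MapGLSurfaceView(Context context) {
            super(context);
            setEGLContextClientVersion(2);
            mRenderer = new vboCustomGLRenderer();
            setRenderer(mRenderer);

            mScaleDetector = new ScaleGestureDetector(context,
                new ScaleGestureDetector.SimpleOnScaleGestureListener() {
                    @Override
                    public boolean onScale(ScaleGestureDetector detector) {
                        // Zoom about the gesture's focus point.
                        mRenderer.setScaleFactor(detector.getScaleFactor(),
                                detector.getFocusX(), detector.getFocusY());
                        requestRender();
                        return true;
                    }
                });
        }

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            mScaleDetector.onTouchEvent(event);

            if (event.getAction() == MotionEvent.ACTION_MOVE && !mScaleDetector.isInProgress()) {
                // Pan by the drag distance in screen pixels; the renderer converts it to map units.
                mRenderer.setEye(event.getX() - mLastX, event.getY() - mLastY);
                requestRender();
            }
            mLastX = event.getX();
            mLastY = event.getY();
            return true;
        }
    }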

This works very well up until a certain zoom level, at which point panning starts to jump. After testing I found that the float value of the eye position cannot cope with such a small shift in position. I keep my x and y eye values in doubles so the shifted positions keep being calculated, but when calling setLookAtM() I have to convert them to floats.
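
As an aside, the loss of resolution is easy to demonstrate (the values below are illustrative, not taken from the app): near x = 152.65 a 32-bit float has a spacing (ulp) of roughly 1.5e-5, so a pan step of 1e-6 map units vanishes when the double eye position is narrowed to a float:

    public class FloatPrecisionDemo {
        public static void main(String[] args) {
            double eyeXDouble = 152.65;
            float  eyeXFloat  = 152.65f;
            double shift = 0.000001;   // a tiny pan step at a high zoom level

            System.out.println(eyeXDouble + shift > eyeXDouble);         // true  - the double resolves the shift
            System.out.println((float)(eyeXFloat + shift) > eyeXFloat);  // false - the float collapses it
            System.out.println(Math.ulp(eyeXFloat));                     // ~1.5E-5, the smallest representable step
        }
    }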

So I need to change the way the zoom works. I was thinking that instead of zooming with the projection matrix, I could scale the model larger or smaller. The setScaleFactor() function in my code would then change, removing the projection and eye shifting. There is a Matrix.scaleM(m, offset, x, y, z) function, but I am unsure how or where to apply it. I could use some suggestions on how to accomplish this.

[Edit] 24/7/2013: I tried altering setScaleFactor() like so:

public void setScaleFactor(float scaleFactor, float gdx, float gdy){
    mScaleFactor *= scaleFactor;
}

and in drawPreset():

private void drawPreset()
{
    Matrix.setIdentityM(mModelMatrix, 0);

        //*****Added scaleM
    Matrix.scaleM(mModelMatrix, 0, (float)mScaleFactor, (float)mScaleFactor, 1.0f);

    // This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
    // (which currently contains model * view).
    Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);

    // This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
    // (which now contains model * view * projection).
    Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);

    GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);
}

Now, as soon as I do a zoom, the image disappears from the screen. Actually, I found it just off to the right-hand side; I could still pan over to it.

I am still not sure what I should be scaling to zoom: the model, the view, or the model-view?

Upvotes: 1

Views: 1392

Answers (1)

Hank

Reputation: 2646

I have found that if you bring the center of your model back to the origin (0, 0), you can extend your zoom capability considerably. My x-coordinate data, for example, lay between 152.6 and 152.7.

I brought it back to the origin with an offset of 152.65, which needs to be applied to the data before loading it into the FloatBuffer.

The width of the data then becomes 0.1, or 0.05 on each side of the origin, leaving far more floating-point precision for the trailing digits of the value.
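
A minimal sketch of what I mean; the class, method, and offset values here are illustrative, not the project's actual code. Note that the same offsets also need to be subtracted from the MBR values used to position the eye, so the camera still lines up with the shifted geometry:

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.FloatBuffer;

    public final class VertexBufferUtil {
        // Assumed centers of the data extents; subtracting them re-centers the model on the origin.
        private static final double OFFSET_X = 152.65;
        private static final double OFFSET_Y = -26.5;

        // Subtract the offsets in double precision, then narrow to float. The values stored in
        // the buffer are now small (about +/-0.05), so the float keeps its precision for the
        // fractional part instead of spending it on the large whole-number part.
        public static FloatBuffer buildVertexBuffer(double[] xy) {
            FloatBuffer fb = ByteBuffer.allocateDirect((xy.length / 2) * 3 * 4)
                                       .order(ByteOrder.nativeOrder())
                                       .asFloatBuffer();
            for (int i = 0; i < xy.length; i += 2) {
                fb.put((float) (xy[i]     - OFFSET_X));  // x
                fb.put((float) (xy[i + 1] - OFFSET_Y));  // y
                fb.put(0.0f);                            // z
            }
            fb.position(0);
            return fb;
        }
    }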

Upvotes: 1
