Reputation: 2728
I have the following ColorMatrixFilter. But I want to use it as a mask for Subtract-Blend mode, instead of using it directly. How do I go about achieving this?
ColorMatrix:
colorMatrix[
0.393, 0.7689999, 0.18899999, 0, 0,
0.349, 0.6859999, 0.16799999, 0, 0,
0.272, 0.5339999, 0.13099999, 0, 0,
0, 0, 0, 1, 0
];
Upvotes: 56
Views: 2957
Reputation: 8978
There is no subtract color blending out of the box in the Android SDK, but you can still make it work with the OpenGL rendering API. Here you can find the implementation of such a solution, encapsulated in the BlendingFilterUtil class, which can be used like this:
BlendingFilterUtil.subtractMatrixColorFilter(bitmap, new float[]{
0.393f, 0.7689999f, 0.18899999f, 0, 0,
0.349f, 0.6859999f, 0.16799999f, 0, 0,
0.272f, 0.5339999f, 0.13099999f, 0, 0,
0, 0, 0, 1, 0
}, activity, callback);
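For completeness, a call with an inline callback might look like this (a minimal sketch; the Callback interface is defined further down in this answer, and the imageView target is just a hypothetical consumer of the result):
BlendingFilterUtil.subtractMatrixColorFilter(bitmap, new float[]{
        0.393f, 0.7689999f, 0.18899999f, 0, 0,
        0.349f, 0.6859999f, 0.16799999f, 0, 0,
        0.272f, 0.5339999f, 0.13099999f, 0, 0,
        0, 0, 0, 1, 0
}, activity, new BlendingFilterUtil.Callback() {
    @Override
    public void onSuccess(@NonNull Bitmap blendedImage) {
        imageView.setImageBitmap(blendedImage); // hypothetical consumer of the result
    }

    @Override
    public void onFailure(@Nullable Exception error) {
        Log.e("BlendingFilter", "Failed to apply the filter", error);
    }
});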
First of all, "using a color filter for Subtract-Blend mode" is a very vague requirement. To understand the problem better, let's define two distinct sets of features: color blending and color filtering in Android.
Color blending is quite a well-known thing among designers and people working with computer graphics. It commonly means blending two colors using their channel values (known as Red, Green, Blue and Alpha) and a blending function. The blending functions are referred to as Blend Modes, and one of these modes is called Subtract. The Subtract Blend mode uses the following formula to get its final color:

Cout = max(Cdst - Csrc, 0)

Where Cout is the resulting color, Cdst is the "current" color and Csrc is the color value used to change the original color. If the difference is negative for any channel, 0 is used instead. Roughly speaking, with this mode you can make the destination color darker than the original, since the channels get closer to zero as a result of the function. Here you can find a very illustrative example of this mode in action:
[Images: Destination, Source, and Color Blending Output]
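To make the formula concrete, here is a minimal per-channel sketch of Subtract blending for 8-bit colors (my own illustration using android.graphics.Color, not part of the utility below):
static int subtractBlend(int dstColor, int srcColor) {
    // Per-channel subtraction, clamped at 0; alpha is taken from the destination
    final int r = Math.max(Color.red(dstColor) - Color.red(srcColor), 0);
    final int g = Math.max(Color.green(dstColor) - Color.green(srcColor), 0);
    final int b = Math.max(Color.blue(dstColor) - Color.blue(srcColor), 0);
    return Color.argb(Color.alpha(dstColor), r, g, b);
}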
In the context of the Android SDK, color filtering is a super-set of operations that includes the color blending functions. The reference of ColorFilter subclasses gives comprehensive information on the color filtering options available in the SDK:

- PorterDuffColorFilter is essentially the Blend Modes discussed above;
- LightingColorFilter takes two arguments, one used as a factor and the other as an addition for the Red, Green and Blue channels (the Alpha channel remains untouched), so you can make an image look brighter (or darker, if the factor is between 0 and 1 or the addition is negative);
- ColorMatrixColorFilter takes an instance of ColorMatrix and uses it to calculate the final color as follows (quoting the ColorMatrix documentation):

A 4x5 matrix for transforming the color and alpha components of a Bitmap. The matrix can be passed as a single array, and is treated as follows:

[ a, b, c, d, e,
  f, g, h, i, j,
  k, l, m, n, o,
  p, q, r, s, t ]

When applied to a color [R, G, B, A], the resulting color is computed as:

R' = a*R + b*G + c*B + d*A + e;
G' = f*R + g*G + h*B + i*A + j;
B' = k*R + l*G + m*B + n*A + o;
A' = p*R + q*G + r*B + s*A + t;
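For reference, applying such a matrix with the stock SDK filter (no blending involved) is a one-liner on a Paint. A minimal sketch, assuming you are inside an onDraw with a canvas and a bitmap at hand:
final Paint paint = new Paint();
paint.setColorFilter(new ColorMatrixColorFilter(new float[]{
        0.393f, 0.7689999f, 0.18899999f, 0, 0,
        0.349f, 0.6859999f, 0.16799999f, 0, 0,
        0.272f, 0.5339999f, 0.13099999f, 0, 0,
        0, 0, 0, 1, 0
}));
canvas.drawBitmap(bitmap, 0, 0, paint);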
Now we know that the only filtering operation in the Android SDK that takes a ColorMatrix is ColorMatrixColorFilter. However, it has nothing to do with color blending: color blending is the result of mixing two colors, while ColorMatrixColorFilter just modifies the input color. Here is how one of the sample images looks after being filtered with the matrix from the question:

[Image: the sample image after applying the ColorMatrixColorFilter]

The only way of mixing these two concepts I can think of is to use the result of the ColorMatrixColorFilter as the argument of the subtract blending function (Csrc), so we end up with the following formula to implement:

Cout = max(Cdst - M(Cdst), 0)

where M(Cdst) stands for the destination color transformed by the color matrix.
The task is not supposed to be anything fancy: we could apply a ColorMatrixColorFilter and then a subsequent PorterDuffColorFilter with subtract mode, using the filtered result as the source color. However, if you take a closer look at the PorterDuff.Mode reference, you will notice that Android does not have the Subtract Blend mode in its facilities (the Android OS uses Google's Skia library underneath for canvas drawing, and for some reason it really lacks a Subtract mode), so we will have to do our subtraction another way.
Such a thing is comparatively simple in the OpenGL rendering API, but it will require us to deal with the challenges of setting up an OpenGL context so it allows us to draw what we need the way we need it.
Android already has GLSurfaceView, which sets up an OpenGL context under the hood, but it has to be in the view hierarchy in order to actually perform any rendering operations. My plan is to instantiate a GLSurfaceView, attach it to the application window, give it the image that we want to apply our effects to, and perform all the fancy stuff behind the scenes. After that we can take the resulting image, and the view can be silently removed.
GLSurfaceView
First, instantiate a GLSurfaceView
, set OpenGL API version and context configuration:
GLSurfaceView hostView = new GLSurfaceView(activityContext);
hostView.setEGLContextClientVersion(2);
hostView.setEGLConfigChooser(8, 8, 8, 8, 0, 0);
Now the view has to be added into the view hierarchy:
// The view should match the bitmap size, so the resulting image doesn't end up
// scaled (width == bitmap.getWidth(), height == bitmap.getHeight())
final WindowManager.LayoutParams layoutParams = new WindowManager.LayoutParams(width, height, TYPE_APPLICATION, 0, PixelFormat.OPAQUE);
hostView.setLayoutParams(layoutParams);
final WindowManager windowManager = (WindowManager) hostView.getContext().getSystemService(Context.WINDOW_SERVICE);
Objects.requireNonNull(windowManager).addView(hostView, layoutParams);
I just put it into the root window in order to make it available from any activity in the application. The width and height params of the layout should match the width and height of the Bitmap, so the resulting image doesn't end up with a different size.
GLSurfaceView draws nothing itself. This work is to be done by a Renderer. Here is how the initial implementation of the interface for the given problem looks:
class BlendingFilterRenderer implements GLSurfaceView.Renderer {
private final Bitmap mBitmap;
private final WeakReference<GLSurfaceView> mHostViewReference;
private final float[] mColorFilter;
private final BlendingFilterUtil.Callback mCallback;
private boolean mFinished = false;
BlendingFilterRenderer(@NonNull GLSurfaceView hostView, @NonNull Bitmap bitmap,
@NonNull float[] colorFilter,
@NonNull BlendingFilterUtil.Callback callback) throws IllegalArgumentException {
if (colorFilter.length != 4 * 5) {
throw new IllegalArgumentException("Color filter should be a 4 x 5 matrix");
}
mBitmap = bitmap;
mHostViewReference = new WeakReference<>(hostView);
mColorFilter = colorFilter;
mCallback = callback;
}
// ========================================== //
// GLSurfaceView.Renderer
// ========================================== //
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {}
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {}
@Override
public void onDrawFrame(GL10 gl) {}
}
- mBitmap - since rendering operations happen on a separate thread, the renderer has to retain the Bitmap argument until the OpenGL context is ready.
- mHostViewReference - a weak reference to the view object, needed in order to remove it from the window when the work is done.
- mColorFilter - a ColorMatrix object is not really required in this implementation, so a plain float[] Java array represents the color matrix.
- mCallback - the result will be delivered via a callback, which is defined as follows:

interface Callback {
    void onSuccess(@NonNull Bitmap blendedImage);
    void onFailure(@Nullable Exception error);
}

- mFinished - I'm not sure why exactly, but while playing with the Renderer object I found that it performs redundant rendering cycles; this flag prevents the program from doing anything when it's no longer needed. I also recommend setting the rendering mode of the GLSurfaceView object to RENDERMODE_WHEN_DIRTY to prevent 60-times-per-second drawing:

hostView.setRenderer(new BlendingFilterRenderer(hostView, image, filterValues, callback));
hostView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
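For orientation, here is a hedged sketch of how the public entry point could tie the pieces from this section together (the authoritative wiring lives in the gist linked at the end of this answer):
public static void subtractMatrixColorFilter(@NonNull Bitmap bitmap, @NonNull float[] filterValues,
                                             @NonNull Activity activity, @NonNull Callback callback) {
    final GLSurfaceView hostView = new GLSurfaceView(activity);
    hostView.setEGLContextClientVersion(2);
    hostView.setEGLConfigChooser(8, 8, 8, 8, 0, 0);
    hostView.setRenderer(new BlendingFilterRenderer(hostView, bitmap, filterValues, callback));
    hostView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
    // Attach to the window exactly as shown earlier, sized to the bitmap
    final WindowManager.LayoutParams layoutParams = new WindowManager.LayoutParams(
            bitmap.getWidth(), bitmap.getHeight(),
            WindowManager.LayoutParams.TYPE_APPLICATION, 0, PixelFormat.OPAQUE);
    hostView.setLayoutParams(layoutParams);
    final WindowManager windowManager =
            (WindowManager) activity.getSystemService(Context.WINDOW_SERVICE);
    Objects.requireNonNull(windowManager).addView(hostView, layoutParams);
}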
In order to draw a single texel, OpenGL first needs some surface to work on. For both drawing the image and building the canvas (the surface to draw on), we have to introduce a couple of shader programs (a Vertex and a Fragment shader, in OpenGL terms). The shaders are compiled and loaded via calls to the OpenGL API, so first we need to define a method (inside the BlendingFilterRenderer class) that takes shader source code, compiles it, and checks that compilation succeeded:
private int loadShader(int type, String shaderCode) throws GLException {
int reference = GLES20.glCreateShader(type);
GLES20.glShaderSource(reference, shaderCode);
GLES20.glCompileShader(reference);
int[] compileStatus = new int[1];
GLES20.glGetShaderiv(reference, GLES20.GL_COMPILE_STATUS, compileStatus, 0);
    if (compileStatus[0] != GLES20.GL_TRUE) {
        // Read the info log before deleting the shader, otherwise it is gone
        final String message = GLES20.glGetShaderInfoLog(reference);
        GLES20.glDeleteShader(reference);
        throw new GLException(compileStatus[0], message);
    }
return reference;
}
The first parameter of the method defines the shader type (Vertex or Fragment), and the second one contains the actual shader code as a String. Let's start with a very minimalistic vertex shader that just takes vertex coordinates (given as a normalized 2-dimensional vector) and injects them into the gl_Position variable (essentially the resulting value of the shader):
attribute vec2 aPosition;
void main() {
gl_Position = vec4(aPosition.x, aPosition.y, 0.0, 1.0);
}
The Fragment shader implementation just outputs a white color without any changes for now:
precision mediump float;
void main() {
gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
}
OpenGL ES 2 requires us to specify float precision explicitly, otherwise this program won't compile. This shader also writes to the global variable gl_FragColor, which defines the output color.
With the help of the previously defined loadShader method and the shaders' source code, we can now define another method in the BlendingFilterRenderer class that compiles and links both shaders into a program:
private int loadProgram() {
int fragmentShader = loadShader(GLES20.GL_FRAGMENT_SHADER, "precision mediump float;" +
"void main() {" +
" gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);" +
"}");
int vertexShader = loadShader(GLES20.GL_VERTEX_SHADER, "attribute vec2 aPosition;" +
"void main() {" +
" gl_Position = vec4(aPosition.x, aPosition.y, 0.0, 1.0);" +
"}");
int programReference = GLES20.glCreateProgram();
GLES20.glAttachShader(programReference, vertexShader);
GLES20.glAttachShader(programReference, fragmentShader);
GLES20.glLinkProgram(programReference);
return programReference;
}
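The snippet above skips link-status verification for brevity. A hedged sketch of such a check, mirroring the compile-status check in loadShader, could be inserted just before the return statement:
final int[] linkStatus = new int[1];
GLES20.glGetProgramiv(programReference, GLES20.GL_LINK_STATUS, linkStatus, 0);
if (linkStatus[0] != GLES20.GL_TRUE) {
    // Read the info log before deleting the program, otherwise it is gone
    final String message = GLES20.glGetProgramInfoLog(programReference);
    GLES20.glDeleteProgram(programReference);
    throw new GLException(linkStatus[0], message);
}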
Now that the program is ready, we can pass it some arguments. First, define a method in the BlendingFilterRenderer class that enables the attributes in the shaders:
private void enableVertexAttribute(int program, String attributeName, int size, int stride, int offset) {
    // Look up the attribute by name and describe how to read it from the bound array buffer
    final int attributeLocation = GLES20.glGetAttribLocation(program, attributeName);
    GLES20.glVertexAttribPointer(attributeLocation, size, GLES20.GL_FLOAT, false, stride, offset);
    GLES20.glEnableVertexAttribArray(attributeLocation);
}
In order to build a canvas, the whole viewport needs to be filled. That can be done with merely 4 vertices in the normalized device coordinate system (NDCS):
new float[] {
    -1,  1, // top left
    -1, -1, // bottom left
     1,  1, // top right
     1, -1, // bottom right
}
// Drawn as a GL_TRIANGLE_STRIP, these four vertices form two triangles
// that cover the whole viewport.
This array has to be loaded into an OpenGL array buffer in order to become accessible to the shaders:
private FloatBuffer convertToBuffer(float[] array) {
    // PrimitiveSizes.FLOAT is the size of a float in bytes (4)
    final ByteBuffer buffer = ByteBuffer.allocateDirect(array.length * PrimitiveSizes.FLOAT);
    final FloatBuffer output = buffer.order(ByteOrder.nativeOrder()).asFloatBuffer();
    output.put(array);
    output.position(0);
    return output;
}

private void initVertices(int programReference) {
    final float[] verticesData = new float[] {
        -1,  1,
        -1, -1,
         1,  1,
         1, -1,
    };
    final int[] buffers = new int[1];
    GLES20.glGenBuffers(1, buffers, 0);
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[0]);
    GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, verticesData.length * 4, convertToBuffer(verticesData), GLES20.GL_STREAM_DRAW);
    enableVertexAttribute(programReference, "aPosition", 2, 0, 0);
}
The only thing left is to put everything together in the Renderer interface functions (which will be called automatically by the owning GLSurfaceView object):
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {}
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
GLES20.glViewport(0, 0, width, height);
final int program = loadProgram();
GLES20.glUseProgram(program);
initVertices(program);
}
@Override
public void onDrawFrame(GL10 gl) {
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
}
At this point this class should compile and be able to draw a white rectangle inside of the given view.
The next step is to draw an actual image on the surface we prepared. In order to do that the vertex shader should take texture coordinates in addition to vertex coordinates:
attribute vec2 aPosition;
attribute vec2 aTextureCoord;
varying vec2 vTextureCoord;
void main() {
gl_Position = vec4(aPosition.x, aPosition.y, 0.0, 1.0);
vTextureCoord = aTextureCoord;
}
In turn, the fragment shader now takes the interpolated texture color and applies it to the output value:
precision mediump float;
uniform sampler2D uSampler;
varying vec2 vTextureCoord;
void main() {
gl_FragColor = texture2D(uSampler, vTextureCoord);
}
Texture coordinates vary from 0.0 to 1.0 for x and y, with the origin (0.0, 0.0) at the bottom-left corner. Change your initVertices method to look as follows:
private void initVertices(int programReference) {
    final float[] verticesData = new float[] {
        //NDCS coords  //Texture coords
        -1,  1,        0, 1,
        -1, -1,        0, 0,
         1,  1,        1, 1,
         1, -1,        1, 0
    };
    final int[] buffers = new int[1];
    GLES20.glGenBuffers(1, buffers, 0);
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[0]);
    GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, verticesData.length * 4, convertToBuffer(verticesData), GLES20.GL_STREAM_DRAW);
    final int stride = 4 * 4; // 4 floats per vertex, 4 bytes per float
    enableVertexAttribute(programReference, "aPosition", 2, stride, 0);
    enableVertexAttribute(programReference, "aTextureCoord", 2, stride, 2 * 4);
}
The next method, attachTexture, passes the source image to the fragment shader's texture sampler uSampler:
private void attachTexture(int programReference) {
final int[] textures = new int[1];
GLES20.glGenTextures(1, textures, 0);
final int textureId = textures[0];
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
GLES20.glPixelStorei(GLES20.GL_UNPACK_ALIGNMENT, 1);
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_REPEAT);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_REPEAT);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, mBitmap, 0);
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
final int samplerLocation = GLES20.glGetUniformLocation(programReference, "uSampler");
GLES20.glUniform1i(samplerLocation, 0);
}
This method has to be called from the onSurfaceChanged method:
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
GLES20.glViewport(0, 0, width, height);
final int program = loadProgram();
GLES20.glUseProgram(program);
initVertices(program);
attachTexture(program);
}
Now we are all set to apply the color filter. The color filter is a 4x5 matrix, but OpenGL ES 2 only has matrices up to 4x4, so we have to define a new structure that can hold our color filter in the form of a 4x4 matrix plus a 4-element vector:
precision mediump float;
struct ColorFilter {
mat4 factor;
vec4 shift;
};
uniform sampler2D uSampler;
uniform ColorFilter uColorFilter;
varying vec2 vTextureCoord;
void main() {
vec4 originalColor = texture2D(uSampler, vTextureCoord);
vec4 filteredColor = (originalColor * uColorFilter.factor) + uColorFilter.shift;
gl_FragColor = originalColor - filteredColor;
}
This fragment shader implements the formula from above; negative channel values are clamped to 0 automatically when the GPU writes to the fixed-point color buffer. The attachColorFilter method will help us pass the filter matrix to the shader:
private void attachColorFilter(int program) {
final float[] colorFilterFactor = new float[4 * 4];
final float[] colorFilterShift = new float[4];
for (int i = 0; i < mColorFilter.length; i++) {
final float value = mColorFilter[i];
final int calculateIndex = i + 1;
if (calculateIndex % 5 == 0) {
colorFilterShift[calculateIndex / 5 - 1] = value / 255;
} else {
colorFilterFactor[i - calculateIndex / 5] = value;
}
}
final int colorFactorLocation = GLES20.glGetUniformLocation(program, "uColorFilter.factor");
GLES20.glUniformMatrix4fv(colorFactorLocation, 1, false, colorFilterFactor, 0);
final int colorShiftLocation = GLES20.glGetUniformLocation(program, "uColorFilter.shift");
GLES20.glUniform4fv(colorShiftLocation, 1, colorFilterShift, 0);
}
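As a sanity check of the repacking (my own reading of the loop above, not something from the original gist): for the matrix from the question, every fifth element is 0, so the shift vector stays all-zero and the factor matrix simply holds the 4x4 left part, row by row:
// factor (row-major, as packed by the loop above):
// 0.393f, 0.7689999f, 0.18899999f, 0f,
// 0.349f, 0.6859999f, 0.16799999f, 0f,
// 0.272f, 0.5339999f, 0.13099999f, 0f,
// 0f,     0f,         0f,          1f
// shift: { 0f, 0f, 0f, 0f } (each fifth element divided by 255)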
One subtlety worth noting: glUniformMatrix4fv is called with transpose set to false, so the row-major rows of colorFilterFactor are loaded as GLSL matrix columns; that is exactly why the shader multiplies originalColor * uColorFilter.factor (vector on the left) rather than the other way around. You also need to call this method in the onSurfaceChanged method:
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
GLES20.glViewport(0, 0, width, height);
final int program = loadProgram();
GLES20.glUseProgram(program);
initVertices(program);
attachTexture(program);
attachColorFilter(program);
}
Our OpenGL context has an alpha channel buffer enabled (it was configured via hostView.setEGLConfigChooser(8, 8, 8, 8, 0, 0)); otherwise we would always get some background behind the output image (which would not be correct, considering that PNG images tend to have transparent or semi-transparent pixels). However, this breaks the blending of alpha channels between the background surface and the texture. It is not a big deal to implement it ourselves, though:
precision mediump float;
struct ColorFilter {
mat4 factor;
vec4 shift;
};
uniform sampler2D uSampler;
uniform ColorFilter uColorFilter;
varying vec2 vTextureCoord;
void main() {
vec4 originalColor = texture2D(uSampler, vTextureCoord);
originalColor.rgb *= originalColor.a;
vec4 filteredColor = (originalColor * uColorFilter.factor) + uColorFilter.shift;
filteredColor.rgb *= filteredColor.a;
gl_FragColor = vec4(originalColor.rgb - filteredColor.rgb, originalColor.a);
}
I also recommend setting the blend function to the following, so our output isn't affected by whatever is currently in the color buffer and the behavior is closer to Android's ImageView. However, we didn't set a clear color, and it doesn't seem to change anything:
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ZERO);
}
The work is pretty much done at this point; the implementation only needs to return the result via the callback. First, let's grab the bitmap from the GLSurfaceView; there is one brilliant solution that I borrowed from another Stack Overflow answer:
private Bitmap retrieveBitmapFromGl(int width, int height) {
    // 4 bytes per RGBA pixel; PrimitiveSizes.FLOAT happens to equal 4 as well
    final ByteBuffer pixelBuffer = ByteBuffer.allocateDirect(width * height * PrimitiveSizes.FLOAT);
    pixelBuffer.order(ByteOrder.LITTLE_ENDIAN);
    GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuffer);
    final Bitmap image = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    image.copyPixelsFromBuffer(pixelBuffer);
    return image;
}
Now just take the bitmap, check for errors and return the result:
private GLException getGlError() {
int errorValue = GLES20.glGetError();
switch (errorValue) {
case GLES20.GL_NO_ERROR:
return null;
default:
return new GLException(errorValue);
}
}
private void postResult() {
if (mFinished) {
return;
}
final GLSurfaceView hostView = mHostViewReference.get();
if (hostView == null) {
return;
}
GLException glError = getGlError();
if (glError != null) {
hostView.post(() -> {
mCallback.onFailure(glError);
removeHostView(hostView);
});
} else {
final Bitmap result = retrieveBitmapFromGl(mBitmap.getWidth(), mBitmap.getHeight());
hostView.post(() -> {
mCallback.onSuccess(result);
removeHostView(hostView);
});
}
mFinished = true;
}
private void removeHostView(@NonNull GLSurfaceView hostView) {
if (hostView.getParent() == null) {
return;
}
final WindowManager windowManager = (WindowManager) hostView.getContext().getSystemService(Context.WINDOW_SERVICE);
Objects.requireNonNull(windowManager).removeView(hostView);
}
This method needs to be called from the onDrawFrame method:
@Override
public void onDrawFrame(GL10 gl) {
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
postResult();
}
Let's play around with the utility we just made. The all-zero filter yields an all-zero subtrahend (filteredColor comes out as a zero vector), so it should not affect the original image at all:
BlendingFilterUtil.subtractMatrixColorFilter(bitmap, new float[]{
0, 0, 0, 0, 0,
0, 0, 0, 0, 0,
0, 0, 0, 0, 0,
0, 0, 0, 0, 0
}, activity, callback);
The original image is on the left and the blended image is on the right. They are the same, as expected. You can also remove specific channels completely with this approach. E.g., here is how the red and green channels can be removed (each identity row reproduces its channel exactly, so the subtraction cancels it):
BlendingFilterUtil.subtractMatrixColorFilter(bitmap, new float[]{
1, 0, 0, 0, 0,
0, 1, 0, 0, 0,
0, 0, 0, 0, 0,
0, 0, 0, 1, 0
}, activity, callback);
Finally, here is the result for the filter given in the question:
BlendingFilterUtil.subtractMatrixColorFilter(bitmap, new float[]{
0.393f, 0.7689999f, 0.18899999f, 0, 0,
0.349f, 0.6859999f, 0.16799999f, 0, 0,
0.272f, 0.5339999f, 0.13099999f, 0, 0,
0, 0, 0, 1, 0
}, activity, callback);
If you struggle at any step, don't hesitate to refer to the gist with the complete code of the utility.
Upvotes: 20
Reputation: 383
I'm not an expert in computer graphics, but I'm assuming you want to iterate through every pixel of the image you want to blend, center your colorMatrix on each pixel, calculate the average using the surrounding pixels your matrix comes into contact with, and then apply this average to your pixel. Obviously you will somehow need to handle the edge pixels.
Example: Suppose you have a 5x4 image with pixel values like so
1 2 3 4 5
1 1000 1000 1000 1000 1000
2 1000 1000 1000 1000 1000
3 1000 1000 1000 1000 1000
4 1000 1000 1000 1000 1000
(1) Taking the pixel at position (3,3) and applying your transformation matrix - i.e. multiplying image pixel (i,j) by matrix position (i,j) - we get
1 2 3 4 5
1 393 769 189 0 0
2 349 686 168 0 0
3 272 534 131 0 0
4 0 0 0 1000 0
(2) Now taking the average of this transformation - i.e. adding all the numbers and dividing by 20 - we get 224.55, or approximately 225. So our newly transformed image will look like
1 2 3 4 5
1 1000 1000 1000 1000 1000
2 1000 1000 1000 1000 1000
3 1000 1000 225 1000 1000
4 1000 1000 1000 1000 1000
To get the full subtract blend, do this for every pixel.
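For what it's worth, here is a hedged sketch of the multiply-and-average operation described above (all names hypothetical; as the edit below notes, this is really a convolution-style blur rather than Subtract blending):
static int convolveAveraged(int[][] pixels, float[][] kernel, int row, int col) {
    // Center the kernel on (row, col), multiply element-wise and average;
    // edge handling is deliberately omitted here
    final int kRows = kernel.length;
    final int kCols = kernel[0].length;
    float sum = 0;
    for (int i = 0; i < kRows; i++) {
        for (int j = 0; j < kCols; j++) {
            sum += pixels[row - kRows / 2 + i][col - kCols / 2 + j] * kernel[i][j];
        }
    }
    return Math.round(sum / (kRows * kCols));
}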
EDIT: actually I think the above might be a Gaussian blur.
Upvotes: 0