Reputation: 177
So I'm having trouble using Microsoft's Emotion API for Android. I have no issues running the Face API; I'm able to get the face rectangles, but I can't get the Emotion API working. I am taking images using the built-in Android camera itself. Here is the code I am using:
private void detectAndFrame(final Bitmap imageBitmap)
{
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    imageBitmap.compress(Bitmap.CompressFormat.PNG, 100, outputStream);
    ByteArrayInputStream inputStream =
            new ByteArrayInputStream(outputStream.toByteArray());

    AsyncTask<InputStream, String, List<RecognizeResult>> detectTask =
            new AsyncTask<InputStream, String, List<RecognizeResult>>() {
                @Override
                protected List<RecognizeResult> doInBackground(InputStream... params) {
                    try {
                        Log.e("i", "Detecting...");
                        faces = faceServiceClient.detect(
                                params[0],
                                true,   // returnFaceId
                                false,  // returnFaceLandmarks
                                null    // returnFaceAttributes: a string like "age, gender"
                        );
                        if (faces == null) {
                            Log.e("i", "Detection Finished. Nothing detected");
                            return null;
                        }
                        Log.e("i", String.format(
                                "Detection Finished. %d face(s) detected", faces.length));

                        ImageView imageView = (ImageView) findViewById(R.id.imageView);
                        InputStream stream = params[0];
                        com.microsoft.projectoxford.emotion.contract.FaceRectangle[] rects =
                                new com.microsoft.projectoxford.emotion.contract.FaceRectangle[faces.length];
                        for (int i = 0; i < faces.length; i++) {
                            com.microsoft.projectoxford.face.contract.FaceRectangle rect =
                                    faces[i].faceRectangle;
                            rects[i] = new com.microsoft.projectoxford.emotion.contract.FaceRectangle(
                                    rect.left, rect.top, rect.width, rect.height);
                        }

                        List<RecognizeResult> result;
                        result = client.recognizeImage(stream, rects);
                        return result;
                    } catch (Exception e) {
                        Log.e("e", e.getMessage());
                        Log.e("e", "Detection failed");
                        return null;
                    }
                }

                @Override
                protected void onPreExecute() {
                    // TODO: show progress dialog
                }

                @Override
                protected void onProgressUpdate(String... progress) {
                    // TODO: update progress
                }

                @Override
                protected void onPostExecute(List<RecognizeResult> result) {
                    ImageView imageView = (ImageView) findViewById(R.id.imageView);
                    imageView.setImageBitmap(drawFaceRectanglesOnBitmap(imageBitmap, faces));
                    MediaStore.Images.Media.insertImage(
                            getContentResolver(), imageBitmap, "AnImage", "Another image");
                    if (result == null) return;
                    for (RecognizeResult res : result) {
                        Scores scores = res.scores;
                        Log.e("Anger: ", ((Double) scores.anger).toString());
                        Log.e("Neutral: ", ((Double) scores.neutral).toString());
                        Log.e("Happy: ", ((Double) scores.happiness).toString());
                    }
                }
            };

    detectTask.execute(inputStream);
}
I keep getting the error Post Request 400, which indicates some sort of issue with the JSON or the face rectangles, but I'm not sure where to start debugging this.
Upvotes: 0
Views: 371
Reputation: 2973
You're using the stream twice, so the second time around you're already at the end of the stream. You can either reset the stream, or simply call the Emotion API without rectangles (i.e., skip the call to the Face API entirely); the Emotion API will determine the face rectangles for you.
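For illustration, here is a minimal sketch of both options, reusing the faceServiceClient and client fields from your code. The helper names detectThenRecognize and recognizeOnly are hypothetical, and the single-argument recognizeImage(InputStream) overload is assumed to exist in the Emotion SDK:

// Option 1: rewind the stream between the two calls. ByteArrayInputStream
// supports mark/reset with the mark initially at position 0, so reset()
// rewinds it to the beginning after the Face API has consumed it.
private List<RecognizeResult> detectThenRecognize(ByteArrayInputStream stream) throws Exception {
    Face[] faces = faceServiceClient.detect(stream, true, false, null);
    com.microsoft.projectoxford.emotion.contract.FaceRectangle[] rects =
            new com.microsoft.projectoxford.emotion.contract.FaceRectangle[faces.length];
    for (int i = 0; i < faces.length; i++) {
        com.microsoft.projectoxford.face.contract.FaceRectangle r = faces[i].faceRectangle;
        rects[i] = new com.microsoft.projectoxford.emotion.contract.FaceRectangle(
                r.left, r.top, r.width, r.height);
    }
    stream.reset(); // detect() read to the end; rewind before the second call
    return client.recognizeImage(stream, rects);
}

// Option 2: skip the Face API and let the Emotion API locate the faces itself.
private List<RecognizeResult> recognizeOnly(InputStream stream) throws Exception {
    return client.recognizeImage(stream);
}

Option 2 is simpler and saves a network round trip. Option 1 only works here because the stream is a ByteArrayInputStream; a stream that does not support mark/reset would throw an IOException on reset().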
Upvotes: 1