Reputation: 193
I have a problem with Microsoft Azure Cognitive Services.
When I take a photo on my phone using the default camera app, it is saved under DCIM/Camera, but when I take a photo from my application it is saved under Internal storage/Pictures/temp.
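For reference, Plugin.Media also lets you control where its copy of the photo is written via StoreCameraMediaOptions; a minimal sketch (the Directory and Name values are just example placeholders):

private async Task<MediaFile> TakePhotoToKnownFolderAsync()
{
    // Directory is a subfolder under the app's Pictures directory,
    // Name is the file name; both values here are illustrative only.
    return await CrossMedia.Current.TakePhotoAsync(new StoreCameraMediaOptions
    {
        Directory = "OCRScanner",
        Name = "scan.jpg",
        SaveToAlbum = true   // also copy the photo into the public gallery
    });
}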
A brief description of the situation:
1) I take photos using the default camera app; I can open them in my application using Plugin.Media, and Microsoft Cognitive Services (MCS) works perfectly. Code:
private async void btnPick_Clicked(object sender, EventArgs e)
{
    await CrossMedia.Current.Initialize();

    var file = await CrossMedia.Current.PickPhotoAsync(new PickMediaOptions());
    Image = ImageSource.FromStream(() => file.GetStream());

    var result = client.RecognizeTextAsync(file.GetStream()).Result;
    var words = from r in result.Regions
                from l in r.Lines
                from w in l.Words
                select w.Text;
    OutputText = string.Join(" ", words.ToArray());
    await Navigation.PushAsync(new TextFromPhoto(OutputText, Image));
}
2) When I take a photo from my application using this code:
private async void btnTake_Clicked(object sender, EventArgs e)
{
    await CrossMedia.Current.Initialize();

    if (!CrossMedia.Current.IsCameraAvailable || !CrossMedia.Current.IsTakePhotoSupported)
    {
        await DisplayAlert("No Camera", ":( No camera available.", "OK");
        return;
    }

    var file = await CrossMedia.Current.TakePhotoAsync(new StoreCameraMediaOptions
    {
        SaveToAlbum = true
    });
    Image = ImageSource.FromStream(() => file.GetStream());

    var myStream = file.GetStream();
    var result = client.RecognizeTextAsync(myStream).Result;
    var words = from r in result.Regions
                from l in r.Lines
                from w in l.Words
                select w.Text;
    OutputText = string.Join(" ", words.ToArray());
    await Navigation.PushAsync(new TextFromPhoto(OutputText, Image));
}
The application goes into break mode, breaking on this line:
var result = client.RecognizeTextAsync(myStream).Result;
The same line worked in the previous method.
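Note that .Result wraps any failure in an AggregateException, which hides the real error. A minimal sketch to surface the underlying exception, assuming the same client and myStream as above:

try
{
    // Awaiting instead of blocking with .Result lets the actual exception
    // (e.g. an unreadable stream or a rejected request) propagate directly.
    var result = await client.RecognizeTextAsync(myStream);
}
catch (Exception ex)
{
    await DisplayAlert("OCR failed", ex.Message, "OK");
}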
Here is AndroidManifest.xml:
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" android:versionCode="1" android:versionName="1.0" package="com.companyname.OCRScannerForms.Android" android:installLocation="auto">
  <uses-sdk android:minSdkVersion="21" android:targetSdkVersion="27" />
  <uses-permission android:name="android.permission.CAMERA" />
  <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
  <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
  <application android:label="OCRScannerForms.Android">
    <provider android:name="android.support.v4.content.FileProvider" android:authorities="${applicationId}.fileprovider" android:exported="false" android:grantUriPermissions="true">
      <meta-data android:name="android.support.FILE_PROVIDER_PATHS" android:resource="@xml/file_paths" />
    </provider>
  </application>
</manifest>
And here is Resources/xml/file_paths.xml:
<?xml version="1.0" encoding="utf-8"?>
<paths xmlns:android="http://schemas.android.com/apk/res/android">
  <external-files-path name="my_images" path="Pictures" />
  <external-files-path name="my_movies" path="Movies" />
</paths>
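For reference, external-files-path with path="Pictures" resolves to the app-specific external files directory, typically /storage/emulated/0/Android/data/&lt;package&gt;/files/Pictures on-device. A minimal Xamarin.Android sketch to print it (purely illustrative):

// Prints the directory that external-files-path/Pictures points at.
var picturesDir = Android.App.Application.Context
    .GetExternalFilesDir(Android.OS.Environment.DirectoryPictures);
System.Diagnostics.Debug.WriteLine(picturesDir?.AbsolutePath);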
Interestingly, I can't open in my application a photo that was taken earlier by my application.
I suspect that the problem is the photo path, but I can't fix it.
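A minimal way to check that suspicion is to log the two paths Plugin.Media reports for the MediaFile returned above (Path and AlbumPath are the plugin's own properties):

// Where Plugin.Media wrote the app's private copy of the photo...
System.Diagnostics.Debug.WriteLine($"Path:      {file.Path}");
// ...and where SaveToAlbum = true copied it in the public gallery.
System.Diagnostics.Debug.WriteLine($"AlbumPath: {file.AlbumPath}");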
Upvotes: 0
Views: 192
Reputation: 2168
You should use the new Computer Vision API instead of Microsoft.ProjectOxford.Vision. For example:
// Requires: System.Web (HttpUtility), System.Net.Http.Headers (MediaTypeHeaderValue),
// Newtonsoft.Json.Linq (JToken).
private async void btnTake_Clicked(object sender, EventArgs e)
{
    await CrossMedia.Current.Initialize();

    if (!CrossMedia.Current.IsCameraAvailable || !CrossMedia.Current.IsTakePhotoSupported)
    {
        await DisplayAlert("No Camera", ":( No camera available.", "OK");
        return;
    }

    var file = await CrossMedia.Current.TakePhotoAsync(new StoreCameraMediaOptions
    {
        SaveToAlbum = true,
        PhotoSize = PhotoSize.Small
    });
    var Image = ImageSource.FromStream(() => file.GetStream());

    var client = new HttpClient();
    var queryString = HttpUtility.ParseQueryString(string.Empty);

    // Request headers
    client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);

    // Request parameters
    queryString["mode"] = "Printed";
    var uri = "https://eastus.api.cognitive.microsoft.com/vision/v2.0/recognizeText?" + queryString;

    // Send the raw image bytes to the asynchronous Recognize Text operation.
    HttpResponseMessage response;
    var myStream = file.GetStream();
    var binaryReader = new BinaryReader(myStream);
    var byteData = binaryReader.ReadBytes((int)myStream.Length);
    using (var content = new ByteArrayContent(byteData))
    {
        content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
        response = await client.PostAsync(uri, content);
    }

    // The Operation-Location response header is the URL to poll for the result.
    var operationLocation = response.Headers.GetValues("Operation-Location").FirstOrDefault();

    // Poll until the operation succeeds, or give up after ~10 seconds.
    string contentString;
    int i = 0;
    do
    {
        await Task.Delay(1000); // wait between polls without blocking the UI thread
        response = await client.GetAsync(operationLocation);
        contentString = await response.Content.ReadAsStringAsync();
        ++i;
    }
    while (i < 10 && contentString.IndexOf("\"status\":\"Succeeded\"") == -1);

    Label1.Text = JToken.Parse(contentString).ToString();
}
And the result is the recognized text returned as JSON (screenshot omitted).
Please check the following link for more information: https://westus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/587f2c6a154055056008f200
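Since Label1.Text above just shows the raw JSON, here is a hedged sketch of reducing it to plain text, assuming the documented v2.0 response shape { "status": ..., "recognitionResult": { "lines": [ { "text": ... } ] } }:

// Flatten the Recognize Text v2.0 result into a single string.
var json = JToken.Parse(contentString);
var lines = json["recognitionResult"]?["lines"]
    ?.Select(l => (string)l["text"]) ?? Enumerable.Empty<string>();
Label1.Text = string.Join(" ", lines);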
Upvotes: 1