Trigger notification after Computer Vision OCR extraction is complete - azure

I am exploring Microsoft Computer Vision's Read API (asyncBatchAnalyze) for extracting text from images. I found some sample code on the Microsoft site to extract text from images asynchronously. It works in the following way:
1) Submit the image to the asyncBatchAnalyze API.
2) This API accepts the request and returns a URI.
3) We need to poll this URI to get the extracted data.
Is there any way to trigger a notification (like publishing a message to AWS SQS or a similar service) when asyncBatchAnalyze is done with the image analysis?
import java.io.File;
import java.net.URI;

import org.apache.http.Header;
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.client.utils.URIBuilder;
import org.apache.http.entity.FileEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.http.util.EntityUtils;
import org.json.JSONObject;

public class MicrosoftOCRAsyncReadText {

    private static final String SUBSCRIPTION_KEY = "key";
    private static final String ENDPOINT = "https://computervision.cognitiveservices.azure.com";
    private static final String URI_BASE = ENDPOINT + "/vision/v2.1/read/core/asyncBatchAnalyze";

    public static void main(String[] args) {
        CloseableHttpClient httpTextClient = HttpClientBuilder.create().build();
        CloseableHttpClient httpResultClient = HttpClientBuilder.create().build();
        try {
            URIBuilder builder = new URIBuilder(URI_BASE);
            URI uri = builder.build();

            HttpPost request = new HttpPost(uri);
            request.setHeader("Content-Type", "application/octet-stream");
            request.setHeader("Ocp-Apim-Subscription-Key", SUBSCRIPTION_KEY);

            String image = "/Users/xxxxx/Documents/img1.jpg";
            File file = new File(image);
            FileEntity reqEntity = new FileEntity(file);
            request.setEntity(reqEntity);

            HttpResponse response = httpTextClient.execute(request);
            if (response.getStatusLine().getStatusCode() != 202) {
                HttpEntity entity = response.getEntity();
                String jsonString = EntityUtils.toString(entity);
                JSONObject json = new JSONObject(jsonString);
                System.out.println("Error:\n");
                System.out.println(json.toString(2));
                return;
            }

            String operationLocation = null;
            Header[] responseHeaders = response.getAllHeaders();
            for (Header header : responseHeaders) {
                if (header.getName().equals("Operation-Location")) {
                    operationLocation = header.getValue();
                    break;
                }
            }

            if (operationLocation == null) {
                System.out.println("\nError retrieving Operation-Location.\nExiting.");
                System.exit(1);
            }

            /* Wait for asyncBatchAnalyze to complete. In place of this wait, can we trigger any
               notification from Computer Vision when the extract text operation is complete? */
            Thread.sleep(5000);

            // Call the second REST API method and get the response.
            HttpGet resultRequest = new HttpGet(operationLocation);
            resultRequest.setHeader("Ocp-Apim-Subscription-Key", SUBSCRIPTION_KEY);
            HttpResponse resultResponse = httpResultClient.execute(resultRequest);
            HttpEntity responseEntity = resultResponse.getEntity();
            if (responseEntity != null) {
                String jsonString = EntityUtils.toString(responseEntity);
                JSONObject json = new JSONObject(jsonString);
                System.out.println(json.toString(2));
            }
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }
}

There is no notification / webhook mechanism for those asynchronous operations.
The only thing I can see right now is to change the implementation you mentioned to use a while loop that regularly checks whether the result is available, together with a mechanism to stop waiting (based on a maximum waiting time or a number of retries).
See the sample in the Microsoft docs, especially this part:
// If the first REST API method completes successfully, the second
// REST API method retrieves the text written in the image.
//
// Note: The response may not be immediately available. Text
// recognition is an asynchronous operation that can take a variable
// amount of time depending on the length of the text.
// You may need to wait or retry this operation.
//
// This example checks once per second for ten seconds.
string contentString;
int i = 0;
do
{
    System.Threading.Thread.Sleep(1000);
    response = await client.GetAsync(operationLocation);
    contentString = await response.Content.ReadAsStringAsync();
    ++i;
}
while (i < 10 && contentString.IndexOf("\"status\":\"Succeeded\"") == -1);

if (i == 10 && contentString.IndexOf("\"status\":\"Succeeded\"") == -1)
{
    Console.WriteLine("\nTimeout error.\n");
    return;
}

// Display the JSON response.
Console.WriteLine("\nResponse:\n\n{0}\n",
    JToken.Parse(contentString).ToString());
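Computer Vision itself won't push anything, but once your polling loop sees "Succeeded" you can publish a notification yourself to whatever queue your consumers listen on. A minimal sketch, assuming an Azure Service Bus queue and the older WindowsAzure.ServiceBus client (the connection string and the "ocr-done" queue name are placeholders, not anything the service provides):
using Microsoft.ServiceBus.Messaging;

// ...directly after the do/while loop above has seen "status":"Succeeded"...
string serviceBusConnectionString = "<your Service Bus connection string>"; // placeholder
var notifyClient = QueueClient.CreateFromConnectionString(serviceBusConnectionString, "ocr-done"); // assumed queue name

// Reuse the Read API result JSON as the notification payload.
var notification = new BrokeredMessage(contentString)
{
    Label = "asyncBatchAnalyze-completed"
};
notifyClient.Send(notification);
Downstream consumers can then react to that message instead of polling the operation URI themselves.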

Related

Azure Cognitive Services - Speech sample code failed with authentication error

The program returns: CANCELED: Reason=Error ErrorDetails=WebSocket Upgrade failed with an authentication error (401). Please check for correct subscription key (or authorization token) and region name. SessionId: cbfcdf7f26304343a08de6c398652053
I'm using my free trial subscription key and westus region. This is the sample code found here: https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/quickstarts/speech-to-text-from-microphone?tabs=unity%2Cx-android%2Clinux%2Cjava-runtime&pivots=programming-language-csharp
using UnityEngine;
using UnityEngine.UI;
using Microsoft.CognitiveServices.Speech;
#if PLATFORM_ANDROID
using UnityEngine.Android;
#endif
#if PLATFORM_IOS
using UnityEngine.iOS;
using System.Collections;
#endif

public class Helloworld : MonoBehaviour
{
    // Hook up the two properties below with a Text and Button object in your UI.
    public Text outputText;
    public Button startRecoButton;

    private object threadLocker = new object();
    private bool waitingForReco;
    private string message;
    private bool micPermissionGranted = false;

#if PLATFORM_ANDROID || PLATFORM_IOS
    // Required to manifest microphone permission, cf.
    // https://docs.unity3d.com/Manual/android-manifest.html
    private Microphone mic;
#endif

    public async void ButtonClick()
    {
        // Creates an instance of a speech config with specified subscription key and service region.
        // Replace with your own subscription key and service region (e.g., "westus").
        var config = SpeechConfig.FromSubscription("yourSubscriptionKey", "yourRegion");

        // Make sure to dispose the recognizer after use!
        using (var recognizer = new SpeechRecognizer(config))
        {
            lock (threadLocker)
            {
                waitingForReco = true;
            }

            // Starts speech recognition, and returns after a single utterance is recognized. The end of a
            // single utterance is determined by listening for silence at the end or until a maximum of 15
            // seconds of audio is processed. The task returns the recognition text as result.
            // Note: Since RecognizeOnceAsync() returns only a single utterance, it is suitable only for single
            // shot recognition like command or query.
            // For long-running multi-utterance recognition, use StartContinuousRecognitionAsync() instead.
            var result = await recognizer.RecognizeOnceAsync().ConfigureAwait(false);

            // Checks result.
            string newMessage = string.Empty;
            if (result.Reason == ResultReason.RecognizedSpeech)
            {
                newMessage = result.Text;
            }
            else if (result.Reason == ResultReason.NoMatch)
            {
                newMessage = "NOMATCH: Speech could not be recognized.";
            }
            else if (result.Reason == ResultReason.Canceled)
            {
                var cancellation = CancellationDetails.FromResult(result);
                newMessage = $"CANCELED: Reason={cancellation.Reason} ErrorDetails={cancellation.ErrorDetails}";
            }

            lock (threadLocker)
            {
                message = newMessage;
                waitingForReco = false;
            }
        }
    }

    void Start()
    {
        if (outputText == null)
        {
            UnityEngine.Debug.LogError("outputText property is null! Assign a UI Text element to it.");
        }
        else if (startRecoButton == null)
        {
            message = "startRecoButton property is null! Assign a UI Button to it.";
            UnityEngine.Debug.LogError(message);
        }
        else
        {
            // Continue with normal initialization, Text and Button objects are present.
#if PLATFORM_ANDROID
            // Request to use the microphone, cf.
            // https://docs.unity3d.com/Manual/android-RequestingPermissions.html
            message = "Waiting for mic permission";
            if (!Permission.HasUserAuthorizedPermission(Permission.Microphone))
            {
                Permission.RequestUserPermission(Permission.Microphone);
            }
#elif PLATFORM_IOS
            if (!Application.HasUserAuthorization(UserAuthorization.Microphone))
            {
                Application.RequestUserAuthorization(UserAuthorization.Microphone);
            }
#else
            micPermissionGranted = true;
            message = "Click button to recognize speech";
#endif
            startRecoButton.onClick.AddListener(ButtonClick);
        }
    }

    void Update()
    {
#if PLATFORM_ANDROID
        if (!micPermissionGranted && Permission.HasUserAuthorizedPermission(Permission.Microphone))
        {
            micPermissionGranted = true;
            message = "Click button to recognize speech";
        }
#elif PLATFORM_IOS
        if (!micPermissionGranted && Application.HasUserAuthorization(UserAuthorization.Microphone))
        {
            micPermissionGranted = true;
            message = "Click button to recognize speech";
        }
#endif
        lock (threadLocker)
        {
            if (startRecoButton != null)
            {
                startRecoButton.interactable = !waitingForReco && micPermissionGranted;
            }

            if (outputText != null)
            {
                outputText.text = message;
            }
        }
    }
}
The sample code you pasted above still has the placeholder values for region and subscription key. Just double checking that you did in fact replace those strings with your own subscription key and region? If that's true, can you please turn on logging, run the code again, and then provide the log? We can help diagnose from there...
To turn on logging, see https://aka.ms/speech/logging.
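For reference, SDK file logging can also be switched on directly from the code. A minimal sketch, assuming the same SpeechConfig creation as in the sample above (the log file path is a placeholder; pick any location the app can write to):
using Microsoft.CognitiveServices.Speech;

// Enable Speech SDK file logging before the recognizer is created.
var config = SpeechConfig.FromSubscription("yourSubscriptionKey", "yourRegion");
config.SetProperty(PropertyId.Speech_LogFilename, "/path/to/speech-sdk.log"); // placeholder path

using (var recognizer = new SpeechRecognizer(config))
{
    // ...same recognition flow as in the sample above...
}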

JHipster: Long running REST service

I have used JHipster to create a microservices application. One of my microservices has a REST API that takes a long time to return the resource to the Angular client. Because of this, the Angular client doesn't wait for the response and simply does nothing.
Is there a property to set either on the Angular side or on the server side to increase the timeout so that the front-end "entity" $resource call waits till the backend REST service returns data?
Update to provide additional information:
In my REST method I commented out the actual DB call that fetches data from the database. I added a sleep of 5 seconds and then send a dummy object back to the Angular front end. If the sleep is <= 4 seconds, I receive the dummy data at the front end. If it's more than 4 seconds, the front end just doesn't receive anything (or doesn't wait for the REST response)...
I'm not sure if there's any setting to tell Angular to wait until the REST service responds, no matter what.
Here's my REST method ...
@GetMapping("/history-report-mms")
@Timed
public List<HashMap<String, Object>> getAllHistoryReportMMS() {
    log.debug("REST request to get all HistoryReportMMS");
    try {
        Thread.sleep(4000);
    } catch (InterruptedException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    List<HashMap<String, Object>> list = new ArrayList<HashMap<String, Object>>();
    HashMap<String, Object> map = new HashMap<String, Object>();
    map.put("foo", "bar");
    list.add(map);
    return list;
    //return historyReportMMSService.findAll();
}
Here's my front end code ...
(function() {
    'use strict';

    angular
        .module('mmsFrontEndGatewayApp')
        .controller('HistoryReportMMSController', HistoryReportMMSController);

    HistoryReportMMSController.$inject = ['HistoryReportMMS', 'HistoryReportMMSSearch'];

    function HistoryReportMMSController(HistoryReportMMS, HistoryReportMMSSearch) {
        var vm = this;

        vm.historyReportMMS = [];
        vm.clear = clear;
        vm.search = search;
        vm.loadAll = loadAll;

        loadAll();

        function loadAll() {
            HistoryReportMMS.query(function(result) {
                vm.historyReportMMS = result;
                vm.searchQuery = null;
            });
        }

        function search() {
            if (!vm.searchQuery) {
                return vm.loadAll();
            }
            HistoryReportMMSSearch.query({query: vm.searchQuery}, function(result) {
                vm.historyReportMMS = result;
                vm.currentSearch = vm.searchQuery;
            });
        }

        function clear() {
            vm.searchQuery = null;
            loadAll();
        }
    }
})();

Azure WebJob Scale Out only 2 Jobs working

I have a small WebJob running on 3 instances. The WebJob is triggered by ServiceBusTrigger, and each job takes about 20 seconds (I added a sleep for testing).
Now I add 3 items to the Service Bus queue, but only 2 WebJob instances are working.
What is the third instance doing, and how can I get that instance to also work on the queue?
My code is very basic:
public class Functions
{
    // This function will get triggered/executed when a new message is written
    // to the Service Bus queue called "jobs2".
    public static void ProcessQueueMessage([ServiceBusTrigger("jobs2")] string message, TextWriter log)
    {
        string url = "https://requestb.in/xxxxx";
        log.WriteLine(message);
        log.WriteLine("gotmsg");

        Thread.Sleep(20000);

        log.WriteLine("sending");
        string postData = "test=" + message;
        Console.WriteLine(postData);

        System.Net.WebRequest req = System.Net.WebRequest.Create(url);
        // Add these, as we're doing a POST
        req.ContentType = "application/x-www-form-urlencoded";
        req.Method = "POST";
        // We need to count how many bytes we're sending. Post'ed Faked Forms should be name=value&
        byte[] bytes = System.Text.Encoding.ASCII.GetBytes(postData);
        req.ContentLength = bytes.Length;

        System.IO.Stream os = req.GetRequestStream();
        os.Write(bytes, 0, bytes.Length); // Push it out there
        os.Close();

        System.Net.WebResponse resp = req.GetResponse();
        if (resp == null) return;

        System.IO.StreamReader sr = new System.IO.StreamReader(resp.GetResponseStream());
        log.WriteLine(sr.ReadToEnd().Trim());
    }
}
What is the third instance doing and how can i get the instance to also work on the queue?
The third instance should work by default. I assume Azure WebApp uses its load-balancing strategy for multiple instances, and it seems there is no way to configure that strategy. In your case, 3 messages are probably not enough to test this. Please try it with more messages. I tested it on my side and it works correctly. The following is the test code I used for sending messages.
var client = QueueClient.CreateFromConnectionString("connection string", QueueName);
for (int i = 0; i < 20; i++)
{
    var sendMessage = new BrokeredMessage("test message" + i);
    client.Send(sendMessage);
}
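If you want the distribution across instances to be easier to observe with only a handful of messages, you can also cap how many messages each instance processes concurrently, so a single instance can't grab several of them at once. A sketch, assuming the WebJobs SDK 2.x with the Service Bus extension, configured in the WebJob's Program.cs (not something your current code needs, just an option to consider):
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class Program
{
    static void Main()
    {
        var config = new JobHostConfiguration();

        // Process at most one Service Bus message at a time per instance,
        // so remaining messages are left for the other instances to pick up.
        var serviceBusConfig = new ServiceBusConfiguration
        {
            MessageOptions = new OnMessageOptions { MaxConcurrentCalls = 1 }
        };
        config.UseServiceBus(serviceBusConfig);

        new JobHost(config).RunAndBlock();
    }
}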

Servicestack RabbitMQ: Infinite loop fills up dead-letter-queue when RabbitMqProducer cannot redeclare temporary queue in RPC-pattern

When I declare a temporary reply queue to be exclusive (e.g. an anonymous queue (exclusive=true, autodelete=true) in the RPC pattern), the response message cannot be posted to the specified reply queue (e.g. message.replyTo="amq.gen-Jg_tv8QYxtEQhq0tF30vAA") because RabbitMqProducer.PublishMessage() tries to redeclare the queue with different parameters (exclusive=false), which understandably results in an error.
Unfortunately, the erroneous call to channel.RegisterQueue(queueName) in RabbitMqProducer.PublishMessage() seems to nack the request message in the incoming queue, so that when ServiceStack.Messaging.MessageHandler.DefaultInExceptionHandler tries to acknowledge the request message (to remove it from the incoming queue), the message just stays on top of the incoming queue and gets processed all over again. This procedure repeats indefinitely and results in one DLQ message per iteration, which slowly fills up the DLQ.
I am wondering,
if ServiceStack handles the case where ServiceStack.RabbitMq.RabbitMqProducer cannot declare the response queue correctly
if ServiceStack.RabbitMq.RabbitMqProducer must always declare the response queue before publishing the response
if it wouldn't be better to have some configuration flag to omit all exchange and queue declaration calls (outside of the first initialization). The RabbitMqProducer would just assume every queue/exchange is properly set up and simply publish the message.
(At the moment our client just declares its response queue to be exclusive=false and everything works fine. But I'd really like to use rabbitmq's built-in temporary queues.)
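For reference, the "SayHello" service that the client code below requires can be as minimal as the following sketch; the DTO and property names are only inferred from the request JSON ({"ToName": "Chris"}) and the expected "Hello, Chris!" reply, so treat them as assumptions rather than the actual service code:
using ServiceStack;

public class SayHello : IReturn<SayHelloResponse>
{
    public string ToName { get; set; }
}

public class SayHelloResponse
{
    public string Result { get; set; }
}

public class SayHelloService : Service
{
    public object Any(SayHello request)
    {
        return new SayHelloResponse { Result = "Hello, " + request.ToName + "!" };
    }
}

// In the AppHost's Configure(), the service is exposed over RabbitMQ roughly like this:
//   var mqServer = new RabbitMqServer("192.168.179.110");
//   mqServer.RegisterHandler<SayHello>(ExecuteMessage);
//   mqServer.Start();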
MQ-Client Code, requires simple "SayHello" service:
const string INQ_QUEUE_NAME = "mq:SayHello.inq";
const string EXCHANGE_NAME = "mx.servicestack";

var factory = new ConnectionFactory() { HostName = "192.168.179.110" };
using (var connection = factory.CreateConnection())
{
    using (var channel = connection.CreateModel())
    {
        // Create temporary queue and setup bindings
        // this works (because "mq:tmp:" stops RabbitMqProducer from redeclaring response queue)
        string responseQueueName = "mq:tmp:SayHello_" + Guid.NewGuid().ToString() + ".inq";
        channel.QueueDeclare(responseQueueName, false, false, true, null);

        // this does NOT work (RabbitMqProducer tries to declare queue again => error):
        //string responseQueueName = Guid.NewGuid().ToString() + ".inq";
        //channel.QueueDeclare(responseQueueName, false, false, true, null);

        // this does NOT work either (RabbitMqProducer tries to declare queue again => error)
        //var responseQueueName = channel.QueueDeclare().QueueName;

        // publish simple SayHello-Request to standard servicestack exchange ("mx.servicestack") with routing key "mq:SayHello.inq":
        var props = channel.CreateBasicProperties();
        props.ReplyTo = responseQueueName;
        channel.BasicPublish(EXCHANGE_NAME, INQ_QUEUE_NAME, props, Encoding.UTF8.GetBytes("{\"ToName\": \"Chris\"}"));

        // consume response from response queue
        var consumer = new QueueingBasicConsumer(channel);
        channel.BasicConsume(responseQueueName, true, consumer);
        var ea = (BasicDeliverEventArgs)consumer.Queue.Dequeue();

        // print result: should be "Hello, Chris!"
        Console.WriteLine(Encoding.UTF8.GetString(ea.Body));
    }
}
Everything seems to work fine when RabbitMqProducer does not try to declare the queues, like that:
public void PublishMessage(string exchange, string routingKey, IBasicProperties basicProperties, byte[] body)
{
    const bool MustDeclareQueue = false; // new config parameter??

    try
    {
        if (MustDeclareQueue && !Queues.Contains(routingKey))
        {
            Channel.RegisterQueueByName(routingKey);
            Queues = new HashSet<string>(Queues) { routingKey };
        }

        Channel.BasicPublish(exchange, routingKey, basicProperties, body);
    }
    catch (OperationInterruptedException ex)
    {
        if (ex.Is404())
        {
            Channel.RegisterExchangeByName(exchange);
            Channel.BasicPublish(exchange, routingKey, basicProperties, body);
        }
        throw;
    }
}
The issue got addressed in ServiceStack version v4.0.32 (fixed in this commit).
The RabbitMqProducer no longer tries to redeclare temporary queues and instead assumes that the reply queue already exists (which solves my problem).
(The underlying cause of the infinite loop (wrong error handling while publishing response message) probably still exists.)
Edit: Example
The following basic MQ client (which does not use the ServiceStack MQ client and instead depends directly on RabbitMQ's .NET library; it uses ServiceStack.Text for serialization though) can perform generic RPCs:
public class MqClient : IDisposable
{
    ConnectionFactory factory = new ConnectionFactory()
    {
        HostName = "192.168.97.201",
        UserName = "guest",
        Password = "guest",
        //VirtualHost = "test",
        Port = AmqpTcpEndpoint.UseDefaultPort,
    };

    private IConnection connection;
    private string exchangeName;

    public MqClient(string defaultExchange)
    {
        this.exchangeName = defaultExchange;
        this.connection = factory.CreateConnection();
    }

    public TResponse RpcCall<TResponse>(IReturn<TResponse> reqDto, string exchange = null)
    {
        using (var channel = connection.CreateModel())
        {
            string inq_queue_name = string.Format("mq:{0}.inq", reqDto.GetType().Name);
            string responseQueueName = channel.QueueDeclare().QueueName;

            var props = channel.CreateBasicProperties();
            props.ReplyTo = responseQueueName;

            var message = ServiceStack.Text.JsonSerializer.SerializeToString(reqDto);
            channel.BasicPublish(exchange ?? this.exchangeName, inq_queue_name, props, UTF8Encoding.UTF8.GetBytes(message));

            var consumer = new QueueingBasicConsumer(channel);
            channel.BasicConsume(responseQueueName, true, consumer);

            var ea = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
            //channel.BasicAck(ea.DeliveryTag, false);

            string response = UTF8Encoding.UTF8.GetString(ea.Body);
            string responseType = ea.BasicProperties.Type;
            Console.WriteLine(" [x] New Message of Type '{1}' Received:{2}{0}", response, responseType, Environment.NewLine);

            return ServiceStack.Text.JsonSerializer.DeserializeFromString<TResponse>(response);
        }
    }

    ~MqClient()
    {
        this.Dispose();
    }

    public void Dispose()
    {
        if (connection != null)
        {
            this.connection.Dispose();
            this.connection = null;
        }
    }
}
Key points:
client declares anonymous queue (=with empty queue name) channel.QueueDeclare()
server generates queue and returns queue name (amq.gen*)
client adds queue name to message properties (props.ReplyTo = responseQueueName;)
ServiceStack automatically sends response to temporary queue
client picks up response and deserializes
It can be used like that:
using (var mqClient = new MqClient("mx.servicestack"))
{
var pingResponse = mqClient.RpcCall<PingResponse>(new Ping { });
}
Important: You've got to use servicestack version 4.0.32+.

How to call PUT method from Web Api using HttpClient?

I want to call the 1st API function from the 2nd API function using HttpClient, but I always get a 404 error.
1st API function (endpoint: http://localhost:xxxxx/api/Test/)
public HttpResponseMessage Put(int id, int accountId, byte[] content)
[...]
2nd API function
public HttpResponseMessage Put(int id, int aid, byte[] filecontent)
{
    WebRequestHandler handler = new WebRequestHandler()
    {
        AllowAutoRedirect = false,
        UseProxy = false
    };

    using (HttpClient client = new HttpClient(handler))
    {
        client.BaseAddress = new Uri("http://localhost:xxxxx/");

        // Add an Accept header for JSON format.
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

        var param = new object[6];
        param[0] = id;
        param[1] = "/";
        param[2] = "?aid=";
        param[3] = aid;
        param[4] = "&content=";
        param[5] = filecontent;

        using (HttpResponseMessage response = client.PutAsJsonAsync("api/Test/", param).Result)
        {
            return response.EnsureSuccessStatusCode();
        }
    }
}
So my question is: can I post method parameters as an object array from HttpClient as I did? I don't want to pass a model as a method parameter.
What is wrong in my code?
I am unable to get any response after changing the code to
return client.PutAsJsonAsync(uri, filecontent)
.ContinueWith<HttpResponseMessage>
(
task => task.Result.EnsureSuccessStatusCode()
);
OR
return client.PutAsJsonAsync(uri, filecontent)
.ContinueWith
(
task => task.Result.EnsureSuccessStatusCode()
);
As you probably found out, no you can't. When you call PutAsJsonAsync, the code will convert the parameter to JSON and send it in the request body. Your parameter is a JSON array which will look something like the array below:
[1,"/","?aid",345,"&content=","aGVsbG8gd29ybGQ="]
Which isn't what the first function is expecting (at least that's what I imagine, since you haven't shown the route info). There are a couple of problems here:
By default, parameters of type byte[] (reference types) are passed in the body of the request, not in the URI (unless you explicitly tag the parameter with the [FromUri] attribute).
The other parameters (again, based on my guess about your route) need to be part of the URI, not the body.
The code would look something like this:
var uri = "api/Test/" + id + "/?aid=" + aid;
using (HttpResponseMessage response = client.PutAsJsonAsync(uri, filecontent).Result)
{
return response.EnsureSuccessStatusCode();
}
Now, there's another potential issue with the code above. It's waiting on the network response (that's what happens when you access the .Result property on the Task<HttpResponseMessage> returned by PutAsJsonAsync). Depending on the environment, the worst that can happen is a deadlock (waiting on a thread on which the network response will arrive). In the best case this thread will be blocked for the duration of the network call, which is also bad. Consider using the asynchronous mode (awaiting the result, returning a Task<T> in your action) instead, like in the example below:
public async Task<HttpResponseMessage> Put(int id, int aid, byte[] filecontent)
{
    // ...
    var uri = "api/Test/" + id + "/?aid=" + aid;
    HttpResponseMessage response = await client.PutAsJsonAsync(uri, filecontent);
    return response.EnsureSuccessStatusCode();
}
Or without the async / await keywords:
public Task<HttpResponseMessage> Put(int id, int aid, byte[] filecontent)
{
    // ...
    var uri = "api/Test/" + id + "/?aid=" + aid;
    return client.PutAsJsonAsync(uri, filecontent).ContinueWith<HttpResponseMessage>(
        task => task.Result.EnsureSuccessStatusCode());
}
