Azure WebJob Scale Out only 2 Jobs working

I have a small WebJob running on 3 instances. The WebJob is triggered by a ServiceBusTrigger, and each job takes about 20 seconds (I added a sleep for testing).
Now I add 3 items to the Service Bus queue, but only 2 WebJob instances are working.
What is the third instance doing, and how can I get that instance to also work on the queue?
My code is very basic:
public class Functions
{
    // This function will get triggered/executed when a new message is written
    // to the Service Bus queue called "jobs2".
    public static void ProcessQueueMessage([ServiceBusTrigger("jobs2")] string message, TextWriter log)
    {
        string url = "https://requestb.in/xxxxx";
        log.WriteLine(message);
        log.WriteLine("gotmsg");
        Thread.Sleep(20000);
        log.WriteLine("sending");

        string postData = "test=" + message;
        Console.WriteLine(postData);

        System.Net.WebRequest req = System.Net.WebRequest.Create(url);
        // Add these, as we're doing a POST
        req.ContentType = "application/x-www-form-urlencoded";
        req.Method = "POST";
        // We need to count how many bytes we're sending. POSTed form data should be name=value&
        byte[] bytes = System.Text.Encoding.ASCII.GetBytes(postData);
        req.ContentLength = bytes.Length;

        System.IO.Stream os = req.GetRequestStream();
        os.Write(bytes, 0, bytes.Length); // Push it out there
        os.Close();

        System.Net.WebResponse resp = req.GetResponse();
        if (resp == null) return;
        System.IO.StreamReader sr = new System.IO.StreamReader(resp.GetResponseStream());
        log.WriteLine(sr.ReadToEnd().Trim());
    }
}

What is the third instance doing, and how can I get that instance to also work on the queue?
The third instance should work by default. I assume the Azure Web App uses its configured load-balancing strategy across multiple instances, and it seems there is no way for us to configure that strategy. In your case, 3 messages may simply not be enough to exercise all instances; please try testing with more messages. I tested this on my side and it works correctly. The following is the test code I used for sending messages.
var client = QueueClient.CreateFromConnectionString("connection string", QueueName);
for (int i = 0; i < 20; i++)
{
    var sendMessage = new BrokeredMessage("test message" + i);
    client.Send(sendMessage);
}
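If you want to see which instance actually picks up each message, a simple diagnostic (just a sketch, assuming a standard App Service environment, where WEBSITE_INSTANCE_ID is the environment variable that identifies the instance a WebJob is running on) is to log the instance ID at the top of ProcessQueueMessage:
// At the top of ProcessQueueMessage: log which App Service instance handled this message
var instanceId = Environment.GetEnvironmentVariable("WEBSITE_INSTANCE_ID");
log.WriteLine("Instance " + instanceId + " got message: " + message);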

Related

Azure Service Bus SendMessageAsync method terminates and crashes whole program

I created a .NET 6 console project and added Azure.Messaging.ServiceBus as a dependency. I am using the code below to send a message to a Service Bus topic.
// See https://aka.ms/new-console-template for more information
using Azure.Messaging.ServiceBus;
using System.Dynamic;
using System.Net;
using System.Text;
using System.Text.Json;

Console.WriteLine("Hello, World!");
Sender t = new Sender();
Sender.Send();

class Sender
{
    public static async Task Send()
    {
        string connectionString = "Endpoint=sb://sb-test-one.servicebus.windows.net/;SharedAccessKeyName=manage;SharedAccessKey=8e+6SWp3skB3AeDlwH6ufGEainEs45353435JzDywz5DU=;";
        string topicName = "topicone";
        string subscriptionName = "subone";

        // The Service Bus client types are safe to cache and use as a singleton for the lifetime of the application
        try
        {
            await using var client = new ServiceBusClient(connectionString, new ServiceBusClientOptions
            {
                TransportType = ServiceBusTransportType.AmqpWebSockets
            });

            // create the sender
            ServiceBusSender sender = client.CreateSender(topicName);

            dynamic data = new ExpandoObject();
            data.name = "Abc";
            data.age = 6;

            // create a message that we can send. UTF-8 encoding is used when providing a string.
            var messageBody = JsonSerializer.Serialize(data);
            ServiceBusMessage message = new ServiceBusMessage(messageBody);

            // send the message
            await sender.SendMessageAsync(message);
            var s = 10;
        }
        catch (Exception e)
        {
            var v = 10;
        }

        //// create a receiver for our subscription that we can use to receive the message
        //ServiceBusReceiver receiver = client.CreateReceiver(topicName, subscriptionName);
        //// the received message is a different type as it contains some service set properties
        //ServiceBusReceivedMessage receivedMessage = await receiver.ReceiveMessageAsync();
        //// get the message body as a string
        //string body = receivedMessage.Body.ToString();
        //Console.WriteLine(body);

        Console.WriteLine("Press any key to end the application");
        Console.ReadKey();
    }
}
Issue: When I call await sender.SendMessageAsync(message);, the program actually terminates after this line executes. It is not awaiting; the whole execution stops at this point.
The system is not throwing any exception, and Service Bus is not receiving any message.
I just noticed that all the other samples I saw had a default shared access policy called RootManageSharedAccessKey available by default in the Azure portal. For me, I had to create this policy; I have given it Manage, Send, and Receive access.
The fix was to change Sender.Send(); to Sender.Send().GetAwaiter().GetResult();
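For reference, a minimal sketch of the corrected entry point (either variant works; since .NET 6 top-level statements also support await directly, awaiting the task is an alternative to blocking on it):
Console.WriteLine("Hello, World!");
// Option 1: block until the async send completes so Main does not exit early
Sender.Send().GetAwaiter().GetResult();
// Option 2: top-level statements allow awaiting directly
// await Sender.Send();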

Trigger notification after Computer Vision OCR extraction is complete

I am exploring Microsoft Computer Vision's Read API (asyncBatchAnalyze) for extracting text from images. I found some sample code on the Microsoft site to extract text from images asynchronously. It works in the following way:
1) Submit image to asyncBatchAnalyze API.
2) This API accepts the request and returns a URI.
3) We need to poll this URI to get the extracted data.
Is there any way in which we can trigger a notification (like publishing a notification to AWS SQS or a similar service) when asyncBatchAnalyze is done with the image analysis?
public class MicrosoftOCRAsyncReadText {

    private static final String SUBSCRIPTION_KEY = "key";
    private static final String ENDPOINT = "https://computervision.cognitiveservices.azure.com";
    private static final String URI_BASE = ENDPOINT + "/vision/v2.1/read/core/asyncBatchAnalyze";

    public static void main(String[] args) {
        CloseableHttpClient httpTextClient = HttpClientBuilder.create().build();
        CloseableHttpClient httpResultClient = HttpClientBuilder.create().build();

        try {
            URIBuilder builder = new URIBuilder(URI_BASE);
            URI uri = builder.build();

            HttpPost request = new HttpPost(uri);
            request.setHeader("Content-Type", "application/octet-stream");
            request.setHeader("Ocp-Apim-Subscription-Key", SUBSCRIPTION_KEY);

            String image = "/Users/xxxxx/Documents/img1.jpg";
            File file = new File(image);
            FileEntity reqEntity = new FileEntity(file);
            request.setEntity(reqEntity);

            HttpResponse response = httpTextClient.execute(request);
            if (response.getStatusLine().getStatusCode() != 202) {
                HttpEntity entity = response.getEntity();
                String jsonString = EntityUtils.toString(entity);
                JSONObject json = new JSONObject(jsonString);
                System.out.println("Error:\n");
                System.out.println(json.toString(2));
                return;
            }

            String operationLocation = null;
            Header[] responseHeaders = response.getAllHeaders();
            for (Header header : responseHeaders) {
                if (header.getName().equals("Operation-Location")) {
                    operationLocation = header.getValue();
                    break;
                }
            }

            if (operationLocation == null) {
                System.out.println("\nError retrieving Operation-Location.\nExiting.");
                System.exit(1);
            }

            /* Wait for asyncBatchAnalyze to complete. In place of this wait, can we trigger
               any notification from Computer Vision when the extract text operation is complete? */
            Thread.sleep(5000);

            // Call the second REST API method and get the response.
            HttpGet resultRequest = new HttpGet(operationLocation);
            resultRequest.setHeader("Ocp-Apim-Subscription-Key", SUBSCRIPTION_KEY);

            HttpResponse resultResponse = httpResultClient.execute(resultRequest);
            HttpEntity responseEntity = resultResponse.getEntity();
            if (responseEntity != null) {
                String jsonString = EntityUtils.toString(responseEntity);
                JSONObject json = new JSONObject(jsonString);
                System.out.println(json.toString(2));
            }
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }
}
There is no notification / webhook mechanism for those asynchronous operations.
The only option I can see right now is to change the implementation you mentioned to poll in a loop, checking regularly whether the result is there or not (plus a mechanism to stop waiting, based on a maximum waiting time or a number of retries).
See the sample in the Microsoft docs, especially this part:
// If the first REST API method completes successfully, the second
// REST API method retrieves the text written in the image.
//
// Note: The response may not be immediately available. Text
// recognition is an asynchronous operation that can take a variable
// amount of time depending on the length of the text.
// You may need to wait or retry this operation.
//
// This example checks once per second for ten seconds.
string contentString;
int i = 0;
do
{
    System.Threading.Thread.Sleep(1000);
    response = await client.GetAsync(operationLocation);
    contentString = await response.Content.ReadAsStringAsync();
    ++i;
}
while (i < 10 && contentString.IndexOf("\"status\":\"Succeeded\"") == -1);

if (i == 10 && contentString.IndexOf("\"status\":\"Succeeded\"") == -1)
{
    Console.WriteLine("\nTimeout error.\n");
    return;
}

// Display the JSON response.
Console.WriteLine("\nResponse:\n\n{0}\n",
    JToken.Parse(contentString).ToString());
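If a push-style notification is really needed, one workaround (only a sketch, not a built-in Computer Vision feature; the queue name "ocr-completed" and the Service Bus connection string are placeholders) is to run this polling loop in your own background job and publish a message to a queue you own once the status is "Succeeded", for example with the Azure.Messaging.ServiceBus client shown elsewhere on this page:
// Assumes: using Azure.Messaging.ServiceBus; using System.Net.Http;
// and that `client` already carries the Ocp-Apim-Subscription-Key header.
static async Task NotifyWhenDone(HttpClient client, string operationLocation, string serviceBusConnectionString)
{
    string contentString;
    int attempts = 0;
    do
    {
        await Task.Delay(1000);
        var response = await client.GetAsync(operationLocation);
        contentString = await response.Content.ReadAsStringAsync();
        ++attempts;
    }
    while (attempts < 10 && !contentString.Contains("\"status\":\"Succeeded\""));

    if (!contentString.Contains("\"status\":\"Succeeded\""))
        return; // timed out; handle as needed

    // Publish your own "OCR finished" notification carrying the result
    await using var sbClient = new ServiceBusClient(serviceBusConnectionString);
    ServiceBusSender sender = sbClient.CreateSender("ocr-completed");
    await sender.SendMessageAsync(new ServiceBusMessage(contentString));
}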

Best way to asynchronously receive all brokered messages from the service bus

Here is what I am trying to achieve:
On the service bus I have a topic which contains 5005 messages.
I need to peek all the messages without completing them and add them to a list (List<BrokeredMessage>).
Here is what I am trying:
IEnumerable<BrokeredMessage> dlIE = null;
List<BrokeredMessage> bmList = new List<BrokeredMessage>();
long i = 0;
while (i < count) //count is the total messages in the subscription
{
    dlIE = deadLetterClient.ReceiveBatch(100);
    bmList.AddRange(dlIE);
    i = i + dlIE.Count();
}
In the above code I can only fetch 100 messages at a time, since there is a batch-size limit when retrieving messages.
I have also tried to do it asynchronously, but it always returns 0 messages in the list. This is the code for that:
static List<BrokeredMessage> messageList = new List<BrokeredMessage>();

long i = 0;
while (i < count)
{
    var task = ReceiveMessagesBatchForSubscription(deadLetterClient);
    i = i + 100;
}
Task.WaitAny();

public async static Task ReceiveMessagesBatchForSubscription(SubscriptionClient deadLetterClient)
{
    while (true)
    {
        var receivedMessage = await deadLetterClient.ReceiveBatchAsync(100);
        messageList.AddRange(receivedMessage);
    }
}
Can anyone please suggest a better way to do this?
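As a partial illustration rather than a full answer, here is a minimal sketch of the asynchronous variant that actually fills the list: each batch has to be awaited before the result is read, otherwise the list is inspected while the receive tasks are still running (which is why it showed 0 messages). Note that ReceiveBatch/ReceiveBatchAsync locks or removes messages depending on the ReceiveMode; if the goal is truly to peek without completing, the client's PeekBatch method may be a better fit.
// Requires System.Linq for Any(); uses the same SubscriptionClient API as the question
static async Task<List<BrokeredMessage>> ReceiveAllAsync(SubscriptionClient deadLetterClient, long count)
{
    var messages = new List<BrokeredMessage>();
    while (messages.Count < count)
    {
        // Await each batch before requesting the next one
        var batch = await deadLetterClient.ReceiveBatchAsync(100);
        if (batch == null || !batch.Any())
            break; // nothing left to receive
        messages.AddRange(batch);
    }
    return messages;
}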

Servicestack RabbitMQ: Infinite loop fills up dead-letter-queue when RabbitMqProducer cannot redeclare temporary queue in RPC-pattern

When I declare a temporary reply queue as exclusive (e.g. an anonymous queue (exclusive=true, autodelete=true) in the RPC pattern), the response message cannot be posted to the specified reply queue (e.g. message.replyTo="amq.gen-Jg_tv8QYxtEQhq0tF30vAA"), because RabbitMqProducer.PublishMessage() tries to redeclare the queue with different parameters (exclusive=false), which understandably results in an error.
Unfortunately, the erroneous call to channel.RegisterQueue(queueName) in RabbitMqProducer.PublishMessage() seems to nack the request message in the incoming queue, so that when ServiceStack.Messaging.MessageHandler.DefaultInExceptionHandler tries to acknowledge the request message (to remove it from the incoming queue), the message just stays on top of the incoming queue and gets processed all over again. This procedure repeats indefinitely and produces one DLQ message per iteration, which slowly fills up the dead-letter queue.
I am wondering,
whether ServiceStack correctly handles the case where ServiceStack.RabbitMq.RabbitMqProducer cannot declare the response queue
whether ServiceStack.RabbitMq.RabbitMqProducer must always declare the response queue before publishing the response
whether it wouldn't be best to have a configuration flag to omit all exchange and queue declaration calls (outside of the first initialization). The RabbitMqProducer would then just assume every queue/exchange is properly set up and simply publish the message.
(At the moment our client just declares its response queue to be exclusive=false and everything works fine. But I'd really like to use rabbitmq's built-in temporary queues.)
MQ client code (requires a simple "SayHello" service):
const string INQ_QUEUE_NAME = "mq:SayHello.inq";
const string EXCHANGE_NAME = "mx.servicestack";

var factory = new ConnectionFactory() { HostName = "192.168.179.110" };
using (var connection = factory.CreateConnection())
{
    using (var channel = connection.CreateModel())
    {
        // Create temporary queue and set up bindings.
        // This works (because "mq:tmp:" stops RabbitMqProducer from redeclaring the response queue):
        string responseQueueName = "mq:tmp:SayHello_" + Guid.NewGuid().ToString() + ".inq";
        channel.QueueDeclare(responseQueueName, false, false, true, null);

        // This does NOT work (RabbitMqProducer tries to declare the queue again => error):
        //string responseQueueName = Guid.NewGuid().ToString() + ".inq";
        //channel.QueueDeclare(responseQueueName, false, false, true, null);

        // This does NOT work either (RabbitMqProducer tries to declare the queue again => error):
        //var responseQueueName = channel.QueueDeclare().QueueName;

        // Publish a simple SayHello request to the standard ServiceStack exchange ("mx.servicestack")
        // with routing key "mq:SayHello.inq":
        var props = channel.CreateBasicProperties();
        props.ReplyTo = responseQueueName;
        channel.BasicPublish(EXCHANGE_NAME, INQ_QUEUE_NAME, props, Encoding.UTF8.GetBytes("{\"ToName\": \"Chris\"}"));

        // Consume the response from the response queue
        var consumer = new QueueingBasicConsumer(channel);
        channel.BasicConsume(responseQueueName, true, consumer);
        var ea = (BasicDeliverEventArgs)consumer.Queue.Dequeue();

        // Print the result: should be "Hello, Chris!"
        Console.WriteLine(Encoding.UTF8.GetString(ea.Body));
    }
}
Everything seems to work fine when RabbitMqProducer does not try to declare the queues, like that:
public void PublishMessage(string exchange, string routingKey, IBasicProperties basicProperties, byte[] body)
{
    const bool MustDeclareQueue = false; // new config parameter??

    try
    {
        if (MustDeclareQueue && !Queues.Contains(routingKey))
        {
            Channel.RegisterQueueByName(routingKey);
            Queues = new HashSet<string>(Queues) { routingKey };
        }

        Channel.BasicPublish(exchange, routingKey, basicProperties, body);
    }
    catch (OperationInterruptedException ex)
    {
        if (ex.Is404())
        {
            Channel.RegisterExchangeByName(exchange);
            Channel.BasicPublish(exchange, routingKey, basicProperties, body);
        }
        throw;
    }
}
The issue was addressed in ServiceStack version v4.0.32 (fixed in this commit).
The RabbitMqProducer no longer tries to redeclare temporary queues and instead assumes that the reply queue already exists (which solves my problem).
(The underlying cause of the infinite loop (wrong error handling while publishing the response message) probably still exists.)
Edit: Example
The following basic MQ client (which does not use the ServiceStack MQ client and instead depends directly on RabbitMQ's .NET library; it does use ServiceStack.Text for serialization, though) can perform generic RPCs:
public class MqClient : IDisposable
{
    ConnectionFactory factory = new ConnectionFactory()
    {
        HostName = "192.168.97.201",
        UserName = "guest",
        Password = "guest",
        //VirtualHost = "test",
        Port = AmqpTcpEndpoint.UseDefaultPort,
    };

    private IConnection connection;
    private string exchangeName;

    public MqClient(string defaultExchange)
    {
        this.exchangeName = defaultExchange;
        this.connection = factory.CreateConnection();
    }

    public TResponse RpcCall<TResponse>(IReturn<TResponse> reqDto, string exchange = null)
    {
        using (var channel = connection.CreateModel())
        {
            string inq_queue_name = string.Format("mq:{0}.inq", reqDto.GetType().Name);
            string responseQueueName = channel.QueueDeclare().QueueName;

            var props = channel.CreateBasicProperties();
            props.ReplyTo = responseQueueName;

            var message = ServiceStack.Text.JsonSerializer.SerializeToString(reqDto);
            channel.BasicPublish(exchange ?? this.exchangeName, inq_queue_name, props, UTF8Encoding.UTF8.GetBytes(message));

            var consumer = new QueueingBasicConsumer(channel);
            channel.BasicConsume(responseQueueName, true, consumer);
            var ea = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
            //channel.BasicAck(ea.DeliveryTag, false);

            string response = UTF8Encoding.UTF8.GetString(ea.Body);
            string responseType = ea.BasicProperties.Type;
            Console.WriteLine(" [x] New Message of Type '{1}' Received:{2}{0}", response, responseType, Environment.NewLine);

            return ServiceStack.Text.JsonSerializer.DeserializeFromString<TResponse>(response);
        }
    }

    ~MqClient()
    {
        this.Dispose();
    }

    public void Dispose()
    {
        if (connection != null)
        {
            this.connection.Dispose();
            this.connection = null;
        }
    }
}
Key points:
client declares anonymous queue (=with empty queue name) channel.QueueDeclare()
server generates queue and returns queue name (amq.gen*)
client adds queue name to message properties (props.ReplyTo = responseQueueName;)
ServiceStack automatically sends response to temporary queue
client picks up response and deserializes
It can be used like this:
using (var mqClient = new MqClient("mx.servicestack"))
{
    var pingResponse = mqClient.RpcCall<PingResponse>(new Ping { });
}
Important: You've got to use servicestack version 4.0.32+.

.NET 4.0 BackgroundWorker class and unusual behavior

I am running into some strange behavior with the BackgroundWorker class that leads me to believe I don't fully understand how it works. I assumed that the following code sections were more or less equivalent, except for some extra features that BackgroundWorker implements (like progress reporting, etc.):
Section 1:
void StartSeparateThread()
{
    BackgroundWorker bw = new BackgroundWorker();
    bw.DoWork += new DoWorkEventHandler(bw_DoWork);
    bw.RunWorkerAsync();
}

void bw_DoWork(object sender, DoWorkEventArgs e)
{
    // Execute some code asynchronous to the thread that owns the function
    // StartSeparateThread() but synchronous to itself.
    var SendCommand = "SomeCommandToSend";
    var toWaitFor = new List<string>() { "Various", "Possible", "Outputs to wait for" };
    var SecondsToWait = 30;

    // This calls a function that sends the command over the NetworkStream and waits
    // for various responses.
    var Result = SendAndWaitFor(SendCommand, toWaitFor, SecondsToWait);
}
Section 2:
void StartSeparateThread()
{
    Thread pollThread = new Thread(new ThreadStart(DoStuff));
    pollThread.Start();
}

// ThreadStart expects a parameterless method, so DoStuff takes no arguments here.
void DoStuff()
{
    // Execute some code asynchronous to the thread that owns the function
    // StartSeparateThread() but synchronous to itself.
    var SendCommand = "SomeCommandToSend";
    var toWaitFor = new List<string>() { "Various", "Possible", "Outputs to wait for" };
    var SecondsToWait = 30;

    // This calls a function that sends the command over the NetworkStream and waits
    // for various responses.
    var Result = SendAndWaitFor(SendCommand, toWaitFor, SecondsToWait);
}
I was using Section 1 to run some code that sent a string over a NetworkStream and waited for a desired response string, capturing all output during that time. I wrote a function to do this that would return the NetworkStream output, the index of the sent string, and the index of the desired response string. I was seeing some strange behavior with this, so I changed the function to only return when both the sent string and the response string were found and the index of the found string was greater than the index of the sent string; otherwise it would loop forever (just for testing).
I found that the function would indeed return, but that the index of both strings was -1 and the output string was null, or sometimes filled with the expected output of the previous call. If I had to guess what was happening, it would be that external functions called from within bw_DoWork() run asynchronously to the thread that owns bw_DoWork(). Since my SendAndWaitFor() function was called multiple times in succession, the second call would run before the first call finished, overwriting the results of the first call after they were returned but before they could be evaluated. This seems to fit, because the first call would always run correctly and successive calls would show the strange behavior described above, but it seems counter-intuitive to how the BackgroundWorker class should behave. Also, if I break inside the SendAndWaitFor function, things behave properly. This again leads me to believe there is some multi-threading going on within bw_DoWork() itself.
When I change the code in the first section above to the code of the second section, things work entirely as expected. So, can anyone who understands the BackgroundWorker class explain what could be going on? Below are some related functions that may be relevant.
Thanks!
public Dictionary<string, string> SendAndWaitFor(string sendString, List<string> toWaitFor, int seconds)
{
    var toReturn = new Dictionary<string, string>();
    var data = new List<byte>();
    var enc = new ASCIIEncoding();
    var output = "";
    var FoundString = "";

    // Wait for the current buffer to clear
    output = this.SynchronousRead();
    while (!string.IsNullOrEmpty(output))
    {
        output = SynchronousRead();
    }

    // Output should be null at this point and the buffer should be clear.
    // Send the desired data.
    this.write(enc.GetBytes(sendString));

    // Look for all desired strings until the timeout is reached
    int sendIndex = -1;
    int foundIndex = -1;
    output += SynchronousRead();
    for (DateTime start = DateTime.Now; DateTime.Now - start < new TimeSpan(0, 0, seconds); )
    {
        // Wait for a short period to allow the buffer to fill with new data
        Thread.Sleep(300);

        // Read the buffer and add it to the output
        output += SynchronousRead();
        foreach (var s in toWaitFor)
        {
            sendIndex = output.IndexOf(sendString);
            foundIndex = output.LastIndexOf(s);
            if (foundIndex > sendIndex)
            {
                toReturn["sendIndex"] = sendIndex.ToString();
                toReturn["foundIndex"] = foundIndex.ToString();
                toReturn["Output"] = output;
                toReturn["FoundString"] = s;
                return toReturn;
            }
        }
    }

    // Set this to loop infinitely while debugging to make sure the function was only
    // returning above
    while (true)
    {
    }

    toReturn["sendIndex"] = "";
    toReturn["foundIndex"] = "";
    toReturn["Output"] = output;
    toReturn["FoundString"] = "";
    return toReturn;
}
public void write(byte[] toWrite)
{
    var enc = new ASCIIEncoding();
    var writeString = enc.GetString(toWrite);
    var ns = connection.GetStream();
    ns.Write(toWrite, 0, toWrite.Length);
}

public string SynchronousRead()
{
    string toReturn = "";
    ASCIIEncoding enc = new ASCIIEncoding();
    var ns = connection.GetStream();
    var sb = new StringBuilder();
    while (ns.DataAvailable)
    {
        var buffer = new byte[4096];
        var numberOfBytesRead = ns.Read(buffer, 0, buffer.Length);
        sb.AppendFormat("{0}", Encoding.ASCII.GetString(buffer, 0, numberOfBytesRead));
        toReturn += sb.ToString();
    }
    return toReturn;
}
All data to be used by a background worker should be passed in through the DoWorkEventArgs and nothing should be pulled off of the class (or GUI interface).
Looking at your code, I could not identify where the property(?) connection was being created. My guess is that connection is created on a different thread, or may be pulling read information, maybe from a GUI(?), and either of those could cause problems.
I suggest that you create the connection instance in the DoWork event and not pull an existing one off of a different thread. Also verify that the connection does not access any info off of a GUI; its info should be passed in as it is created.
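A minimal sketch of what that might look like (ConnectionSettings is a hypothetical type standing in for whatever the worker needs; the point is that everything reaches the worker through RunWorkerAsync and e.Argument rather than being read off another thread):
void StartSeparateThread()
{
    var bw = new BackgroundWorker();
    bw.DoWork += bw_DoWork;
    // Hand the worker everything it needs as the argument
    bw.RunWorkerAsync(new ConnectionSettings { Host = "10.0.0.1", Port = 23 });
}

void bw_DoWork(object sender, DoWorkEventArgs e)
{
    var settings = (ConnectionSettings)e.Argument;
    // Create the connection on the worker thread instead of sharing one
    using (var connection = new TcpClient(settings.Host, settings.Port))
    {
        // ... send the command and read the response here, on this thread only
    }
}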
I discuss an issue with the Background worker on my blog C# WPF: Linq Fails in BackgroundWorker DoWork Event which might show you where the issue lies in your code.
