TrackTrace is not logging to Application Insights - Azure

I have the code below. It's not logging to Trace and I am not sure why. Can you help me with this?
public static void SAPLogger(string Message)
{
    TelemetryConfiguration.Active.InstrumentationKey = "XXX-XXX-XXX";
    TelemetryClient TelePositive = new TelemetryClient
    {
        InstrumentationKey = "XXX-XXX" // (Optional Value)
    };
    //TelePositive.TrackRequest(Req);
    TelePositive.TrackTrace(Message, SeverityLevel.Verbose, new Dictionary<string, string> { { "Information", "SAP" } });
}
I am calling this method in the Main() method.
static void Main(string[] args)
{
    try
    {
        int a = 5;
        int c = a / 2;
        SAPLogger("The value is Success" + c);
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
    }
}
I really am not sure why this is not logging. Please help.

Your example app is probably exiting before your telemetry gets sent.
DeveloperMode should cause it to send immediately; however, if your process exits immediately, like your test app appears to, the process might still end before the web request gets created and sent.
For short-lived applications like that test app, you'll probably need a flush and a sleep call of some kind at the end to ensure telemetry has a chance to send before the app quits.
For a real application that lives for a long time, telemetry will be batched and sent after an amount of time passes or a number of events is reached, and then that batch will be sent. Your app probably still would want to flush/wait at the end just to make sure any batched-up telemetry gets sent.
But in either case, the flush/wait should only occur once, at the end, not with every call to track telemetry.
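For example, a minimal sketch of the flush-then-wait pattern, reusing the instrumentation key already set on TelemetryConfiguration.Active (the five-second sleep is an arbitrary grace period, not an official figure):
static readonly TelemetryClient Telemetry = new TelemetryClient();

static void Main(string[] args)
{
    int a = 5;
    int c = a / 2;
    Telemetry.TrackTrace("The value is Success" + c, SeverityLevel.Verbose,
        new Dictionary<string, string> { { "Information", "SAP" } });

    // Flush pushes buffered telemetry to the channel, but transmission is still
    // asynchronous, so give it a moment before the process exits.
    Telemetry.Flush();
    Thread.Sleep(5000);
}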

Related

MSMQ architecture with dedicated processors per database

I have a web application in ASP.NET MVC, C#, and I have a specific use case that takes a long time to process, so users have to wait until the process is complete. I want to use MSMQ and relay the heavy work to a dedicated MSMQ consumer/servicer. Our application has multiple clients, and each client has their own SQL database, so 100 clients make 100 separate SQL databases. The real challenge is to make the process faster using MSMQ, but the task of one client should not affect the performance of others. So I have 2 solutions:
Option 1: a unique MSMQ private queue per database, so in my case 100 queues and growing, with 1 dedicated ASP.NET console application that listens to each dedicated queue, so in my case 100 processors or console applications.
Option 2: 1 big MSMQ private queue for all databases, with either:
A: 1 dedicated MSMQ consumer per database, so 100 processors, or
B: 1 MSMQ consumer that listens to the big queue.
I want to stick with Option 1, but I would like to know: is this a feasible, enterprise-grade solution?
You actually have two questions here.
First, how do you allocate processor affinity for SQL Server? Select the database in SQL Server Management Studio, right-click, and follow the dialog from there.
Clean your database regularly:
DBCC FREEPROCCACHE;
DBCC DROPCLEANBUFFERS;
For MSMQ, turn on journaling, but also consider another queuing product (RabbitMQ etc.), or write a simple one to enqueue the jobs; a sample follows:
// Requires: using System; using System.Collections.Concurrent; using System.Threading;
public class MultiThreadQueue
{
    BlockingCollection<string> _jobs = new BlockingCollection<string>();

    public MultiThreadQueue(int numThreads)
    {
        for (int i = 0; i < numThreads; i++)
        {
            var thread = new Thread(OnHandlerStart)
            { IsBackground = true }; // Mark 'false' if you want to prevent program exit until jobs finish
            thread.Start();
        }
    }

    public void Enqueue(string job)
    {
        if (!_jobs.IsAddingCompleted)
        {
            _jobs.Add(job);
        }
    }

    public void Stop()
    {
        // This will cause '_jobs.GetConsumingEnumerable' to stop blocking and exit when it's empty
        _jobs.CompleteAdding();
    }

    private void OnHandlerStart()
    {
        foreach (var job in _jobs.GetConsumingEnumerable(CancellationToken.None))
        {
            Console.WriteLine(job);
            Thread.Sleep(10);
        }
    }
}
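Usage might look like this (hypothetical jobs):
// Spin up four background consumer threads.
var queue = new MultiThreadQueue(4);
queue.Enqueue("job 1");
queue.Enqueue("job 2");
// Signal that no more jobs are coming; consumers drain the queue and exit.
queue.Stop();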
Hope this helps :)
The question has been reworded; he meant something else when he said "processors".
Update: added a consumer pattern with OnPeek:
You really need to post some code!
Consider using the OnPeekCompleted method. If there is an error, you can leave the message on the queue.
If you have some kind of header which identifies the message, you can switch to a different dedicated queue/thread.
private static void OnPeekCompleted(Object sourceQueue, PeekCompletedEventArgs asyncResult)
{
    // Set up and connect to the queue.
    MessageQueue mq = (MessageQueue)sourceQueue;

    // Gets a new transaction going.
    using (var txn = new MessageQueueTransaction())
    {
        try
        {
            // Retrieve the message (inside the transaction) and process it.
            txn.Begin();
            var message = mq.Receive(txn);
#if DEBUG
            // Display message information on the screen.
            if (message != null)
            {
                Console.WriteLine("{0}: {1}", message.Label, (string)message.Body);
            }
#endif
            // The message will be removed on txn.Commit.
            txn.Commit();
        }
        catch (Exception ex)
        {
            // If there is an error you can leave the message on the queue; don't remove it.
            Console.WriteLine(ex.ToString());
            txn.Abort();
        }
    }

    // Restart the asynchronous peek operation.
    mq.BeginPeek();
}
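To wire the handler up, something like this should work (a minimal sketch, assuming a transactional private queue; the path is hypothetical):
// Requires a reference to System.Messaging.
var mq = new MessageQueue(@".\private$\qPirate");
mq.PeekCompleted += OnPeekCompleted; // the handler shown above
mq.BeginPeek();                      // start the first asynchronous peek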
You can also use a service broker.

Akka.net Ask timeout when used in Azure WebJob

At work we have some code in an Azure WebJob where we use RabbitMQ.
The basic workflow is this:
A message arrives on RabbitMQ Queue
We have a message handler for the incoming message
Within the message handler we start a top level (user) supervisor actor where we "ask" it to handle the message
The supervisor actor hierarchy is like this
And the relevant top level code is something like this (this is the WebJob code)
static void Main(string[] args)
{
    try
    {
        // Bootstrap the Akka IoC resolver well ahead of any actor usages.
        new AutoFacDependencyResolver(ContainerOperations.Instance.Container, ContainerOperations.Instance.Container.Resolve<ActorSystem>());

        var system = ContainerOperations.Instance.Container.Resolve<ActorSystem>();
        var busQueueReader = ContainerOperations.Instance.Container.Resolve<IBusQueueReader>();
        var dateTime = ContainerOperations.Instance.Container.Resolve<IDateTime>();

        busQueueReader.AddHandler<ProgramCalculationMessage>("RabbitQueue", x =>
        {
            // This is code that gets called whenever we have a RabbitMQ message arrive.
            try
            {
                // SupervisorActor is a singleton.
                var supervisorActor = ContainerOperations.Instance.Container.ResolveNamed<IActorRef>("SupervisorActor");
                var actorMessage = new SomeActorMessage();
                var supervisorRunTask = supervisorActor.Ask(actorMessage, TimeSpan.FromMinutes(25));

                // We want to wait this guy out.
                var supervisorRunResult = supervisorRunTask.GetAwaiter().GetResult();

                switch (supervisorRunResult)
                {
                    case CompletedEvent completed:
                    {
                        break;
                    }
                    case FailedEvent failed:
                    {
                        throw failed.Exception;
                    }
                }
            }
            catch (Exception ex)
            {
                _log.Error(ex, "Error found in Webjob");
                // Throw it for the actual RabbitMqQueueReader handler so the message gets NACKed.
                throw;
            }
        });

        Thread.Sleep(Timeout.Infinite);
    }
    catch (Exception ex)
    {
        _log.Error(ex, "Error found");
        throw;
    }
}
And this is the relevant IOC code (we are using Autofac + Akka.NET DI for Autofac)
builder.RegisterType<SupervisorActor>();

_actorSystem = new Lazy<ActorSystem>(() =>
{
    var akkaconf = ActorUtil.LoadConfig(_akkaConfigPath).WithFallback(ConfigurationFactory.Default());
    return ActorSystem.Create("WebJobSystem", akkaconf);
});

builder.Register<ActorSystem>(cont => _actorSystem.Value);

builder.Register(cont =>
{
    var system = cont.Resolve<ActorSystem>();
    return system.ActorOf(system.DI().Props<SupervisorActor>(), "SupervisorActor");
})
.SingleInstance()
.Named<IActorRef>("SupervisorActor");
The problem
So the code is working fine and doing what we want it to, apart from the Akka.Net "ask" timeout shown above in the WebJob code.
Annoyingly, this seems to work fine if I run the WebJob locally, where I can simulate an "ask" timeout by providing a new supervisor actor that simply doesn't EVER respond with a message back to the "Sender".
This works perfectly running on my machine, but when we run this code in Azure, we DO NOT see a timeout for the "ask", even though one of our workflow runs exceeded the "ask" timeout by a mile.
I just don't know what could be causing this behavior. Does anyone have any ideas?
Could there be some Azure-specific config value for the WebJob that I need to set?
The answer to this was to use the async Rabbit handlers, which apparently came out in v5.0 of the C# RabbitMQ client. The official docs still show the sync usage (sadly).
This article is quite good: https://gigi.nullneuron.net/gigilabs/asynchronous-rabbitmq-consumers-in-net/
Once we did this, all was good.
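For reference, the async consumer hookup looks roughly like this (a minimal sketch, assuming the RabbitMQ.Client 5.x API; the queue name and the supervisorActor reference mirror the question's code):
// Requires the RabbitMQ.Client NuGet package, v5.0 or later.
var factory = new ConnectionFactory
{
    HostName = "localhost",
    DispatchConsumersAsync = true // opt in to the async consumer dispatcher
};

using (var connection = factory.CreateConnection())
using (var channel = connection.CreateModel())
{
    var consumer = new AsyncEventingBasicConsumer(channel);
    consumer.Received += async (sender, ea) =>
    {
        // Await the actor "ask" instead of blocking with GetAwaiter().GetResult().
        var result = await supervisorActor.Ask(new SomeActorMessage(), TimeSpan.FromMinutes(25));
        channel.BasicAck(ea.DeliveryTag, multiple: false);
    };

    channel.BasicConsume(queue: "RabbitQueue", autoAck: false, consumer: consumer);
    Thread.Sleep(Timeout.Infinite);
}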

Worker stuck in a Sandbox?

Trying to figure out why I can log in with my REST API just fine on the main thread but not in a worker. All communication channels are operating fine and I am able to load it up no problem. However, when it tries to send some data it just hangs.
[Embed(source="../bin/BGThread.swf", mimeType="application/octet-stream")]
private static var BackgroundWorker_ByteClass:Class;

public static function get BackgroundWorker():ByteArray
{
    return new BackgroundWorker_ByteClass();
}
On a test script:
public function Main()
{
    fBCore.init("secrets", "my-firebase-id");
    trace("Init");

    //fBCore.auth.addEventListener(FBAuthEvent.LOGIN_SUCCES, hanldeFBSuccess);
    fBCore.auth.addEventListener(AuthEvent.LOGIN_SUCCES, hanldeFBSuccess);
    fBCore.auth.addEventListener(IOErrorEvent.IO_ERROR, handleIOError);
    fBCore.auth.email_login("admin#admin.admin", "password");
}

private function handleIOError(e:IOErrorEvent):void
{
    trace("IO error");
    trace(e.text); //Nothing here
}

private function hanldeFBSuccess(e:AuthEvent):void
{
    trace("Main login success.");
    trace(e.message); //Complete success.
}
When triggered by a class via an internal worker channel passed from Main on init:
Primordial:
private function handleLoginClick(e:MouseEvent):void
{
    login_mc.buttonMode = false;
    login_mc.play();
    login_mc.removeEventListener(MouseEvent.CLICK, handleLoginClick);
    log("Logging in as " + email_mc.text_txt.text);
    commandChannel.send([BGThreadCommand.LOGIN, email_mc.text_txt.text, password_mc.text_txt.text]);
}
Worker:
...
case BGThreadCommand.LOGIN:
    log("Logging in with " + message[1] + "::" + message[2]); //Log goes to a progress channel and comes to the main thread reading the outputs successfully.
    fbCore.auth.email_login(message[1], message[2]);
    fbCore.auth.addEventListener(AuthEvent.LOGIN_SUCCES, loginSuccess); //Nothing
    fbCore.auth.addEventListener(IOErrorEvent.IO_ERROR, handleLoginIOError); //Fires
    break;
Auth Rest Class: https://github.com/sfxworks/FirebaseREST/blob/master/src/net/sfxworks/firebaseREST/Auth.as
Is this a worker limitation or a security sandbox issue? I have a deep feeling it is the latter of the two. If that's the case, how would I load the worker in a way that also gives it the proper permissions to act?
Completely ignored the giveAppPrivileges parameter in the createWorker function. Sorry, Stack Overflow. Sometimes I ask bad questions when I get little (or, in this case, no) sleep the night before.

How to read Azure Service Bus messages from Multiple Queues with one worker

I have three queues and one worker that I want to monitor the three queues (or only two of them):
One queue is qPirate
One queue is qShips
One queue is qPassengers
The idea is that workers will either be looking at all 3 of them, 2 of them, or one of them, and doing different things depending on what the message says.
The key, though, is that if a message is failing because ship1 is offline, all messages in qShips will keep retrying; workers that watch that queue along with the others will get hung up slightly, because they will keep trying to process messages from that queue and only look at the other queues a little, while the workers that watch the other two queues and skip qShips will continue to process messages without holdup or delays.
public static void GotMessage([ServiceBusTrigger("%LookAtAllQueuesintheservicebus%")] BrokeredMessage message)
{
    var handler = new MessageHandler();
    var manager = new MessageManager(
        handler,
        "PirateShips"
    );
    manager.ProcessMessageViaHandler(message);
}
Looking around online I'm guessing this isn't something that's possible, but it seems like it would be? Thanks in advance either way!
Edit 1: I'll add the JobHost configuration as well to attempt to clarify things a bit:
JobHostConfiguration config = new JobHostConfiguration()
{
    DashboardConnectionString = "DefaultEndpointsProtocol=https;AccountName=PiratesAreUs;AccountKey=Yarr",
    StorageConnectionString = "DefaultEndpointsProtocol=https;AccountName=PiratesAreUs;AccountKey=Yarr",
    NameResolver = new QueueNameResolver()
};

ServiceBusConfiguration serviceBusConfig = new ServiceBusConfiguration()
{
    ConnectionString = "Endpoint=AllPirateQueuesLocatedHere;SharedAccessKeyName=PiratesAreUs;SharedAccessKey=Yarr"
};
serviceBusConfig.MessageOptions.AutoComplete = false;
serviceBusConfig.MessageOptions.AutoRenewTimeout = TimeSpan.FromMinutes(1);
serviceBusConfig.MessageOptions.MaxConcurrentCalls = 1;

config.UseServiceBus(serviceBusConfig);
JobHost host = new JobHost(config);
host.RunAndBlock();
Also, the QueueNameResolver class is simply:
public class QueueNameResolver : INameResolver
{
    public string Resolve(string name)
    {
        return name;
    }
}
I don't appear to have any way to make the NameResolver resolve multiple queues; while I can say that I want the JobHost to look at a certain Service Bus, I don't know how to tell it to look at all the queues within that Service Bus.
In other words, I want multiple ServiceBusTriggers on this worker so that if a message gets sent to qpirate1 and qships1, which are both located in the service bus AllPirateQueuesHere, the worker can pick up the message in qpirate1, process it, then pick up the message in qships1 and process it.
Figured out the answer... This is possible and it's simpler than I thought; I'm not sure why I didn't connect the dots, but I'm still curious why there isn't more documentation about this. Apparently you simply create one function per queue you want the worker to look at. So if you had three queues, you'd want something like the below (you can handle each message differently).
public static void GotMessage1([ServiceBusTrigger("%qPirate1%")] BrokeredMessage message)
{
    var handler = new MessageHandler();
    var manager = new MessageManager(
        handler,
        "Pirates"
    );
    manager.ProcessMessageViaHandler(message);
}

public static void GotMessage2([ServiceBusTrigger("%qShip1%")] BrokeredMessage message)
{
    var handler = new MessageHandler();
    var manager = new MessageManager(
        handler,
        "Ships"
    );
    manager.ProcessMessageViaHandler(message);
}

public static void GotBooty([ServiceBusTrigger("%qBooty%")] BrokeredMessage message)
{
    var handler = new MessageHandler();
    var manager = new MessageManager(
        handler,
        "Booty"
    );
    manager.ProcessMessageViaHandler(message);
}

Async Logger. Can I lose/delay log entries?

I'm implementing my own logging framework. Following is my BaseLogger, which receives the log entries and pushes them to the actual logger, which implements the abstract Log method.
I originally used the C# TPL for logging in an async manner, but I now use threads instead of the TPL. (A TPL task doesn't hold a real thread, so if all threads of the application end, tasks will stop as well, which will cause all 'waiting' log entries to be lost.)
public abstract class BaseLogger
{
    // ... Omitted properties, constructor, etc. ... //

    public virtual void AddLogEntry(LogEntry entry)
    {
        if (!AsyncSupported)
        {
            // The underlying logger doesn't support async.
            // Simply call the Log method and return.
            Log(entry);
            return;
        }

        // Logger supports async.
        LogAsync(entry);
    }

    private void LogAsync(LogEntry entry)
    {
        lock (LogQueueSyncRoot) // Make sure we have a lock before accessing the queue.
        {
            LogQueue.Enqueue(entry);
        }

        if (LogThread == null || LogThread.ThreadState == ThreadState.Stopped)
        {
            // Either the thread has completed, or this is the first time we're logging to this logger.
            LogThread = new Thread(new ThreadStart(() =>
            {
                while (true)
                {
                    LogEntry logEntry;
                    lock (LogQueueSyncRoot)
                    {
                        if (LogQueue.Count > 0)
                        {
                            logEntry = LogQueue.Dequeue();
                        }
                        else
                        {
                            break;
                            // Is it possible for a message to be added
                            // right after the break, once I leave the lock {} but
                            // before I exit the loop and the thread gets 'completed'??
                        }
                    }
                    Log(logEntry);
                }
            }));
            LogThread.Start();
        }
    }

    // Actual logger implementations will implement this method.
    protected abstract void Log(LogEntry entry);
}
Note that AddLogEntry can be called from multiple threads at the same time.
My question is: is it possible for this implementation to lose log entries?
I'm worried that it is possible to add a log entry to the queue right after my thread exits the loop via the break statement in the else clause and leaves the lock block, while the thread is still in the 'Running' state.
I do realize that, because I'm using a queue, even if I miss an entry, the next request to log will push the missed entry as well. But this is not acceptable, especially if it happens for the last log entry of the application.
Also, please let me know whether and how I can implement the same, but using the new C# 5.0 async and await keywords with a cleaner code. I don't mind requiring .NET 4.5.
Thanks in Advance.
While you could likely get this to work, in my experience I'd recommend, if possible, using an existing logging framework :) For instance, there are various options for async logging/appenders with log4net, such as this async appender wrapper thingy.
Otherwise, IMHO, since you're going to be blocking a threadpool thread during your logging operation anyway, I would instead just start a dedicated thread for your logging. You seem to be kind-of going for that approach already, just via Task, so that you'd not hold a threadpool thread when nothing is logging. However, I think the simplification in implementation favors just having the dedicated thread.
Once you have a dedicated logging thread, you then only need an intermediate ConcurrentQueue. At that point, your log method just adds to the queue and your dedicated logging thread just does that while loop you already have. You can wrap it with BlockingCollection if you need blocking/bounded behavior; see the sketch below.
By having the dedicated thread as the only thing that writes, it eliminates any possibility of having multiple threads/tasks pulling off queue entries and trying to write log entries at the same time (painful race condition). Since the log method is now just adding to a collection, it doesn't need to be async and you don't need to deal with the TPL at all, making it simpler and easier to reason about (and hopefully in the category of 'obviously correct' or thereabouts :)
This 'dedicated logging thread' approach is what I believe the log4net appender I linked to does as well, FWIW, in case that helps serve as an example.
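A minimal sketch of that dedicated-thread approach, with a BlockingCollection in the middle (the class and member names here are illustrative, not from the original post):
// Illustrative sketch: a single dedicated thread drains a BlockingCollection.
public abstract class DedicatedThreadLogger
{
    private readonly BlockingCollection<LogEntry> _queue = new BlockingCollection<LogEntry>();
    private readonly Thread _thread;

    protected DedicatedThreadLogger()
    {
        // IsBackground = false so the process waits for the logger to drain on exit.
        _thread = new Thread(() =>
        {
            // GetConsumingEnumerable blocks until entries arrive, and completes
            // once CompleteAdding() has been called and the queue is empty.
            foreach (var entry in _queue.GetConsumingEnumerable())
            {
                Log(entry);
            }
        }) { IsBackground = false };
        _thread.Start();
    }

    public void AddLogEntry(LogEntry entry)
    {
        _queue.Add(entry);
    }

    public void Shutdown()
    {
        _queue.CompleteAdding(); // let the thread drain remaining entries and exit
        _thread.Join();
    }

    protected abstract void Log(LogEntry entry);
}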
I see two race conditions off the top of my head:
You can spin up more than one Thread if multiple threads call AddLogEntry. This won't cause lost events but is inefficient.
Yes, an event can be queued while the Thread is exiting, and in that case it would be "lost".
Also, there's a serious performance issue here: unless you're logging constantly (thousands of times a second), you're going to be spinning up a new Thread for each log entry. That will get expensive quickly.
Like James, I agree that you should use an established logging library. Logging is not as trivial as it seems, and there are already many solutions.
That said, if you want a nice .NET 4.5-based approach, it's pretty easy:
public abstract class BaseLogger
{
    private readonly ActionBlock<LogEntry> block;

    protected BaseLogger(int maxDegreeOfParallelism = 1)
    {
        block = new ActionBlock<LogEntry>(
            entry =>
            {
                Log(entry);
            },
            new ExecutionDataflowBlockOptions
            {
                MaxDegreeOfParallelism = maxDegreeOfParallelism,
            });
    }

    public virtual void AddLogEntry(LogEntry entry)
    {
        block.Post(entry);
    }

    protected abstract void Log(LogEntry entry);
}
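One caveat worth noting: the ActionBlock buffers entries too, so at shutdown you'd want to complete the block and wait for it to drain, e.g. with a method like this on the logger (Complete and Completion are standard TPL Dataflow members):
public void Shutdown()
{
    // Stop accepting new entries, then wait for buffered entries to be written.
    block.Complete();
    block.Completion.Wait();
}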
Regarding losing waiting messages on an app crash because of an unhandled exception: I've bound a handler to the AppDomain.CurrentDomain.DomainUnload event. It goes like this:
protected ManualResetEvent flushing = new ManualResetEvent(true);

protected AsyncLogger() // ctor of logger
{
    AppDomain.CurrentDomain.DomainUnload += CurrentDomain_DomainUnload;
}

protected void CurrentDomain_DomainUnload(object sender, EventArgs e)
{
    if (!IsEmpty)
    {
        flushing.WaitOne();
    }
}
Maybe not too clean, but works.
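The post doesn't show the other half, but presumably the logging loop resets the event while entries are pending and sets it once the queue drains; something like this (an assumption, with hypothetical names, not from the original answer):
// Hypothetical worker-loop counterpart: signal 'flushing' as the queue empties.
while (true)
{
    LogEntry entry;
    if (queue.TryDequeue(out entry))
    {
        flushing.Reset(); // entries pending: DomainUnload will block on WaitOne()
        Log(entry);
    }
    else
    {
        flushing.Set();   // queue empty: DomainUnload may proceed
        Thread.Sleep(10);
    }
}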
