How to set Azure VM Role instances Ready/Busy programmatically?

Is there any way to change the status of VM role instances from Busy to Ready? I would like to do this from a WCF service if possible.
Thanks a lot.

The Fabric Controller checks the status of your instance at regular intervals, and when it does you can let it know whether the instance is busy or not.
You simply need to handle the StatusCheck event and report Busy by calling the SetBusy method. Once you decide that the instance is ready (no longer busy), stop calling SetBusy.
public override bool OnStart()
{
    RoleEnvironment.StatusCheck += RoleEnvironmentStatusCheck;
    return base.OnStart();
}

// Use the busy flag to indicate that the status of the role instance must be Busy.
private volatile bool busy = true;

private void RoleEnvironmentStatusCheck(object sender, RoleInstanceStatusCheckEventArgs e)
{
    if (this.busy)
    {
        // Sets the status of the role instance to Busy for a short interval.
        // If you want the role instance to remain busy, keep calling SetBusy
        // on each StatusCheck until the instance should report Ready again.
        e.SetBusy();
    }
}
Reference: http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.serviceruntime.roleenvironment.statuscheck.aspx
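Since the original question mentions driving this from a WCF service: one hedged approach (a sketch only; the InstanceStatus class and its members below are hypothetical, not part of the Azure SDK) is to keep the flag in a shared static helper that both the StatusCheck handler and a WCF operation can reach.

// Hypothetical shared flag; assumes your WCF service runs inside the same role instance.
public static class InstanceStatus
{
    // volatile so a change made on the WCF request thread is visible to the StatusCheck handler
    private static volatile bool busy = true;

    public static bool IsBusy { get { return busy; } }

    public static void SetReady() { busy = false; }
    public static void SetBusy() { busy = true; }
}

// In the StatusCheck handler, consult the shared flag instead of a private field:
//     if (InstanceStatus.IsBusy) { e.SetBusy(); }
//
// In your WCF service implementation, flip the flag once the instance is ready:
//     public void MarkInstanceReady() { InstanceStatus.SetReady(); }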

Related

Azure Cloud Service: RoleEnvironment.StatusCheck event not firing

I am maintaining a legacy Cloud Services application hosted on Azure, targeting .NET 4.6.1. Inside the Application_Start method of the Global.asax on the Web Role we register an event handler for RoleEnvironment.StatusCheck, however our logs show that this callback is never invoked.
According to this blog: https://convective.wordpress.com/2010/03/18/service-runtime-in-windows-azure/ we expected this event to be triggered every 15 seconds, and we believe it was, but it has since stopped. We suspect it stopped working around the time we installed some new DLLs into the solution (among them: Microsoft.Rest.ClientRuntime.dll, Microsoft.Azure.Storage.Common.dll, Microsoft.Azure.Storage.Blob.dll, Microsoft.Azure.KeyVault.dll).
We've tried RDP-ing onto the VM to check the event logs, but nothing obvious is there. Any suggestions on where we might look for clues?
It seems your event handler is not being registered. Try the code below, which takes a different approach and registers the handler in the role entry point's OnStart method:
public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        RoleEnvironment.StatusCheck += RoleEnvironmentStatusCheck;
        return base.OnStart();
    }

    // Use the busy flag to indicate that the status of the role instance must be Busy.
    private volatile bool busy = true;

    private void RoleEnvironmentStatusCheck(object sender, RoleInstanceStatusCheckEventArgs e)
    {
        if (this.busy)
        {
            // Sets the status of the role instance to Busy for a short interval.
            // If you want the role instance to remain busy, add code to
            // continue to call the SetBusy method.
            e.SetBusy();
        }
    }

    public override void Run()
    {
        Trace.TraceInformation("Worker entry point called", "Information");
        while (true)
        {
            Thread.Sleep(10000);
        }
    }

    public override void OnStop()
    {
        base.OnStop();
    }
}

Entity Framework Context on multiple threads

I've seen so many questions similar to mine, but no answers that quite seem to apply to my situation.
My ASP.NET MVC app with EF 6 Code First and Unity has a web service that adds something to the database, then fires off another thread that adds more stuff to the database. The reason for using the other thread is to return the original request as quickly as possible. The context class is obtained from the Unity container via RegisterType().
I've got lots of repository classes all using the same context, so to make sure they get the same instance I could use the PerRequestLifetimeManager in my Unity container. That's fine for the HTTP request threads, but the other threads can't use the context returned by the PerRequestLifetimeManager because it is only valid on the original HTTP request thread.
So, I can use the PerThreadLifetimeManager. This is great because now the main request thread and the other thread it kicks off get the same instance of the context from Unity. The trouble is that so do other requests if they happen to be given the same thread, so this is no good either.
So how can I configure things so that the request threads get their own PerRequest Lifetime Manager created context, and other threads get a different context?
The issue is made a little more difficult by the fact that the new thread calls other classes that need to use a context instance. However, these other classes can be used from the main request thread or the new thread, so grabbing a context instance when the thread is started and then passing it around will be tricky.
Thanks in advance
No takers then...
I'm going to have a go at answering my own question, but could do with some thoughts on my approach.
So I can't use the PerRequestLifetimeManager because worker threads can't use the context that this returns, but I can't use the PerThreadLifetimeManager because the context can last the lifetime of several HTTP requests. This class attempts to provide the best of both worlds.
/// <summary>
/// For the context class the PerRequestLifetimeManager is the most suitable lifetime manager,
/// but this doesn't work when a new worker thread is started, as it also needs to access the context.
/// The PerThreadLifetimeManager is no good either, as the context can last for more than one request.
/// This class attempts to give the best of both worlds: per-request lifetime management for HTTP requests
/// and thread storage for worker threads.
/// </summary>
public class PerRequestOrThreadLifetimeManager : PerRequestLifetimeManager, IDisposable
{
    private const string threadDataSlotName = "PerRequestOrThreadLifetimeManager";

    public override object GetValue()
    {
        if (System.Web.HttpContext.Current != null)
        {
            return base.GetValue();
        }
        else
        {
            return getManagedObject();
        }
    }

    public override void RemoveValue()
    {
        throw new NotImplementedException();
    }

    public override void SetValue(object newValue)
    {
        if (System.Web.HttpContext.Current != null)
        {
            base.SetValue(newValue);
        }
        else
        {
            Thread.SetData(Thread.GetNamedDataSlot(threadDataSlotName), newValue);
        }
    }

    private object getManagedObject()
    {
        return Thread.GetData(Thread.GetNamedDataSlot(threadDataSlotName));
    }

    public void Dispose()
    {
        try
        {
            IDisposable obj = getManagedObject() as IDisposable;
            if (obj != null)
            {
                obj.Dispose();
                obj = null;
            }
        }
        catch { }
    }
}
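For completeness, registration with Unity would look roughly like this (a sketch; MyContext is just a placeholder for your EF context class):

// Sketch: register the EF context so HTTP requests and worker threads each get their own instance.
var container = new UnityContainer();
container.RegisterType<MyContext>(new PerRequestOrThreadLifetimeManager());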

Ninject - In what scope DbContext should get binded when RequestScope is meaningless?

In an MVC / WebAPI environment I would use InRequestScope to bind the DbContext.
However, I am now on a Console application / Windows service / Azure worker role (doesn't really matter, just there's no Web request scope), which periodically creates a number of Tasks that run asynchronously. I would like each task to have its own DbContext, and since tasks run on their own thread, I tried binding DbContext using InThreadScope.
Unfortunately, I realize that the DbContext is not disposed when a task is finished. What actually happens is, the thread returns to the Thread Pool and when it is assigned a new task, it already has a DbContext, so DbContexts stay alive forever.
Is there a way InThreadScope can be used here or should I use some other scope? How can ThreadScope be used when threads are returning from ThreadPool every now and then?
If you decide to go with a custom scope, the solution is:
public sealed class CurrentScope : INotifyWhenDisposed
{
    [ThreadStatic]
    private static CurrentScope currentScope;

    private CurrentScope()
    {
    }

    public static CurrentScope Instance => currentScope ?? (currentScope = new CurrentScope());

    public bool IsDisposed { get; private set; }

    public event EventHandler Disposed;

    public void Dispose()
    {
        this.IsDisposed = true;
        currentScope = null;

        if (this.Disposed != null)
        {
            this.Disposed(this, EventArgs.Empty);
        }
    }
}
Binding:
Bind<DbContext>().To<MyDbContext>().InScope(c => CurrentScope.Instance);
And finally:
using (CurrentScope.Instance)
{
    // your request...
    // you'll always get the same DbContext inside this using block
    // the DbContext will be disposed when execution leaves the using block
}
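Applied to the scenario in the question, each periodic task would wrap its work in the scope so it resolves and disposes its own DbContext. This is a sketch; kernel (your Ninject IKernel) and DoWork are assumed names:

// Sketch: each task opens its own scope, so each gets (and disposes) its own DbContext.
Task.Run(() =>
{
    using (CurrentScope.Instance)
    {
        var context = kernel.Get<DbContext>(); // resolved within the current thread's scope
        DoWork(context);
    } // scope disposed here; Ninject disposes the scoped DbContext along with it
});

Note that because the scope is stored in a [ThreadStatic] field, the work inside the using block should stay on the same thread (no await in the middle).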

Can I be sure that once Azure role Stopping has been raised the role is never rerun in the same process?

Azure roles have a RoleEnvironment.Stopping event that is raised when the role is being stopped. I discovered an issue in some unrelated code that needs special treatment when the role is being stopped. Something like:
public class SomeFarAwayClass
{
    void someFarAwayFunction()
    {
        if (roleIsBeingStopped)
        {
            workSpecially();
        }
        else
        {
            workUsually();
        }
    }
}
Now I want to subscribe to RoleEnvironment.Stopping and, in the event handler, set roleIsBeingStopped permanently. Something like this:
public class SomeFarAwayClass
{
    // ...
    private static bool roleIsBeingStopped = false;
    public static void SetBeingStopped() { roleIsBeingStopped = true; }
}

class MyRoleClass : RoleEntryPoint
{
    public override bool OnStart()
    {
        RoleEnvironment.Stopping += stopping;
        return base.OnStart();
    }

    void stopping(object sender, RoleEnvironmentStoppingEventArgs args)
    {
        SomeFarAwayClass.SetBeingStopped();
    }
}
This solution implies that the role is never restarted in the same process, otherwise I'll need to reset the flag at some point. So far I've never seen Azure roles being restarted in the same process, it's a new process every time.
Can I be sure that once Azure role Stopping has been raised the role is never rerun in the same process?
I think you probably can, but at the same time you don't need to, because you also have the OnStart call, which you could use to reset the flag (a sketch follows below).
I generally prefer not to rely on things outside my control where I don't have to (and there are many!), so this is one assumption I'd personally avoid.
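If you do want to be defensive, a minimal sketch of that reset (the ResetBeingStopped method is hypothetical, not from the question) could look like this:

public class SomeFarAwayClass
{
    private static bool roleIsBeingStopped = false;
    public static void SetBeingStopped() { roleIsBeingStopped = true; }

    // Hypothetical reset used below, so a re-run in the same process starts clean.
    public static void ResetBeingStopped() { roleIsBeingStopped = false; }
}

public override bool OnStart()
{
    SomeFarAwayClass.ResetBeingStopped(); // clear any stale flag from a previous run
    RoleEnvironment.Stopping += stopping;
    return base.OnStart();
}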

Azure VMs restart unexpectedly

This problem is related to a VM hosted by a worker role. I have a simple worker role which spawns a process inside it. The spawned process is a 32-bit compiled TCPServer application. The worker role has an endpoint defined, and the TCPServer is bound to that endpoint. So when I connect to my worker role endpoint and send something, the TCPServer receives it, processes it, and returns something back. Here the endpoint of the worker role, which is exposed to the outside world, connects internally to the TCPServer.
string port = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["TCPsocket"].IPEndpoint.Port.ToString();

var myProcess = new Process()
{
    StartInfo = new ProcessStartInfo(Path.Combine(localstorage.RootPath, "TCPServer.exe"))
    {
        CreateNoWindow = true,
        UseShellExecute = true,
        WorkingDirectory = localstorage.RootPath,
        Arguments = port
    }
};
It was working fine, but suddenly the server stopped responding. When I checked in the portal, the VM role was restarting automatically, but it never succeeded; it was stuck showing the Role Initializing... status. A manual stop and start also didn't work. I redeployed the same package without any change in the code. This time the deployment itself failed:
Warning: All role instances have stopped
- There was no endpoint listening at https://management.core.windows.net/<SubscriptionID>/services/hostedservices/TCPServer/deploymentslots/Production that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details.
But after some time I tried to deploy again, and it worked fine.
Can anyone tell me what the problem might be?
Update:
public override void Run()
{
    Trace.WriteLine("RasterWorker entry point called", "Information");

    string configVal = RoleEnvironment.GetConfigurationSettingValue("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString");
    CloudStorageAccount _storageAccount = CloudStorageAccount.Parse(configVal); // parses storage credentials and creates the storage account
    var localstorage = RoleEnvironment.GetLocalResource("MyLocalStorage");
    CloudBlobClient _blobClient = _storageAccount.CreateCloudBlobClient();

    bool flag = false;
    while (true)
    {
        Thread.Sleep(30000);
        if (!flag)
        {
            if (File.Exists(Path.Combine(localstorage.RootPath, "test.ppm")))
            {
                CloudBlobContainer _blobContainer = _blobClient.GetContainerReference("reports");
                CloudBlob _blob = _blobContainer.GetBlobReference("test.ppm");
                _blob.UploadFile(Path.Combine(localstorage.RootPath, "test.ppm"));
                Trace.WriteLine("Copy to blob done!!!!!!!", "Information");
                flag = true;
            }
            else
            {
                Trace.WriteLine("Copy Failed-> File doesnt exist!!!!!!!", "Information");
            }
        }
        Trace.WriteLine("Working", "Information");
    }
}
To prevent your worker role from being recycled, you need to block in the Run method of your entry point class.
If you do override the Run method, your code should block indefinitely. If the Run method returns, the role is automatically recycled by raising the Stopping event and calling the OnStop method so that your shutdown sequences may be executed before the role is taken offline.
http://msdn.microsoft.com/en-us/library/windowsazure/microsoft.windowsazure.serviceruntime.roleentrypoint.run.aspx
You need to make sure that, whatever happens, you never return from the Run method if you want to keep the role alive.
Now, if you're hosting the TCPServer in a console application (I'm assuming you're doing this since you pasted the Process.Start code), you'll need to block the Run method after starting the process.
public override void Run()
{
    try
    {
        Trace.WriteLine("WorkerRole entrypoint called", "Information");

        var myProcess = new Process()
        {
            StartInfo = new ProcessStartInfo(Path.Combine(localstorage.RootPath, "TCPServer.exe"))
            {
                CreateNoWindow = true,
                UseShellExecute = true,
                WorkingDirectory = localstorage.RootPath,
                Arguments = port
            }
        };

        myProcess.Start();

        // Block indefinitely so the role instance is never recycled.
        while (true)
        {
            Thread.Sleep(10000);
            Trace.WriteLine("Working", "Information");
        }
    }
    catch (Exception e)
    {
        Trace.WriteLine("Exception during Run: " + e.ToString());
        // Take other action as needed.
    }
}
PS: this has nothing to do with your deployment issue; I assume that was a coincidence.
