I'm working on my first project, but a large one: developing an Azure application with an integration component.
Currently most of the integrations are done using SSIS packages, and I would like to move them to a Worker Role in Azure.
Could someone please help me understand the following questions about Worker Roles?
Is there a way to start or stop a Worker Role (just like SSIS or the Windows Task Scheduler) via a GUI? If not, how can this be achieved?
How do I know whether my Worker Role is running or not (including why it isn't running, i.e. logs)?
How do I spin up multiple Worker Role instances based on time (e.g. run 4 instances from 9:00AM to 11:00AM and scale down during quiet periods)?
Does the following code create poison messages or deadlocks if there are, say, 10,000 messages to process and a new thread (PhotoProcessing.Run) is started every 5 seconds?
while (true)
{
    // Spawns a new worker thread every 5 seconds, regardless of whether
    // the previous threads have finished draining the queue
    var thread = new Thread(Run);
    thread.Start();
    Thread.Sleep(5000);
    Trace.WriteLine("Working", "Information");
}
public class PhotoProcessing
{
    public static void Run()
    {
        // Read from queue
        CloudQueueMessage msg = Storage.Queue.GetNextMessage();
        while (msg != null)
        {
            string[] message = msg.AsString.Split('$');
            if (message.Length == 2)
            {
                AddWatermark(message[0], message[1]);
            }
            // Message has been read so remove it
            Storage.Queue.DeleteMessage(msg);
            // Get next message if any
            msg = Storage.Queue.GetNextMessage();
        }
    }
}
Is there a way to start or stop a Worker Role (just like SSIS or the Windows Task Scheduler) via a GUI? If not, how can this be achieved?
There are actually many ways to achieve this. You can use the Windows Azure Portal, you could use 3rd-party tools (like our Cloud Storage Studio), or you could write your own application using the Windows Azure Service Management API (http://msdn.microsoft.com/en-us/library/ee460799.aspx).
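If you do decide to roll your own, stopping and starting boils down to the Update Deployment Status operation. Below is a minimal sketch; the subscription ID, service name, and certificate thumbprint are placeholders, and the call authenticates with a management certificate uploaded to the subscription:

using System;
using System.Net;
using System.Security.Cryptography.X509Certificates;
using System.Text;

// Sets a deployment to "Running" or "Suspended" via the Service Management API
static void SetDeploymentStatus(string status)
{
    var request = (HttpWebRequest)WebRequest.Create(
        "https://management.core.windows.net/<subscription-id>" +
        "/services/hostedservices/<service-name>/deploymentslots/production/?comp=status");
    request.Method = "POST";
    request.ContentType = "application/xml";
    request.Headers.Add("x-ms-version", "2010-10-28");

    // Attach the management certificate from the local certificate store
    var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
    store.Open(OpenFlags.ReadOnly);
    request.ClientCertificates.Add(
        store.Certificates.Find(X509FindType.FindByThumbprint, "<thumbprint>", false)[0]);

    byte[] body = Encoding.UTF8.GetBytes(
        "<UpdateDeploymentStatus xmlns=\"http://schemas.microsoft.com/windowsazure\">" +
        "<Status>" + status + "</Status></UpdateDeploymentStatus>");
    using (var stream = request.GetRequestStream())
    {
        stream.Write(body, 0, body.Length);
    }

    // 202 Accepted means the operation was queued
    using (var response = (HttpWebResponse)request.GetResponse())
    {
        Console.WriteLine("Status code: {0}", response.StatusCode);
    }
}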
How do I know whether my Worker Role is running or not (including why it isn't running, i.e. logs)?
Again, you could use one of the GUI-based tools to see the status of your roles. As for why the roles are not running, you would need to enable Windows Azure Diagnostics in your worker role (http://msdn.microsoft.com/en-us/library/gg433048.aspx).
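For completeness, here is roughly what enabling diagnostics looks like in the worker role's OnStart with the classic diagnostics API (a sketch; adjust the transfer period and log level filter to your needs). Trace output then lands in the WADLogsTable of the configured storage account:

using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Start from the default configuration and ship trace logs
        // to table storage every minute
        DiagnosticMonitorConfiguration config =
            DiagnosticMonitor.GetDefaultInitialConfiguration();
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
        config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Information;

        // The connection string is defined in the role's service configuration
        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

        return base.OnStart();
    }
}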
How do I spin up multiple Worker Role instances based on time (e.g. run 4 instances from 9:00AM to 11:00AM and scale down during quiet periods)?
You can write your own application using the Windows Azure Service Management API to do so, or you could make use of 3rd-party tools like AzureWatch from Paraleap or Azure Management Cmdlets (available both from Microsoft and from our company). While the cmdlets will get the job done, I believe AzureWatch is the more sophisticated solution. We wrote a blog post on autoscaling a few days back, which you can find here: http://www.cerebrata.com/Blog/post/Scale-your-Windows-Azure-instances-with-Azure-Management-Cmdlets.aspx.
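If you write your own, scaling comes down to rewriting the Instances count in the .cscfg and submitting it back via the Change Deployment Configuration operation (the same certificate-authenticated REST pattern as the sketch above; fetching and posting the configuration is left out here):

using System;
using System.Linq;
using System.Xml.Linq;

// Rewrites the instance count for one role inside a .cscfg document
static string SetInstanceCount(string cscfgXml, string roleName, int instanceCount)
{
    XDocument doc = XDocument.Parse(cscfgXml);
    XNamespace ns = "http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration";
    XElement role = doc.Descendants(ns + "Role")
                       .First(r => (string)r.Attribute("name") == roleName);
    role.Element(ns + "Instances").SetAttributeValue("count", instanceCount);
    return doc.ToString();
}

// e.g. called from a scheduled job: 4 instances in the 9:00-11:00 window, 1 otherwise
int target = (DateTime.UtcNow.Hour >= 9 && DateTime.UtcNow.Hour < 11) ? 4 : 1;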
I am using a console application that runs on on-premises servers, triggered by a task scheduler. This console application performs various actions that need to be logged. It generates around 200 KB of logs per run and runs every hour.
Since the server is not accessible to us, I am planning to store the logs in Azure. I have read about Blob/Table storage.
I would like to know the best strategy for storing these logs in Azure.
Thank you.
Though you can write logging data to Azure Storage (both Blobs and Tables), it would actually make more sense to use Azure Application Insights for this.
I recently did the same for a console application I built. I found it incredibly simple.
I created an Application Insights resource in my Azure subscription and got the instrumentation key. I then installed the Application Insights SDK and referenced the appropriate namespaces in my project:
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;
This is how I initialized the telemetry client:
var appSettingsReader = new AppSettingsReader();
var appInsightsInstrumentationKey = (string)appSettingsReader.GetValue("AppInsights.InstrumentationKey", typeof(string));

TelemetryConfiguration configuration = TelemetryConfiguration.CreateDefault();
configuration.InstrumentationKey = appInsightsInstrumentationKey;
// The configuration already carries the key, so there is no need
// to set it again on the client itself
var telemetryClient = new TelemetryClient(configuration);
For logging trace data, I simply did the following:
TraceTelemetry telemetry = new TraceTelemetry(message, SeverityLevel.Verbose);
telemetryClient.TrackTrace(telemetry);
For logging error data, I simply did the following:
catch (Exception excep)
{
    var message = string.Format("Error. {0}", excep.Message);
    ExceptionTelemetry exceptionTelemetry = new ExceptionTelemetry(excep);
    telemetryClient.TrackException(exceptionTelemetry);
    telemetryClient.Flush();
    Task.Delay(5000).Wait(); // Wait for 5 seconds before terminating the application
}
Just keep one thing in mind: make sure you wait for some time (5 seconds is good enough) for the data to flush before terminating the application.
If you're still keen on writing logs to Azure Storage then, depending on the logging library you're using, you will find suitable adapters that write directly to Azure Storage.
For example, there's an NLog target for Azure Tables: https://github.com/harouny/NLog.Extensions.AzureTableStorage (though this project is not actively maintained).
A seemingly simple ask, but I've had no luck finding the best way.
I need to log application events from a console app that will spin up inside a container, do some work, and then die.
How can I log custom data from inside the container?
I've tried Azure Monitor: I created a workspace and used the HTTP Data Collector API inside the app, but had no joy working out where the logs are being stored.
Is there a simple way to log to an Azure Storage account and then use Azure Monitor to manage the events?
I've been googling for hours, but a lot of posts are 8 years old and no longer relevant, and I cannot really find a simple use case in modern Azure.
Perhaps it's so simple I just cannot see it.
Any pointers or links gratefully received!
thanks
Paul
Why not trace the events using Application Insights custom events?
https://learn.microsoft.com/en-us/azure/azure-monitor/app/api-custom-events-metrics
With that you can trace events with any metadata and view them in the Application Insights blade in the Azure Portal, or reach them via the Application Insights SDK or the API.
You just need to create an Application Insights instance and use its instrumentation key to do that.
SDK: https://github.com/Microsoft/ApplicationInsights-dotnet
API: https://dev.applicationinsights.io/reference
Code sample to write events:
TelemetryClient client = new TelemetryClient();
client.InstrumentationKey = "INSERT YOUR KEY";
client.TrackEvent("SomethingInterestingHappened");
You can also send more than just a string value:
client.TrackEvent("PurchaseOrderSubmitted",
    new Dictionary<string, string>()
    {
        { "CouponCode", "JULY2015" }
    },
    new Dictionary<string, double>()
    {
        { "OrderTotal", 68.99 },
        { "ItemsOrdered", 5 }
    });
I want to migrate a typical Windows service written in .NET to Azure using Service Fabric.
To implement this, I am creating a Service Fabric application containing one microservice as a guest executable, which wraps the Windows service's .exe, and deploying the application package to a Service Fabric cluster.
After deploying the Service Fabric application to the cluster, I want the Windows service to install and start automatically, but run on only one node at any given time.
Any help implementing this would be appreciated.
You can certainly run your service as a guest executable. Making sure it only runs on one node can be done by setting the instance count to 1 in the manifest (an InstanceCount of -1 would instead run one instance on every node), like so:
<Parameters>
  <Parameter Name="GuestService_InstanceCount" DefaultValue="1" />
</Parameters>
...
<DefaultServices>
  <Service Name="GuestService">
    <StatelessService ServiceTypeName="GuestServiceType"
                      InstanceCount="[GuestService_InstanceCount]">
      <SingletonPartition />
    </StatelessService>
  </Service>
</DefaultServices>
Or, you could actually migrate it, not just re-host it in the SF environment...
If your Windows service is written in .NET and you want to benefit from Service Fabric, then the job of migrating the code from a Windows service to a Reliable Service in Service Fabric should not be too big.
Example for a basic service:
If you start by creating a Stateless Service in a Service Fabric application, you end up with a service implementation that looks like this (comments removed):
internal sealed class MigratedService : StatelessService
{
    public MigratedService(StatelessServiceContext context)
        : base(context)
    { }

    protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
    {
        return new ServiceInstanceListener[0];
    }

    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        // TODO: Replace the following sample code with your own logic
        // or remove this RunAsync override if it's not needed in your service.
        long iterations = 0;
        while (true)
        {
            cancellationToken.ThrowIfCancellationRequested();
            ServiceEventSource.Current.ServiceMessage(this.Context, "Working-{0}", ++iterations);
            await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken);
        }
    }
}
The RunAsync method starts running as soon as the service is up and running on a node in the cluster. It will continue to run until the cluster, for some reason, decides to stop the service or move it to another node.
In your Windows service code you should have a method that runs on start. This is usually where you set up a Timer or similar to start doing something on a continuous basis:
protected override void OnStart(string[] args)
{
    System.Timers.Timer timer = new System.Timers.Timer();
    timer.Interval = 60000; // 60 seconds
    timer.Elapsed += new System.Timers.ElapsedEventHandler(this.OnTimer);
    timer.Start();
}

public void OnTimer(object sender, System.Timers.ElapsedEventArgs args)
{
    ...
    DoServiceStuff();
    Console.WriteLine("Windows Service says hello");
}
Now grab that code in OnTimer and put it in your RunAsync method (and any other code you need):
protected override async Task RunAsync(CancellationToken cancellationToken)
{
    while (true)
    {
        cancellationToken.ThrowIfCancellationRequested();
        DoServiceStuff();
        ServiceEventSource.Current.ServiceMessage(this.Context,
            "Reliable Service says hello");
        await Task.Delay(TimeSpan.FromSeconds(60), cancellationToken);
    }
}
Note the Task.Delay(...); it should be set to the same interval your Windows service used for its Timer.
Now, if your Windows service already logs via ETW, that should work out of the box. You simply need to set up some way of looking at those logs from Azure, for instance using Log Analytics (https://learn.microsoft.com/en-us/azure/log-analytics/log-analytics-service-fabric).
Other things you might have to migrate are any code that runs on shutdown or continue, and any parameters sent to the service on startup (for instance, connection strings to databases). Those need to be converted to configuration settings for the service; look at SO 33928204 for a starting point.
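For example, a value in the service's Settings.xml can be read like this inside the service (a sketch; the "Database"/"ConnectionString" names are made up):

using System.Fabric.Description;

// Read a parameter from the Config package (Settings.xml)
ConfigurationPackage configPackage =
    this.Context.CodePackageActivationContext.GetConfigurationPackageObject("Config");
string connectionString =
    configPackage.Settings.Sections["Database"].Parameters["ConnectionString"].Value;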
The idea behind Service Fabric is that it manages your services, from deployment to running. Once you've deployed your service/application to the Service Fabric cluster, it runs much like a Windows service (kind of), so you won't need to install your Windows service separately. If you're using something like Topshelf, you can just run the exe and everything will work fine within Service Fabric.
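For reference, hosting the exe as a guest executable comes down to pointing the service manifest's entry point at it, roughly like this (the program name is a placeholder):

<CodePackage Name="Code" Version="1.0.0">
  <EntryPoint>
    <ExeHost>
      <Program>MyWindowsService.exe</Program>
      <WorkingFolder>CodePackage</WorkingFolder>
    </ExeHost>
  </EntryPoint>
</CodePackage>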
I'm trying to find a solution for performing virus scanning on files that have been uploaded to Azure Blob storage. I wanted to know if it is possible to copy the file to local storage on a Worker Role instance, have Antimalware for Azure Cloud Services scan that specific file, and then, depending on whether the file is clean, process the file accordingly.
If the Worker Role cannot trigger the scan programmatically, is there a definitive way to check whether a file has been scanned, and whether it is clean, once it has been copied to local storage? (I don't know if the service does a real-time scan when new files are added, or only runs on a schedule.)
There isn't a direct API that we've found, but the anti-malware services conform to the standards used by Windows desktop virus checkers in that they implement the IAttachmentExecute COM API.
So we ended up implementing a file upload service that writes the uploaded file to a Quarantine local resource and then calls the IAttachmentExecute API. If the file is infected then, depending on the anti-malware service in use, it will either throw an exception, silently delete the file, or mark it as inaccessible. So by attempting to read the first byte of the file, we can test whether the file remains accessible:
// clientGuid identifies the calling application and path points at the file
// in the Quarantine local resource (both are set up elsewhere)
var type = Type.GetTypeFromCLSID(new Guid("4125DD96-E03A-4103-8F70-E0597D803B9C"));
var svc = (IAttachmentExecute)Activator.CreateInstance(type);
try
{
    svc.SetClientGuid(ref clientGuid);
    svc.SetLocalPath(path);
    svc.Save(); // triggers the anti-malware scan
}
finally
{
    svc.ClearClientState();
}

// Throws if the anti-malware service deleted or locked the file
using (var fileStream = File.OpenRead(path))
{
    fileStream.ReadByte();
}
[Guid("73DB1241-1E85-4581-8E4F-A81E1D0F8C57")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
public interface IAttachmentExecute
{
    void SetClientGuid(ref Guid guid);
    void SetLocalPath(string pszLocalPath);
    void Save();
    void ClearClientState();
}
I think the best way for you to find out is to simply take an Azure VM (IaaS) and activate the Microsoft Antimalware extension. Then you can log into it and run all the necessary checks and tests against the service.
Later, you can apply all this to the Worker Role (there is a similar PaaS extension available for that, called PaaSAntimalware).
See the following excerpt from https://msdn.microsoft.com/en-us/library/azure/dn832621.aspx:
"In PaaS, the VM agent is called GuestAgent, and is always available on Web and Worker Role VMs. (For more information, see Azure Role Architecture.) The VM agent for Role VMs can now add extensions to the cloud service VMs in the same way that it does for persistent Virtual Machines.
The biggest difference between VM Extensions on role VMs and persistent VMs is that with role VMs, extensions are added to the cloud service first and then to the deployments within that cloud service.
Use the Get-AzureServiceAvailableExtension cmdlet to list all available role VM extensions."
Background: I've deployed an MVC3 application to 2 Azure Web Role instances, but I'm not sure how I can test what happens when one of these instances fails.
Is there a way that I can test to ensure that my Web Role code works seamlessly when one of my instances is taken offline?
Can I manually stop one of them? Or somehow configure the load balancer to force all traffic to one of the servers?
Thanks!
If you have RDP access enabled on your instances, you can very easily take one or more instances out of the load balancer, even while the instance is running healthy, without writing a line of code. You just need to RDP into the instance and then use PowerShell scripts to take it off the load balancer. I have described the exact procedure in the following blog post:
http://blogs.msdn.com/b/avkashchauhan/archive/2012/01/27/windows-azure-troubleshooting-taking-specific-windows-azure-instance-offline.aspx
The above also helps with load testing, by removing N instances out of a total of M.
If you use the StatusCheck event to set the status to Busy, your instance won't receive any more requests from the load balancer. You might want to write some code that sends messages to your instances to set them as busy for some time (using queues, for example; a sketch of that follows the reference link below).
public override bool OnStart()
{
    RoleEnvironment.StatusCheck += RoleEnvironmentStatusCheck;
    return base.OnStart();
}

// Use the busy flag to indicate that the status of the role instance must be Busy
private volatile bool busy = true;

private void RoleEnvironmentStatusCheck(object sender, RoleInstanceStatusCheckEventArgs e)
{
    if (this.busy)
    {
        // Sets the status of the role instance to Busy for a short interval.
        // If you want the role instance to remain busy, add code to
        // continue to call the SetBusy method.
        e.SetBusy();
    }
}
Reference: http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.serviceruntime.roleenvironment.statuscheck.aspx
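A sketch of the queue-driven approach mentioned above, to be run on a background thread in the role (the connection string and queue name are placeholders, and busy is assumed to start out false here, unlike the snippet above which initializes it to true):

using System;
using System.Threading;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

// Polls a control queue; any message takes this instance out of rotation
// for five minutes by flipping the busy flag used in the StatusCheck handler
private void WatchControlQueue()
{
    CloudStorageAccount account = CloudStorageAccount.Parse("<storage-connection-string>");
    CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("take-offline");

    while (true)
    {
        CloudQueueMessage msg = queue.GetMessage();
        if (msg != null)
        {
            this.busy = true; // StatusCheck now reports Busy
            queue.DeleteMessage(msg);
            Thread.Sleep(TimeSpan.FromMinutes(5));
            this.busy = false; // back into the load balancer rotation
        }
        Thread.Sleep(TimeSpan.FromSeconds(10));
    }
}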
Besides that, you can also simply reboot your instance (using the Windows Azure Portal or Remote Desktop).
Steve Marx (aka smarx) developed a tool called WazMonkey. It is the Azure counterpart of Chaos Monkey, the tool the Netflix team developed to simulate broken instances in Amazon AWS. WazMonkey terminates instances of a Windows Azure cloud service at random to test the resilience of cloud applications.