How do I create new instances of a role via C# using the Azure emulator? Is there a guide for that? There are some manuals about creating instances in the cloud, but not in the emulator.
So far I know that:
I need to change the config file. Is it the config next to the sln file or the one in some temp deployment folder?
I need to use the csrun tool. How do I pick the params?
UPD
Got it.
To change the count of instances in the emulator, you have to:
update the 'ServiceConfiguration.cscfg' file in the bin folder
run the 'csrun' tool with the params: string.Format("/update:{0};\"{1}\"", deploymentId, "<path to ServiceConfiguration.cscfg>")
where deploymentId is obtained like this:
// get the numeric id from RoleEnvironment.DeploymentId with a regex
var pattern = Regex.Escape("(") + @"\d+" + Regex.Escape(")");
var input = RoleEnvironment.DeploymentId;
var m = Regex.Match(input, pattern);
var deploymentId = m.Value.Replace("(", string.Empty).Replace(")", string.Empty);
If you have trouble running csrun from code, read this:
http://social.msdn.microsoft.com/Forums/en/windowsazuredevelopment/thread/62ca1372-2388-4181-9dbd-8fbba470ea77
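For reference, here is a minimal sketch that ties the steps together from C#: extract the numeric deployment id, then shell out to csrun with the updated .cscfg. The csrun.exe location and the .cscfg path below are assumptions; adjust them for your SDK version and build output.
// Sketch only. Paths are assumptions: adjust csrunPath for your Azure SDK version
// and point cscfgPath at the ServiceConfiguration.cscfg you just edited in the bin folder.
var match = Regex.Match(RoleEnvironment.DeploymentId, @"\((\d+)\)");
var deploymentId = match.Groups[1].Value;
var cscfgPath = @"C:\MySolution\CloudProject\bin\Debug\ServiceConfiguration.cscfg";
var csrunPath = @"C:\Program Files\Microsoft SDKs\Windows Azure\Emulator\csrun.exe";
var args = string.Format("/update:{0};\"{1}\"", deploymentId, cscfgPath);
var csrun = Process.Start(new ProcessStartInfo(csrunPath, args)
{
    UseShellExecute = false,
    CreateNoWindow = true
});
csrun.WaitForExit();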
In the local emulator, you need to modify the CSCFG file under the deployment .csx folder, instead of the one in your source code folder, since the local emulator fires your application up from that folder.
Once you have modified and saved your CSCFG file (for example, changing the count of the instances), you can retrieve the new value from your code immediately. But if you want the local emulator to detect the change and perform the related actions, such as increasing the VMs or invoking the Configuration_Changed method, you need to execute
csrun /update:<deployment id>;<path to the updated CSCFG file>
You can retrieve the deployment id from the compute emulator UI.
You can find the instance count in the ServiceConfiguration.cscfg in your Azure project
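As a quick check of the point above that your code sees the new value immediately, here is a small sketch that reads the current instance count for the role:
// Reads the instance count of the current role as the role environment sees it.
var instanceCount = RoleEnvironment.CurrentRoleInstance.Role.Instances.Count;
Trace.WriteLine("Current instance count: " + instanceCount);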
I'm looking to use the Change Feed Processor SDK to monitor for changes to an Azure Cosmos DB collection; however, I have not seen clear documentation about whether the host can be run as an Azure Web Job. Can it? And if yes, are there any known issues or limitations versus running it as a console app?
There are a good number of blog posts about using the CFP SDK; however, most of them only vaguely mention running the host on a VM, and none of them show examples of running the host as an Azure Web Job.
Even if it's possible, a side question is: if such a host is deployed as a continuous Web Job and I set the Web Job's "Scale" setting to Multi Instance, what are the approaches or recommendations for making the extra instances run with a different instance name, which the CFP SDK requires?
According to my research, the Cosmos DB trigger can be implemented with the WebJobs SDK:
static async Task Main()
{
    var builder = new HostBuilder();
    builder.ConfigureWebJobs(b =>
    {
        // core services required by the WebJobs host
        b.AddAzureStorageCoreServices();
        // registers the Cosmos DB trigger extension
        b.AddCosmosDB(a =>
        {
            a.ConnectionMode = ConnectionMode.Gateway;
            a.Protocol = Protocol.Https;
            a.LeaseOptions.LeasePrefix = "prefix1";
        });
    });
    var host = builder.Build();
    using (host)
    {
        await host.RunAsync();
    }
}
However, it seems that only the NuGet package for the C# SDK can be used; there are no clues for other languages. So you could refer to Compare Functions and WebJobs to balance your needs and cost.
The Cosmos DB Trigger for Azure Functions is actually a WebJobs extension: https://github.com/Azure/azure-webjobs-sdk-extensions/tree/dev/src/WebJobs.Extensions.CosmosDB
And it uses the Change Feed Processor.
Functions run on top of the WebJobs technology. So to answer the question: yes, you can run the Change Feed Processor on WebJobs, just make sure that:
Your App Service is set to Always On
If you plan to use multiple instances, make sure to set the InstanceName accordingly and not a static/fixed value. Probably something that identifies the WebJob instance.
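To illustrate the InstanceName point, if you use the Change Feed Processor from the Cosmos DB .NET SDK directly (v3 shown here purely as an illustration, not taken from the thread), a rough sketch of giving each scaled-out instance a unique name could look like this; the database, container, and processor names are hypothetical:
// Sketch only: register this host instance under a unique instance name so that
// leases are balanced across the scaled-out WebJob instances.
// Database/container/processor names below are hypothetical.
CosmosClient client = new CosmosClient(connectionString);
Container monitored = client.GetContainer("mydb", "monitored");
Container leases = client.GetContainer("mydb", "leases");

ChangeFeedProcessor processor = monitored
    .GetChangeFeedProcessorBuilder<dynamic>("myProcessor",
        (changes, cancellationToken) => Task.CompletedTask)             // handle the changes here
    .WithInstanceName(Environment.MachineName + "-" + Guid.NewGuid())   // unique per instance
    .WithLeaseContainer(leases)
    .Build();

await processor.StartAsync();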
I have a webjob that I have set up to be triggered when a file is added to a directory:
[FileTrigger(@"<DIR>\<dir>\{name}", "*", WatcherChangeTypes.Created, autoDelete: true)] Stream file,
I have it configured:
var config = new JobHostConfiguration
{
JobActivator = new NinjectActivator(kernel)
};
var filesConfig = new FilesConfiguration();
#if DEBUG
filesConfig.RootPath = @"C:\Temp\";
#endif
config.UseFiles(filesConfig);
config.UseCore();
The path is for working locally, and I was expecting that commenting out the FilesConfiguration object, leaving it at its default, would allow it to pick up the connection string I have set up and trigger when files are added. This does not happen; it turns out that by default the RootPath is set to "D:\home", which produces an InvalidOperationException:
System.InvalidOperationException : Path 'D:\home\data\<DIR>\<dir>' does not exist.
How do I get the trigger to point at the file storage area of the storage account I have set up for it? I have tried removing the FilesConfiguration completely from Program.cs in the hope that it would work against the settings, but it only produces the same exception:
System.InvalidOperationException : Path 'D:\home\data\\' does not exist.
When you publish to Azure, the default root directory is D:\home\data, so when you run the WebJob it cannot find the path and you get the error message.
How do I get the trigger to point at the File storage area of the storage account I have set up for it.
The connection string you have set has two uses: one is for dashboard logging and the other is for application functionality (queues, tables, blobs).
It seems that you cannot get the FileTrigger working against Azure File storage.
So, if you want your FileTrigger to fire when you create a new file, you can go to D:\home\data\ in Kudu, create the <DIR> folder, and then create a new .txt file in it.
By the way, it seems you'd better not use autoDelete when you create the file; if you use it you will get an error like:
NotSupportedException: Use of AutoDelete is not supported when using change type 'Changed'.
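For illustration, a trigger that works against the default D:\home\data root (with no RootPath configured and without autoDelete) could look roughly like this; the "import" folder name is just an example you would create via Kudu first:
// Watches D:\home\data\import when no RootPath is set in FilesConfiguration.
// "import" is a hypothetical folder name; create it under D:\home\data via Kudu.
public static void ProcessNewFile(
    [FileTrigger(@"import\{name}", "*.txt", WatcherChangeTypes.Created)] Stream file,
    string name,
    TextWriter log)
{
    log.WriteLine("Processing file: " + name);
}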
I am new to Azure Batch. I have to perform a task on the nodes in a pool.
The approach I am using is this: I have the code that I want to run on the node, I make a zip of the jar built from the .class files and upload it to my Azure storage account as an application package, and then I get the application ID, put it in an ApplicationPackageReference, and add it to my task in the job.
String applicationId= "TaskPerformApplicationPack";
ApplicationPackageReference reference = new ApplicationPackageReference().withApplicationId(applicationId);
List<ApplicationPackageReference> list = new ArrayList<ApplicationPackageReference>();
list.add(reference);
TaskAddParameter taskToAdd = new TaskAddParameter().withId("mytask2").withApplicationPackageReferences(list);
taskToAdd.withCommandLine(String.format("java -jar task.jar"));
batchClient.taskOperations().createTask(jobId, taskToAdd);
Now when I run this, my task fails with the error:
access for one of the specified Azure Blob(s) is denied
How can I run a particular code on a node using azure batch job tasks?
I think a good place to start is the material below. (I have covered most of the helpful links along with the guided docs; they elaborate on the use of environment-level variables etc., and I have also included a few sample links.) I hope the material and samples below help you. :)
Also, I would recommend recreating your pool if it is old, which will ensure the nodes are running at the latest version.
Azure Batch learning path:
Samples & demo link, or look here.
Detailed walkthrough, depending on whether you are using CloudServiceConfiguration or VirtualMachineConfiguration: link.
Further to add from the article, also look here: Application Packages with VM configuration.
In particular, this link will take you through the guided process of how to use it in your code. Also, be it a resource file or a package, you need to make sure that they are uploaded and available for use at the Batch level.
Along with a sample (below is a pool-level package example):
// Create the unbound CloudPool
CloudPool myCloudPool =
batchClient.PoolOperations.CreatePool(
poolId: "myPool",
targetDedicatedComputeNodes: 1,
virtualMachineSize: "small",
cloudServiceConfiguration: new CloudServiceConfiguration(osFamily: "4"));
// Specify the application and version to install on the compute nodes
myCloudPool.ApplicationPackageReferences = new List<ApplicationPackageReference>
{
new ApplicationPackageReference {
ApplicationId = "litware",
Version = "1.1" }
};
// Commit the pool so that it's created in the Batch service. As the nodes join
// the pool, the specified application package is installed on each.
await myCloudPool.CommitAsync();
For the task level, from the link above, a sample is below (make sure you have followed the steps mentioned there correctly):
CloudTask task =
new CloudTask(
"litwaretask001",
"cmd /c %AZ_BATCH_APP_PACKAGE_LITWARE%\\litware.exe -args -here");
task.ApplicationPackageReferences = new List<ApplicationPackageReference>
{
new ApplicationPackageReference
{
ApplicationId = "litware",
Version = "1.1"
}
};
Further to add: be it CloudServiceConfiguration or VirtualMachineConfiguration, an application package is a .zip file that contains the application binaries and supporting files that are required for your tasks to run the application. Each application package represents a specific version of the application (from reference 4).
I gave it a shot and it worked for me, so I am not able to replicate the error above; it seems like you might be missing something.
I'm trying to understand the various ways to configure the Diagnostics in Windows Azure.
So far I've set up a diagnostics.wadcfg that is properly used by Azure, as I can retrieve its content in the XML blob stored by Diagnostics in the wad-control-container (and the tables are updated at the correct refresh rate).
Now I would like to override some fields from the cscfg, for example in order to boost the log transfer period for all instances (without having to update each wad-control-container file, which would be erased in case of an instance recycle anyway).
So in my WebRole.Run(), I get a parameter from RoleEnvironment.GetConfigurationSettingValue() and try to apply it to the current config; but my problem is that the values I read from DiagnosticMonitor.GetDefaultInitialConfiguration() do not correspond to the content of my diagnostics.wadcfg, and setting new values there doesn't seem to have any effect.
Can anyone explain the relationship between what's taken from diagnostics.wadcfg and the values you can set at run-time?
Thanks
GetDefaultInitialConfiguration() will not return your current settings because, as its name states, it returns a default configuration. You have to use the GetCurrentConfiguration method if you need the configuration that is actually in place.
However, if you just need to boost the transfer, you could use, for example, Cerebrata's Azure Diagnostics Manager to quickly kick off an on-demand transfer for your roles.
You could also use the Windows Azure Diagnostics Management cmdlets for PowerShell. Check out this article.
Hope this helps!
In order to utilize the values in the wadcfg file, the following code can be used to access the current DiagnosticMonitorConfiguration:
// WADStorageConnectionString is the name of your diagnostics storage connection string
// setting, typically "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
var cloudStorageAccount = CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue(WADStorageConnectionString));
var roleInstanceDiagnosticManager = cloudStorageAccount.CreateRoleInstanceDiagnosticManager(
    RoleEnvironment.DeploymentId,
    RoleEnvironment.CurrentRoleInstance.Role.Name,
    RoleEnvironment.CurrentRoleInstance.Id);
var dmc = roleInstanceDiagnosticManager.GetCurrentConfiguration();
// Set different logging settings, e.g. a shorter transfer period
dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
dmc.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
// don't forget to update
roleInstanceDiagnosticManager.SetCurrentConfiguration(dmc);
The code by Boris Lipshitz doesn't work now (Breaking Changes in Windows Azure Diagnostics (SDK 2.0)): "the DeploymentDiagnosticManager constructor now accepts a connection string to the storage account instead of a CloudStorageAccount object".
Updated code for SDK 2.0+:
var roleInstanceDiagnosticManager = new RoleInstanceDiagnosticManager(
// Add StorageConnectionString to your role settings for this to work
CloudConfigurationManager.GetSetting("StorageConnectionString"),
RoleEnvironment.DeploymentId,
RoleEnvironment.CurrentRoleInstance.Role.Name,
RoleEnvironment.CurrentRoleInstance.Id);
var dmc = roleInstanceDiagnosticManager.GetCurrentConfiguration();
// Set different logging settings, e.g. a shorter transfer period
dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
dmc.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
// don't forget to update
roleInstanceDiagnosticManager.SetCurrentConfiguration(dmc);
I am trying to run a C executable on Azure. I have many worker roles and they continuously check a job queue. If there is a job in the queue, a worker role runs an instance of the C executable as a process, according to the command-line arguments stored in a job class. The C executable normally creates some log files. I do not know how to access those created files. What is the logic behind it? Where are the created files stored? Can anyone explain this to me? I am new to Azure and C#.
One other problem is that all of the working instances of the C executable need to read a data file. How can I distribute that required file?
First, realize that in Windows Azure, your worker role is simply running inside a Windows 2008 Server environment (either SP2 or R2). When you deploy your app, you would deploy your C executable as well (or grab it from blob storage, but that's a bit more advanced). To find out where your app lives on disk, call Environment.GetEnvironmentVariable("RoleRoot") - that returns a path. You'd typically have your app sitting in a folder called AppRoot under the role root. You'd find your C executable there.
Next, you'll want your app to write its files to an output directory you specify on the command line. You can set up local storage for your role's VM via the role's properties: look at the Local Storage tab and configure a named local storage area.
Now you can get the path to that storage area, in code, and pass it as a command line argument:
var outputStorage = RoleEnvironment.GetLocalResource("MyLocalStorage");
var outputFile = Path.Combine(outputStorage.RootPath, "myoutput.txt");
var cmdline = String.Format("--output {0}", outputFile);
Here's an example of launching your myapp.exe process, with command line arguments:
var appRoot = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot")
    + @"\", @"approot");
var myProcess = new Process()
{
    StartInfo = new ProcessStartInfo(Path.Combine(appRoot, @"myapp.exe"), cmdline)
    {
        CreateNoWindow = false,
        UseShellExecute = false,
        WorkingDirectory = appRoot
    }
};
myProcess.Start();
myProcess.WaitForExit();
Normally you'd set CreateNoWindow to true, but it's easier to debug if you can see the command shell window.
Last thing: Once your app is done creating the file, you'll want to either:
Process it and delete it (it's not in a durable place so eventually it'll disappear)
Change your storage to use a Cloud Drive (durable storage)
Copy your file to a blob (durable storage)
In production, you'll want to add exception-handling, and you can re-route stdout and stderr to be captured. But this sample code should be enough to get you started.
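For the stdout/stderr capture mentioned above, one way to sketch it (building on the same ProcessStartInfo, with the output simply traced) is:
// Sketch: capture the child process output instead of showing a console window.
var proc = new Process
{
    StartInfo = new ProcessStartInfo(Path.Combine(appRoot, @"myapp.exe"), cmdline)
    {
        CreateNoWindow = true,
        UseShellExecute = false,
        RedirectStandardOutput = true,
        RedirectStandardError = true,
        WorkingDirectory = appRoot
    }
};
proc.OutputDataReceived += (s, e) => { if (e.Data != null) Trace.WriteLine(e.Data); };
proc.ErrorDataReceived += (s, e) => { if (e.Data != null) Trace.TraceError(e.Data); };
proc.Start();
proc.BeginOutputReadLine();
proc.BeginErrorReadLine();
proc.WaitForExit();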
OOPS - one more 'one more thing': When adding your 'myapp.exe' to your project, be SURE to go to its Properties, and set 'Copy to Output Directory' to 'Copy Always' - otherwise your myapp.exe file won't end up in Windows Azure and you'll wonder why things don't work.
EDIT: Pushing results to a blob - a quick example
First, set up a storage account and add it to your role's Settings. Say you called it 'AzureStorage' - now set it up in code, get a reference to a blob container, get a reference to a blob within that container, and then upload your file to the blob:
CloudStorageAccount storageAccount = CloudStorageAccount.FromConfigurationSetting("AzureStorage");
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer outputfiles = blobClient.GetContainerReference("outputfiles");
outputfiles.CreateIfNotExist();
var blobname = "myoutput.txt";
var blob = outputfiles.GetBlobReference(blobname);
blob.UploadFile(outputFile);
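For the second part of the question (distributing the shared data file), the same storage account can be used the other way around: upload the file to a blob once, and have each instance download it into its local storage when it starts. A small sketch using the same client objects as above; the container, blob, and file names are hypothetical:
// Download a shared input file from blob storage into this instance's local storage.
CloudBlobContainer inputFiles = blobClient.GetContainerReference("inputfiles");
var dataBlob = inputFiles.GetBlobReference("shared-data.dat");
var localDataPath = Path.Combine(outputStorage.RootPath, "shared-data.dat");
dataBlob.DownloadToFile(localDataPath);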
In Azure land you shouldn't write to the file system. You should write to SQL Azure, Table Storage, or (most likely in this case) Blob storage. Basically, I think you should treat blob storage as the old file system.
This is because:
You could have multiple instances running and you will end up having different files on different instances (which are just virtual machines)
Your instance could potentially be moved at any moment and you would lose the info on the file system as it's not part of your deployment package.
Using one of the three storage options will provide a central repository for all of your instances to access and it will be persisted over a redeployment.