I am new to Azure Batch. I have to perform a task on the nodes in a pool.
My approach is this: I have the code I want to run on the node, I zip the jar built from my .class files and upload it to my Azure storage account, then I take the application ID, put it in an ApplicationPackageReference, and add that to the task in my job.
String applicationId = "TaskPerformApplicationPack";

// Reference the application package that was uploaded for the Batch account
ApplicationPackageReference reference = new ApplicationPackageReference().withApplicationId(applicationId);
List<ApplicationPackageReference> list = new ArrayList<>();
list.add(reference);

// Create the task with the package reference and the command line to run
TaskAddParameter taskToAdd = new TaskAddParameter()
        .withId("mytask2")
        .withApplicationPackageReferences(list)
        .withCommandLine("java -jar task.jar");

batchClient.taskOperations().createTask(jobId, taskToAdd);
Now when I run this, my task fails with the error:
access for one of the specified Azure Blob(s) is denied
How can I run particular code on a node using Azure Batch job tasks?
I think a good place to start is below: I have collected the most helpful links and guided docs (they also elaborate on the use of environment-level variables, etc.), and I have included a few sample links as well. I hope the material and samples below help you. :)
I would also recommend recreating your pool if it is old, which will ensure the nodes are running the latest version.
Azure Batch learning path:
Samples & demo link, or look here.
Detailed walk-through depending on what you are using, i.e. CloudServiceConfiguration or VirtualMachineConfiguration: link.
Further to add from the article, also look here: Application Packages with VM configuration.
In particular, this link will take you through the guided process of how to use it in your code. Also, be it a resource file or a package, you need to make sure it is uploaded and available for use at the Batch level.
Along with a sample (below is a pool-level package example):
// Create the unbound CloudPool
CloudPool myCloudPool =
    batchClient.PoolOperations.CreatePool(
        poolId: "myPool",
        targetDedicatedComputeNodes: 1,
        virtualMachineSize: "small",
        cloudServiceConfiguration: new CloudServiceConfiguration(osFamily: "4"));

// Specify the application and version to install on the compute nodes
myCloudPool.ApplicationPackageReferences = new List<ApplicationPackageReference>
{
    new ApplicationPackageReference
    {
        ApplicationId = "litware",
        Version = "1.1"
    }
};

// Commit the pool so that it's created in the Batch service. As the nodes join
// the pool, the specified application package is installed on each.
await myCloudPool.CommitAsync();
For the task level, from the link above, a sample is as follows (make sure you have followed the steps mentioned there correctly):
CloudTask task =
    new CloudTask(
        "litwaretask001",
        "cmd /c %AZ_BATCH_APP_PACKAGE_LITWARE%\\litware.exe -args -here");

task.ApplicationPackageReferences = new List<ApplicationPackageReference>
{
    new ApplicationPackageReference
    {
        ApplicationId = "litware",
        Version = "1.1"
    }
};
Further to add: be it CloudServiceConfiguration or VirtualMachineConfiguration, an application package is **a .zip file** that contains the application binaries and supporting files that are required for your tasks to run the application. Each application package represents a specific version of the application (from reference 4).
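To tie this back to the question's java -jar scenario: on the node, the package is extracted into a folder that is exposed through an environment variable whose name includes the application ID (and version). The exact variable name differs between Windows and Linux nodes and between configurations, so treat the one below as an assumption and verify it on a node (for example with printenv on Linux or set on Windows). A rough task-level sketch in the same C# style as the samples above; the package ID "TaskPerformApplicationPack", version "1.0" and the variable name are placeholders:

// requires Microsoft.Azure.Batch
// Hypothetical task for a Linux VirtualMachineConfiguration pool.
// The AZ_BATCH_APP_PACKAGE_* variable name below is an assumption; verify it on a node.
CloudTask jarTask = new CloudTask(
    "mytask2",
    "/bin/bash -c \"java -jar $AZ_BATCH_APP_PACKAGE_taskperformapplicationpack_1_0/task.jar\"");

jarTask.ApplicationPackageReferences = new List<ApplicationPackageReference>
{
    new ApplicationPackageReference
    {
        ApplicationId = "TaskPerformApplicationPack",
        Version = "1.0"
    }
};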
I gave it a shot myself and it worked, so I am not able to reproduce the error above; it seems like you might be missing something.
We are trying to set up a workflow for delivering content model changes to our other environments (stage & prod).
Right now, our approach is this:
1. Create a new Contentful field as a migration script using the Contentful CLI.
2. Run the script in local dev to make sure the result is as desired, using contentful space migration migrations/2023-01-12-add-field.ts.
3. Add the script to Git in the folder migrations/[date]-[description].js.
4. When releasing to prod, run all scripts in the migrations folder, in order, as part of the build process.
5. When the folder contains "too many" scripts, and we are certain all changes have been applied to all envs, manually remove the scripts from Git and start over with an empty folder.
Where it fails
But between points 4 & 5 there will be cases where a script has already been run in an earlier release, and that throws an error:
I would like the scripts to continue more gracefully without throwing an error, but I can't find support for it in the space migration docs. I have tried wrapping the code in try/catch without any luck.
Contentful recommends using the Content Migration API in favour of the Content Management API since it is faster. I guess we could use the Content Management API, but at the same time we want to follow "best practice".
Are we missing something here?
I would like to do something like:
module.exports = function (migration) {
  // Only add the new field to the blog post content type if it does not already exist.
  const blogPost = migration.editContentType('blogPost')

  if (blogPost.fieldExists('testField')) {
    console.log('Field already exists')
  } else {
    blogPost.createField('testField').name('Test field').type('Symbol')
  }
}
I have lots of run files, created by running PyTorch estimator / ScriptRunStep experiments, that are saved in an azureml blob storage container. Previously, I had been viewing these runs in the Experiments tab of the ml.azure.com portal and associating tags with them to categorise and load the desired models.
However, a coworker recently deleted my workspace. I created a new one which is connected to the previously existing blob container, so the run files still exist and can be accessed from this new workspace, but they no longer show up in the Experiment viewer on ml.azure.com. Neither can I see the tags I had associated with the runs.
Is there any way to load these old run files into the Experiment viewer or is it only possible to view runs created inside the current workspace?
Sample ScriptRunConfig code:
data_ref = DataReference(datastore=ds,
                         data_reference_name="<name>",
                         path_on_datastore="<path>")

args = ['--data_dir', str(data_ref),
        '--num_epochs', 30,
        '--lr', 0.01,
        '--classifier', 'int_ext']

src = ScriptRunConfig(source_directory='.',
                      arguments=args,
                      compute_target=compute_target,
                      environment=env,
                      script='train.py')

src.run_config.data_references = {data_ref.data_reference_name: data_ref.to_config()}
Sorry for your loss! First, I'd make absolutely sure that you can't recover the deleted workspace. It is definitely worthwhile to open a priority support ticket with Azure.
Another thing you might try is:
create a new workspace (which will create a new storage account for the new workspace's logs)
copy your old workspace's data into the new workspace's storage account (for example with AzCopy or Azure Storage Explorer)
I'm looking to use the Change Feed Processor SDK to monitor an Azure Cosmos DB collection for changes; however, I have not seen clear documentation about whether the host can be run as an Azure Web Job. Can it? And if yes, are there any known issues or limitations versus running it as a console app?
There are a good number of blog posts about using the CFP SDK; however, most of them vaguely mention running the host on a VM, and none of them show an example of running the host as an Azure Web Job.
Even if it is possible, a side question is: if such a host is deployed as a continuous Web Job and I set the Web Job's "Scale" setting to Multi Instance, what are the approaches or recommendations for making the extra instances run with different instance names, which the CFP SDK requires?
According to my research, the Cosmos DB trigger can be implemented with the WebJobs SDK.
static async Task Main()
{
    var builder = new HostBuilder();
    builder.ConfigureWebJobs(b =>
    {
        b.AddAzureStorageCoreServices();

        // Register the Cosmos DB extension (change feed trigger support)
        b.AddCosmosDB(a =>
        {
            a.ConnectionMode = ConnectionMode.Gateway;
            a.Protocol = Protocol.Https;
            a.LeaseOptions.LeasePrefix = "prefix1";
        });
    });

    var host = builder.Build();
    using (host)
    {
        await host.RunAsync();
    }
}
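The host above only wires up the extension; a triggered method is what actually receives the change-feed batches. A minimal sketch, assuming hypothetical database, collection and connection-setting names:

// requires Microsoft.Azure.WebJobs, Microsoft.Azure.Documents, Microsoft.Extensions.Logging
public static class ChangeFeedFunctions
{
    // The database, collection and connection-setting names below are placeholders.
    public static void ProcessChanges(
        [CosmosDBTrigger(
            databaseName: "mydb",
            collectionName: "mycollection",
            ConnectionStringSetting = "CosmosDBConnection",
            LeaseCollectionName = "leases",
            CreateLeaseCollectionIfNotExists = true)] IReadOnlyList<Document> changes,
        ILogger logger)
    {
        foreach (var doc in changes)
        {
            logger.LogInformation("Changed document: {id}", doc.Id);
        }
    }
}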
But it seems that only the NuGet package for the C# SDK can be used; there are no clues for other languages. So you could refer to Compare Functions and WebJobs to balance your needs and cost.
The Cosmos DB Trigger for Azure Functions is actually a WebJobs extension: https://github.com/Azure/azure-webjobs-sdk-extensions/tree/dev/src/WebJobs.Extensions.CosmosDB
And it uses the Change Feed Processor.
Functions run on top of WebJobs technology. So to answer the question: yes, you can run the Change Feed Processor on WebJobs, just make sure that:
Your App Service is set to Always On
If you plan to use multiple instances, make sure to set the InstanceName accordingly and not to a static/fixed value; probably something that identifies the WebJob instance (see the sketch below).
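To make the instance naming concrete: with the Change Feed Processor library v2 (Microsoft.Azure.Documents.ChangeFeedProcessor), the instance name is the host name you pass to the builder. A rough sketch, assuming placeholder URIs, keys and collection names; check the builder methods against the library version you actually use:

// requires Microsoft.Azure.Documents.ChangeFeedProcessor
// Give each running WebJob instance its own host name so leases are balanced correctly.
// All URIs, keys and names below are placeholders.
string hostName = $"cfp-host-{Environment.MachineName}-{Guid.NewGuid():N}";

var feedCollection = new DocumentCollectionInfo
{
    Uri = new Uri("https://myaccount.documents.azure.com:443/"),
    MasterKey = "<key>",
    DatabaseName = "mydb",
    CollectionName = "mycollection"
};

var leaseCollection = new DocumentCollectionInfo
{
    Uri = new Uri("https://myaccount.documents.azure.com:443/"),
    MasterKey = "<key>",
    DatabaseName = "mydb",
    CollectionName = "leases"
};

IChangeFeedProcessor processor = await new ChangeFeedProcessorBuilder()
    .WithHostName(hostName)              // unique per instance, not a fixed value
    .WithFeedCollection(feedCollection)
    .WithLeaseCollection(leaseCollection)
    .WithObserver<MyChangeObserver>()    // your IChangeFeedObserver implementation (placeholder)
    .BuildAsync();

await processor.StartAsync();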
What is the script to update a deployment in Linux (from the GUI, we can do this update by unlock & save changes)? Is it possible to do this? If not, what is the script to redeploy?
As Kevin pointed out, WLST is the way to go. You should probably craft a script (named wlDeploy.py, for instance) with content like the following (import clauses omitted for the sake of simplicity):
current_app_name = '[your current deployed app name]'
new_app_name = '[your new app name]'
target_name = '[WL managed server name (or AdminServer)]'
connect([username],[pwd],'t3://[admin server hostname/IP address]:[PORT]')
stopApplication(current_app_name)
undeploy(current_app_name, timeout=60000);
war_path = '[path to war file]'
deploy(appName=new_app_name, path=war_path, targets=target_name);
And call it via something like:
./wlst.sh wlDeploy.py
Of course you can add parameters to your script, and a lot of logic that is relevant to your deployment. This is entirely up to you. The example above, though, should help you get started.
In WebLogic you can use WLST to perform administrative tasks like managing deployments. If you google "weblogic wlst", you will find tons of information. WLST scripts are written in Python (it runs on Jython).
Assuming you are using WebLogic 10, you can also "Record" your actions in the admin console. This will save the actions into a Python script which you can "replay" (execute) later.
How do I create new instances of a role via C# using the Azure emulator? Is there a guide about that? There are some manuals about creating instances in the cloud, but not in the emulator.
So far I know that:
I need to change a config file. Is it the config in the solution or in some temp deployment folder?
I need to use the csrun tool. How do I pick the params?
UPD
Got it.
To change the count of instances on the emulator, you have to:
update the 'ServiceConfiguration.cscfg' file in the bin folder
run the 'csrun' tool with params: string.Format("/update:{0};\"{1}\"", deploymentId, "<path to ServiceConfiguration.cscfg>")
where deploymentId:
// extract the numeric id inside the parentheses of RoleEnvironment.DeploymentId
var pattern = Regex.Escape("(") + @"\d+" + Regex.Escape(")");
var input = RoleEnvironment.DeploymentId;
var m = Regex.Match(input, pattern);
var deploymentId = m.ToString().Replace("(", string.Empty).Replace(")", string.Empty);
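If you want to run csrun from code rather than by hand, a rough sketch is below; the csrun.exe path is an assumption (it varies with the SDK version installed), and deploymentId comes from the snippet above:

// requires System.Diagnostics
// Invoke the emulator's csrun.exe with the /update switch built from the values above.
// The install path is an assumption; adjust it for your SDK version.
var csrunPath = @"C:\Program Files\Microsoft SDKs\Windows Azure\Emulator\csrun.exe";
var cscfgPath = @"<path to ServiceConfiguration.cscfg>";
var arguments = string.Format("/update:{0};\"{1}\"", deploymentId, cscfgPath);

var startInfo = new ProcessStartInfo(csrunPath, arguments)
{
    UseShellExecute = false,
    RedirectStandardOutput = true
};

using (var process = Process.Start(startInfo))
{
    Console.WriteLine(process.StandardOutput.ReadToEnd());
    process.WaitForExit();
}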
If you have trouble running csrun via code, read this:
http://social.msdn.microsoft.com/Forums/en/windowsazuredevelopment/thread/62ca1372-2388-4181-9dbd-8fbba470ea77
In the local emulator, you need to modify the CSCFG file under the deployment .csx folder, instead of the one in your source code folder, since the local emulator fires your application up from that folder.
Once you have modified and saved your CSCFG file (for example the count of the instances), you can retrieve the new value from your code immediately. But if you want the local emulator to detect these changes and perform the related actions, such as increasing the VMs or invoking the Configuration_Changed method, you need to execute
csrun /update:<deployment id>;<path to ServiceConfiguration.cscfg>
You can retrieve the deployment id from the compute emulator UI.
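For completeness, reading the value back from code once the emulator has applied the update can be done through the role environment APIs; a small sketch, assuming the Microsoft.WindowsAzure.ServiceRuntime assembly is referenced:

// requires Microsoft.WindowsAzure.ServiceRuntime and System.Diagnostics
// Current number of instances of this role, as seen by the running role instance.
int instanceCount = RoleEnvironment.CurrentRoleInstance.Role.Instances.Count;
Trace.TraceInformation("Current instance count: {0}", instanceCount);

// Fires after the emulator applies a configuration or topology change.
RoleEnvironment.Changed += (sender, e) =>
{
    Trace.TraceInformation("Instances after change: {0}",
        RoleEnvironment.CurrentRoleInstance.Role.Instances.Count);
};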
You can find the instance count in the ServiceConfiguration.cscfg in your Azure project