Azure batch job task not running on the specified pool?

I created an Azure Batch pool with a custom image. Now, when I create a job on the pool, add a task to the job, and run it, the task fails because it does not find the Python dependency package I preinstalled through the custom image; moreover, in the task's overview, under 'Pool' it says 'n/a'. Does that mean the pool I created from the custom image is not being used by the task, and that is why my packages are missing?
By the way, if I log on to a node in the pool, I can see that my packages do exist on the nodes.
Thanks.

Using a custom image saves time in preparing your pool's compute nodes
to run your Batch workload. While you can use an Azure Marketplace
image and install software on each compute node after provisioning,
using a custom image might be more efficient.
Going by that description, the only difference between a custom image and an Azure Marketplace image is that the applications are already installed in your custom image, so you can run tasks on the pool's nodes no matter which image you used.
As for the issue you hit, I think it is down to the user account the tasks run under. That user's permissions can limit access to the applications on the nodes. For more details, see Run tasks under user accounts in Batch. I suggest trying an admin user to run your tasks (a sketch follows). Hope this helps.
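For illustration, here is a minimal sketch with the azure-batch Python SDK that submits a task under an auto-user with admin elevation; the account, job and task ids are placeholders, and the command line is just a hypothetical check of the preinstalled package:

```python
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials
import azure.batch.models as batchmodels

# Hypothetical account details.
credentials = SharedKeyCredentials("<batch-account>", "<account-key>")
batch_client = BatchServiceClient(
    credentials, batch_url="https://<batch-account>.<region>.batch.azure.com")

# Run the task as an auto-user elevated to admin, scoped to the pool.
admin_identity = batchmodels.UserIdentity(
    auto_user=batchmodels.AutoUserSpecification(
        scope=batchmodels.AutoUserScope.pool,
        elevation_level=batchmodels.ElevationLevel.admin,
    )
)

batch_client.task.add(
    job_id="my-job",  # hypothetical job created on the custom-image pool
    task=batchmodels.TaskAddParameter(
        id="check-package",
        command_line="/bin/bash -c \"python3 -c 'import mypackage'\"",  # hypothetical import check
        user_identity=admin_identity,
    ),
)
```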

Related

Azure Automation Use Case

I have a certain Python script, relatively memory- and CPU-intensive, that needs to be automated. In a monthly process it runs ~300 times, and each run takes somewhere between 10 and 24 hours to complete, depending on the input. It takes certain CSV file(s) as input and produces certain file(s) as output after processing. Each run is independent.
We need to use configs and be able to pass command-line arguments to the script. Certain imports that are not standard Python packages need to be installed as well (requirements.txt). We also need to take care of the logging pipeline (EFK) setup: Elasticsearch and Kibana can be centralised, but where should the log files and the Fluentd config live?
The last bit is monitoring: will we be able to restart in case of an unexpected shutdown?
What is the best way to automate this, and which tools and technologies should we use?
My thoughts
Create a Docker image of the whole setup (Python script, Fluentd config, Python packages, etc.). Then somehow auto-deploy this image (to a VM, or something else?), execute the Python process, save the output files to some central location (a data lake, for example), and destroy the instance upon successful completion of the process.
So, is what I'm thinking possible in Azure? If it is, which cloud components do I need to explore to answer my 'somehows' and 'somethings'? If not, what is probably the best solution for my use case?
Any lead would be much appreciated. Thanks.
Normally, for short-lived jobs I'd say use an Azure Function. The thing is, they have a maximum runtime of 10 minutes unless you put them on an App Service plan, but that will cost more unless you manually stop/start the plan.
If you can containerize the whole thing, I recommend Azure Container Instances, because you then only pay for what you actually use. You can use an Azure Function to start the container, based on an HTTP request, a timer or something like that.
You can set a restart policy to indicate what should happen in case of unexpected failures, see the docs.
Configuration can be passed from the Azure Function to the container instance or you could leverage the Azure App Configuration service.
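As a rough sketch of that idea, the snippet below uses the azure-mgmt-containerinstance Python SDK to create a container group with an 'OnFailure' restart policy and per-run configuration passed via command-line arguments and environment variables; the subscription, registry, image, resource group and parameter names are all placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.containerinstance.models import (
    Container, ContainerGroup, EnvironmentVariable,
    ResourceRequests, ResourceRequirements)

client = ContainerInstanceManagementClient(
    DefaultAzureCredential(), "<subscription-id>")

container = Container(
    name="monthly-run",
    image="<registry>.azurecr.io/my-script:latest",  # image with the script + requirements.txt baked in
    resources=ResourceRequirements(
        requests=ResourceRequests(cpu=2.0, memory_in_gb=8.0)),
    # Per-run configuration: command-line arguments and environment variables.
    command=["python", "main.py", "--input", "input_001.csv"],
    environment_variables=[EnvironmentVariable(name="RUN_ID", value="001")],
)

group = ContainerGroup(
    location="westeurope",
    os_type="Linux",
    containers=[container],
    restart_policy="OnFailure",  # restart only after unexpected failures
)

# Newer SDK versions expose begin_create_or_update; older ones call it create_or_update.
client.container_groups.begin_create_or_update(
    "<resource-group>", "monthly-run", group).result()
```

An Azure Function (HTTP- or timer-triggered) could run code like this to spin up one container group per run and tear it down afterwards.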
Though I don't know all the details, this sounds like a good candidate for Azure Batch. There is no additional charge for using Batch. You only pay for the underlying resources consumed, such as the virtual machines, storage, and networking. Batch works well with intrinsically parallel (also known as "embarrassingly parallel") workloads.
The following high-level workflow is typical of nearly all applications and services that use the Batch service for processing parallel workloads:
Basic Workflow
Upload the data files that you want to process to an Azure Storage account. Batch includes built-in support for accessing Azure Blob storage, and your tasks can download these files to compute nodes when the tasks are run.
Upload the application files that your tasks will run. These files can be binaries or scripts and their dependencies, and are executed by the tasks in your jobs. Your tasks can download these files from your Storage account, or you can use the application packages feature of Batch for application management and deployment.
Create a pool of compute nodes. When you create a pool, you specify the number of compute nodes for the pool, their size, and the operating system. When each task in your job runs, it's assigned to execute on one of the nodes in your pool.
Create a job. A job manages a collection of tasks. You associate each job to a specific pool where that job's tasks will run.
Add tasks to the job. Each task runs the application or script that you uploaded to process the data files it downloads from your Storage account. As each task completes, it can upload its output to Azure Storage.
Monitor job progress and retrieve the task output from Azure Storage.
(source)
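A minimal sketch of the pool/job/task steps with the azure-batch Python SDK might look like the following; the account, pool, job, image and storage names are placeholders, and only a few of the ~300 runs are shown (add_collection accepts at most 100 tasks per call, so the full workload would be chunked):

```python
from azure.batch import BatchServiceClient
from azure.batch.batch_auth import SharedKeyCredentials
import azure.batch.models as batchmodels

credentials = SharedKeyCredentials("<batch-account>", "<account-key>")
batch_client = BatchServiceClient(
    credentials, batch_url="https://<batch-account>.<region>.batch.azure.com")

# Create a pool of compute nodes (Marketplace Ubuntu image; size and counts are illustrative).
batch_client.pool.add(batchmodels.PoolAddParameter(
    id="csv-pool",
    vm_size="STANDARD_D2_V3",
    virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
        image_reference=batchmodels.ImageReference(
            publisher="canonical",
            offer="0001-com-ubuntu-server-focal",
            sku="20_04-lts"),
        node_agent_sku_id="batch.node.ubuntu 20.04"),
    target_dedicated_nodes=2,
))

# Create a job bound to that pool.
batch_client.job.add(batchmodels.JobAddParameter(
    id="csv-job",
    pool_info=batchmodels.PoolInformation(pool_id="csv-pool")))

# Add one task per input file; each downloads its CSV via a SAS URL and runs the script.
tasks = [
    batchmodels.TaskAddParameter(
        id=f"task-{i}",
        command_line=f"/bin/bash -c \"python3 process.py --input input_{i}.csv\"",
        resource_files=[batchmodels.ResourceFile(
            http_url=f"https://<storage>.blob.core.windows.net/inputs/input_{i}.csv?<sas>",
            file_path=f"input_{i}.csv")],
    )
    for i in range(3)
]
batch_client.task.add_collection("csv-job", tasks)
```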
I would go with Azure DevOps and a custom agent pool. This agent pool could include some virtual machines (maybe only one) with Docker installed. I would then install all the necessary packages you mentioned in a Docker container, together with the DevOps agent (needed to communicate with the agent pool).
You could pass every parameter the containerized agents need through Azure DevOps tasks and also have a common storage layer for the build and release pipelines. This way you could manipulate/process your files in the build pipeline and then, using the same folder, create a task in the release pipeline to export/upload those files somewhere.
As this script should run many times throughout the month, you could have many containers so that more than one job can run at a given time.
I follow the same procedure in a corporate environment. I keep a Windows VM running multiple Docker containers to build different code frameworks. Each container includes different tools and is registered to a custom agent pool. Jobs are distributed across those containers, and the build and release pipelines tie the various processing steps together.
You are probably supposed to use Azure Data Factory (ADF) for moving and transforming data.
You can then also use ADF to call Azure Batch, which will run your Python code.
https://learn.microsoft.com/en-us/azure/batch/tutorial-run-python-batch-azure-data-factory
Providing more information would probably allow for better suggestions.

Azure web app deployment using VS Code is faster than DevOps pipeline

Currently, I am working on a Django-based project that is deployed to Azure App Service. There are two options for deploying to App Service: one via DevOps and another via the VS Code plugin. Both scenarios work fine, but strangely, deploying to App Service via DevOps is slower than the VS Code deployment. Via DevOps it usually takes around 17-18 minutes, whereas via VS Code it takes less than 14 minutes.
Is there any reason behind this?
Assuming you're using Microsoft hosted build agents, the following statements are true:
With Microsoft-hosted agents, maintenance and upgrades are taken care of for you. Each time you run a pipeline, you get a fresh virtual machine. The virtual machine is discarded after one use.
and
Parallel jobs represents the number of jobs you can run at the same time in your organization. If your organization has a single parallel job, you can run a single job at a time in your organization, with any additional concurrent jobs being queued until the first job completes. To run two jobs at the same time, you need two parallel jobs.
Microsoft provides a free tier of service by default in every organization that includes at least one parallel job. Depending on the number of concurrent pipelines you need to run, you might need more parallel jobs to use multiple Microsoft-hosted or self-hosted agents at the same time.
This first statement might cause an Azure Pipeline to be slower because it does not have any cached information about your project. If you're only talking about deploying, the pipeline first needs to download (and extract?) an artifact to be able to deploy it. If you're also building, it might need to bring in the entire source code and/or external packages before being able to build.
The second statement might make it slower because there might be less parallelization possible than on the local machine.
Next to these two possible reasons, the agents will most probably not have the specs of your development machine, causing them to run tasks slower than they can on your local machine.
You could look into hosting your own agents to eliminate these possible reasons.
Do self-hosted agents have any performance advantages over Microsoft-hosted agents?
In many cases, yes. Specifically:
If you use a self-hosted agent, you can run incremental builds. For example, if you define a pipeline that does not clean the repo and does not perform a clean build, your builds will typically run faster. When you use a Microsoft-hosted agent, you don't get these benefits because the agent is destroyed after the build or release pipeline is completed.
A Microsoft-hosted agent can take longer to start your build. While it often takes just a few seconds for your job to be assigned to a Microsoft-hosted agent, it can sometimes take several minutes for an agent to be allocated depending on the load on our system.
More information: Azure Pipelines Agents
When you deploy via a DevOps pipeline, you go through a lot more steps. See below:
Process the pipeline --> Request agents (wait for an available agent to be allocated to run the jobs) --> Download all the tasks needed to run the job --> Run each step in the job (download source code, restore, build, publish, deploy, etc.).
If you deploy the project in a release pipeline, the above process has to be repeated there as well.
You can check the document Pipeline run sequence for more information.
However, when you deploy via the VS Code plugin, your project is restored and built on your local machine and then deployed to the Azure web app directly from there. So deploying via the VS Code plugin is faster, since far fewer steps are needed.

Jenkins: Queue jobs if there are available Azure VMs

Situation:
I have a pipeline job that executes tests in parallel. I use Azure VMs that I start/stop on each build of the job through PowerShell. Before the job runs, it checks whether there are available VMs on Azure (offline VMs) and then uses those VMs for that build. If no VMs are available, I fail the job. Now, one of my requirements is that instead of failing the build, I need to queue the job until one of the nodes is offline/available and then use those nodes.
Problem:
Is there any way for me to do this? Is there an existing plugin or a build wrapper that will allow me to queue the job based on the status of the nodes? I was forced into this because we need to stop the Azure VMs to reduce cost.
At the moment, I am still researching whether this is possible, or whether there is any other way to achieve it. I am thinking of a Groovy script that checks the nodes and, if none are available, manually adds the job to the build queue until at least one is available. The closest plugin I have found is the Run Condition plugin, but I don't think it will work.
I am open to any approach that will help me achieve this. Thanks

AWS ECS Updating service wipes mongo container

I have a Node/Mongo app deployed using ECS. The running task contains two containers, one for my Node API and another for my Mongo database. When I push changes to my API, create a new task revision and update my service with it, the changes are deployed, but my database is wiped clean every time.
With this setup, is updating a service always going to deploy a new
mongo container?
Any chance I could revert to the previous state of that service?
Would it be better to create a separate task for each container?
Any help would be greatly appreciated
Yes, it will always deploy a new container from the image specified in the task definition.
When UpdateService stops a task during a deployment, the equivalent of
docker stop is issued to the containers running in the task. This
results in a SIGTERM and a 30-second timeout.
update-service
No. When the service is updated, it automatically removes the old container and image, because by default the Amazon ECS container agent automatically cleans up stopped tasks and Docker images that are not being used by any tasks on your container instances.
You can control this behaviour using ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION:
This variable specifies the time to wait before removing any
containers that belong to stopped tasks. The image cleanup process
cannot delete an image as long as there is a container that references
it. After images are not referenced by any containers (either stopped
or running), then the image becomes a candidate for cleanup. By
default, this parameter is set to 3 hours but you can reduce this
period to as low as 1 minute, if you need to for your application.
Yes, it would be better to have a separate task definition. Also, I would recommend mounting a volume into the DB container to avoid such losses in the future (see the sketch after the link below).
AWS ecs docker-volumes
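As a rough illustration of that last point, this boto3 sketch registers a task definition whose Mongo container mounts a named volume for /data/db; the family, image and path names are hypothetical, and for genuinely durable data an EFS volume or a managed database service would be safer than a host path:

```python
import boto3

ecs = boto3.client("ecs")

# Two-container task definition: the Mongo container writes /data/db to a named
# volume backed by a host path, so the data outlives task replacement on that instance.
ecs.register_task_definition(
    family="node-api-with-mongo",
    volumes=[{"name": "mongo-data", "host": {"sourcePath": "/ecs/mongo-data"}}],
    containerDefinitions=[
        {
            "name": "mongo",
            "image": "mongo:4.4",
            "memory": 512,
            "essential": True,
            "mountPoints": [
                {"sourceVolume": "mongo-data", "containerPath": "/data/db"}],
        },
        {
            "name": "api",
            "image": "<account>.dkr.ecr.<region>.amazonaws.com/node-api:latest",
            "memory": 512,
            "essential": True,
            "links": ["mongo"],  # bridge-mode link so the API can reach mongo:27017
        },
    ],
)
```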

When using Azure Batch Processing, what is the best way to create and use a configuration file which can change per instance?

So I'm new to Azure and am currently working on a project in which I will use Azure Batch processing to run an application in several instances with different configurations.
I was wondering what the best practice for doing this is, with regard to how easy it is to change the configuration files, how to deploy them, how to link them with source control, etc.
Any thoughts/knowledge would be helpful, as I can't seem to find much about Azure Batch and configuration files.
You would manage configuration in the Batch client, which is a normal application running outside of the Batch job. This application creates the pools, jobs and tasks, i.e. sends them to the Batch service. You can store configuration for this client in whatever way you are used to (app.config, JSON files, etc.).
Scheduling a job to run in Batch involves specifying job parameters like the pool ID, task IDs, resource files, etc., plus the command line for the executable to run. This is where you pass the parameters a particular task instance should use.
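As a hedged sketch of that last point with the azure-batch Python SDK, each task can get its own configuration through its command line and environment settings; the config file name, job id and parameters below are hypothetical:

```python
import json
import azure.batch.models as batchmodels

# Per-instance settings kept in a JSON file that lives in source control next to the client.
with open("instance_configs.json") as f:
    configs = json.load(f)   # e.g. [{"mode": "fast", "threshold": 0.5}, ...]

tasks = [
    batchmodels.TaskAddParameter(
        id=f"task-{i}",
        # Per-instance values passed on the command line...
        command_line=f"python3 app.py --mode {cfg['mode']} --threshold {cfg['threshold']}",
        # ...and/or as environment variables the application reads at start-up.
        environment_settings=[
            batchmodels.EnvironmentSetting(name="APP_MODE", value=cfg["mode"])],
    )
    for i, cfg in enumerate(configs)
]

# batch_client is an authenticated azure.batch.BatchServiceClient; the job id is hypothetical.
batch_client.task.add_collection("config-job", tasks)
```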
