Unable to host a Docker image from Azure Container Registry on Azure Batch

I am new to Docker as well as Azure Batch. The problem I currently have is that I have two .NET console applications: one of them runs locally (it creates the pool, job and task on Azure Batch programmatically), and for the second one I have created a Docker image and pushed it to Azure Container Registry. The issue is that when I create the CloudTask from the locally running application as shown below
TaskContainerSettings cmdContainerSettings = new TaskContainerSettings(
    imageName: "myrepository.azurecr.io/pipeline:latest",
    containerRunOptions: "--rm"
);
CloudTask containerTask = new CloudTask(
    id: "task1",
    commandline: cmdLine);
containerTask.ContainerSettings = cmdContainerSettings;
Console.WriteLine("Task created");
await batchClient.JobOperations.AddTaskAsync(newJobId, containerTask);
Console.WriteLine("-----------------------");
and add it to the BatchClient, the exception I get in Azure Batch (in the Azure portal) is this:
System.UnauthorizedAccessException: Access to the path '/home/_azbatch/.dotnet' is denied. ---> System.IO.IOException: Permission denied
--- End of inner exception stack trace ---
What can be the problem? Thank you.

As the comment ended up being the answer, I'm posting it here for clarity for future viewers:
The task needs to be run with elevated rights.
eg.
containerTask.UserIdentity = new UserIdentity(new AutoUserSpecification(elevationLevel: ElevationLevel.Admin, scope: AutoUserScope.Task));
See the docs for more info
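Putting the question's snippet and the fix together, a minimal sketch of the task creation with an elevated auto-user looks like this (same placeholder names as in the question; adjust the image, command line and job id to your setup):
// Container settings pointing at the image in Azure Container Registry
TaskContainerSettings cmdContainerSettings = new TaskContainerSettings(
    imageName: "myrepository.azurecr.io/pipeline:latest",
    containerRunOptions: "--rm");

// Create the task and attach the container settings
CloudTask containerTask = new CloudTask(id: "task1", commandline: cmdLine);
containerTask.ContainerSettings = cmdContainerSettings;

// Run the task as an elevated (admin) auto-user scoped to the task,
// so the process inside the container can write to paths such as /home/_azbatch/.dotnet
containerTask.UserIdentity = new UserIdentity(
    new AutoUserSpecification(elevationLevel: ElevationLevel.Admin, scope: AutoUserScope.Task));

await batchClient.JobOperations.AddTaskAsync(newJobId, containerTask);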

I am still not able to pull the image from the registry. I am using Node.js; the following is the config I use for creating the task:
const taskConfig = {
    "id": "task-new-2",
    "commandLine": "bash -c 'node index.js'",
    "containerSettings": {
        "imageName": "xxx.xx.io/xx-test:latest",
        "containerRunOptions": "--rm",
        "username": "xxx",
        "password": "tfDlZ",
        "registryServer": "xxx.xx.io",
        // "workingDirectory": "AZ_BATCH_NODE_ROOT_DIR"
    },
    "userIdentity": {
        "autoUser": {
            "scope": "pool",
            "elevationLevel": "admin"
        }
    }
}

Related

Dockerized Logic App doesn't work when the container is running, but works in debug mode in VS Code

I am trying to put an Azure Logic App inside a Docker image.
I was following some Microsoft tutorials:
This one for creating the Logic App (it is a bit outdated, but mostly still valid): https://microsoft.github.io/AzureTipsAndTricks/blog/tip304.html
And this one for making the Docker image: https://techcommunity.microsoft.com/t5/azure-developer-community-blog/azure-tips-and-tricks-how-to-run-logic-apps-in-a-docker/ba-p/3545220
The only differences between these tutorials and my POC are that I am using the Node mode of the Logic App instead of .NET Core, and the Dockerfile I am using:
FROM mcr.microsoft.com/azure-functions/node:3.0
ENV AzureWebJobsStorage DefaultEndpointsProtocol=https;AccountName=logicappsexamples;AccountKey=AHaGR5SQZYdB2LgS2+pPbsFQO3eZDZ25T5EV3mcc1ZWJXOk7QTCKEpjDcyD6lp2J9MYo+c1OcpLu+ASt8aoEWg==;EndpointSuffix=core.windows.net
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
AzureFunctionsJobHost__Logging__Console__IsEnabled=true \
FUNCTIONS_V2_COMPATIBILITY_MODE=true
ENV WEBSITE_HOSTNAME localhost
ENV WEBSITE_SITE_NAME test
ENV AZURE_FUNCTIONS_ENVIRONMENT Development
COPY . /home/site/wwwroot
RUN cd /home/site/wwwroot
The Logic App is simple: it just puts a message on a queue when you call a URL. In debug mode in VS Code everything works fine.
The problems come when I run the image. The Logic App is supposed to use a queue called "test", but when the container finishes setting up, it creates a new queue:
(screenshot: the storage account shows an extra, newly created queue alongside "test")
And in the last step of the last tutorial (https://techcommunity.microsoft.com/t5/azure-developer-community-blog/azure-tips-and-tricks-how-to-run-logic-apps-in-a-docker/ba-p/3545220), when I call the trigger URL, I don't receive anything on either queue.
I get these logs from the running container:
info: Host.Triggers.Workflows[206]
Workflow action ends.
flowName='Stateless1',
actionName='Put_a_message_on_a_queue_(V2)',
flowId='2731d82fc1324e4fb0df69fd5c549d72',
flowSequenceId='08585288407219836302',
flowRunSequenceId='08585288406768053370693349962CU00',
correlationId='ebf4e18e-405f-41c8-bb6a-bdf84d4a7a16',
status='Failed',
statusCode='BadRequest',
error='', durationInMilliseconds='910',
inputsContentSize='-1',
outputsContentSize='-1',
extensionVersion='1.2.18.1',
siteName='test',
slotName='',
actionTrackingId='1439f372-4ce0-4709-b9ab-ee18db8839ae',
clientTrackingId='08585288406768053370693349962CU00',
properties='{
"$schema":"2016-06-01",
"startTime":"2023-01-03T17:16:48.9639703Z",
"endTime":"2023-01-03T17:16:49.8743232Z",
"status":"Failed",
"code":"BadRequest",
"executionClusterType":"Classic",
"resource":{
"workflowId":"2731d82fc1324e4fb0df69fd5c549d72",
"workflowName":"Stateless1",
"runId":"08585288406768053370693349962CU00",
"actionName":"Put_a_message_on_a_queue_(V2)"
},
"correlation":{
"actionTrackingId":"1439f372-4ce0-4709-b9ab-ee18db8839ae",
"clientTrackingId":"08585288406768053370693349962CU00"
},
"api":{}
}',
actionType='ApiConnection',
sequencerType='Linear',
flowScaleUnit='cu00',
platformOptions='RunDistributionAcrossPartitions, RepetitionsDistributionAcrossSequencers, RunIdTruncationForJobSequencerIdDisabled, RepetitionPreaggregationEnabled',
retryHistory='',
failureCause='',
overrideUsageConfigurationName='',
hostName='',
activityId='46860f56-96bc-462a-b9bc-3aed4ad6464c'.
info: Host.Triggers.Workflows[202]
Workflow run ends.
flowName='Stateless1',
flowId='2731d82fc1324e4fb0df69fd5c549d72',
flowSequenceId='08585288407219836302',
flowRunSequenceId='08585288406768053370693349962CU00',
correlationId='ebf4e18e-405f-41c8-bb6a-bdf84d4a7a16',
extensionVersion='1.2.18.1',
siteName='test',
slotName='',
status='Failed',
statusCode='ActionFailed',
error='{
"code":"ActionFailed",
"message":"An action failed. No dependent actions succeeded."
}',
durationInMilliseconds='1202',
clientTrackingId='08585288406768053370693349962CU00',
properties='{
"$schema":"2016-06-01",
"startTime":"2023-01-03T17:16:48.7228752Z",
"endTime":"2023-01-03T17:16:50.0174324Z",
"status":"Failed",
"code":"ActionFailed",
"executionClusterType":"Classic",
"resource":{
"workflowId":"2731d82fc1324e4fb0df69fd5c549d72",
"workflowName":"Stateless1",
"runId":"08585288406768053370693349962CU00",
"originRunId":"08585288406768053370693349962CU00"
},
"correlation":{
"clientTrackingId":"08585288406768053370693349962CU00"
},
"error":{
"code":"ActionFailed",
"message":"An action failed. No dependent actions succeeded."
}
}',
sequencerType='Linear',
flowScaleUnit='cu00',
platformOptions='RunDistributionAcrossPartitions, RepetitionsDistributionAcrossSequencers, RunIdTruncationForJobSequencerIdDisabled, RepetitionPreaggregationEnabled',
kind='Stateless',
runtimeOperationOptions='None',
usageConfigurationName='',
hostName='',
activityId='c6ca440e-fef5-457a-a4cb-9d5a3d806518'.
info: Host.Triggers.Workflows[203]
Workflow trigger starts.
flowName='Stateless1',
triggerName='manual',
flowId='2731d82fc1324e4fb0df69fd5c549d72',
flowSequenceId='08585288407219836302',
extensionVersion='1.2.18.1',
siteName='test',
slotName='',
status='',
statusCode='',
error='',
durationInMilliseconds='-1',
flowRunSequenceId='08585288406768053370693349962CU00',
inputsContentSize='-1',
outputsContentSize='-1',
clientTrackingId='08585288406768053370693349962CU00',
properties='{
"$schema":"2016-06-01",
"startTime":"2023-01-03T17:16:48.6768387Z",
"status":"Succeeded",
"fired":true,
"resource":{
"workflowId":"2731d82fc1324e4fb0df69fd5c549d72",
"workflowName":"Stateless1",
"runId":"08585288406768053370693349962CU00",
"triggerName":"manual"
},
"correlation":{
"clientTrackingId":"08585288406768053370693349962CU00"
},
"api":{}
}',
triggerType='Request',
flowScaleUnit='cu00',
triggerKind='Http',
sourceTriggerHistoryName='',
failureCause='',
hostName='',
activityId='ebf4e18e-405f-41c8-bb6a-bdf84d4a7a16'.
info: Host.Triggers.Workflows[204]
Workflow trigger ends.
flowName='Stateless1',
triggerName='manual',
flowId='2731d82fc1324e4fb0df69fd5c549d72',
flowSequenceId='08585288407219836302',
status='Succeeded',
statusCode='',
error='',
extensionVersion='1.2.18.1',
siteName='test',
slotName='',
durationInMilliseconds='1348',
flowRunSequenceId='08585288406768053370693349962CU00',
inputsContentSize='-1',
outputsContentSize='-1',
clientTrackingId='08585288406768053370693349962CU00',
properties='{
"$schema":"2016-06-01",
"startTime":"2023-01-03T17:16:48.6768387Z",
"endTime":"2023-01-03T17:16:50.0319177Z",
"status":"Succeeded",
"fired":true,
"resource":{
"workflowId":"2731d82fc1324e4fb0df69fd5c549d72",
"workflowName":"Stateless1",
"runId":"08585288406768053370693349962CU00",
"triggerName":"manual"
},
"correlation":{
"clientTrackingId":"08585288406768053370693349962CU00"
},
"api":{}
}',
triggerType='Request',
flowScaleUnit='cu00',
triggerKind='Http',
sourceTriggerHistoryName='',
failureCause='',
overrideUsageConfigurationName='',
hostName='',
activityId='ebf4e18e-405f-41c8-bb6a-bdf84d4a7a16'.
info: Function.Stateless1[2]
Executed 'Functions.Stateless1' (Succeeded, Id=10af0f54-765a-4954-a42f-373ceb58c94b, Duration=1545ms)
So, what am I doing wrong? It looks like the queue name ("test") is not properly passed to the Docker image, and for that reason the image creates a new one... but how can I fix it?
I would greatly appreciate any help... I can't find anything clear on the internet.
Thanks!

.NET Core application not reading the ASPNETCORE_ENVIRONMENT value?

I deployed an ASP.NET Core 7 application to a Linux Web App in Azure.
When I access the URL I get an Application Error and the Logs shows:
System.IO.FileNotFoundException:
The configuration file 'settings..json' was not found and is not optional.
It seems the environment value is missing; the file it should be loading is:
settings.production.json
In the Azure Application Service Configuration I have:
[
    {
        "name": "ASPNETCORE_ENVIRONMENT",
        "value": "production",
        "slotSetting": false
    }
]
And the application Program.cs code is:
Serilog.Log.Logger = new Serilog.LoggerConfiguration()
    .WriteTo.Console(LogEventLevel.Verbose)
    .CreateBootstrapLogger();

try {
    Serilog.Log.Information("Starting up");

    WebApplicationBuilder builder = WebApplication.CreateBuilder(new WebApplicationOptions {
        Args = args,
        WebRootPath = "webroot"
    });

    builder.Configuration
        .SetBasePath(Directory.GetCurrentDirectory())
        .AddJsonFile("settings.json", false, true)
        .AddJsonFile($"settings.{Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT")}.json", false, true)
        .AddEnvironmentVariables();

    // Remaining code
Am I doing something wrong, or did something change in .NET 7?
In short, this problem occurs because the settings.production.json file was not included in the published output.
We can verify this by uploading the 'settings.production.json' file to the SCM (Kudu) site. The URL is https://your_appname_azurewebsites.net/newui .
Solution:
Official doc: Include files
Sample: use ResolvedFileToPublish in an ItemGroup, for example as sketched below.
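As a rough sketch of what the linked doc describes (the target name here is an assumption, not taken from the question or the official sample), the extra file can be pushed into the publish output from the .csproj like this:
<Target Name="AddSettingsToPublish" AfterTargets="ComputeFilesToPublish">
  <ItemGroup>
    <ResolvedFileToPublish Include="settings.production.json">
      <RelativePath>settings.production.json</RelativePath>
    </ResolvedFileToPublish>
  </ItemGroup>
</Target>
After republishing, the file should show up next to settings.json in the publish output (and on the SCM site).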

I am unable to connect to a MongoDB Atlas cluster from Node.js; I get the following "unable to connect DB" error

{ error: 1, message: 'Command failed: mongodump -h cluster0.yckk6.mongodb.net --port=27017 -d databaseName -p -u --gzip --archive=/tmp/file_name_2022-09-19T09-42-05.gz\n' + '2022-09-19T14:42:08.931+0000\tFailed: error connecting to db server: no reachable servers\n' }
Can anyone help me solve this problem? The following is my backup code:
function databaseBackup() {
let backupConfig = {
mongodb: "mongodb+srv://<username>:<password>#cluster0.yckk6.mongodb.net:27017/databaseName?
retryWrites=true&w=majority&authMechanism=SCRAM-SHA-1", // MongoDB Connection URI
s3: {
accessKey: "SDETGGAKIA2GL", //AccessKey
secretKey: "Asad23rdfdg2teE8lOS3JWgdfgfdgfg", //SecretKey
region: "ap-south-1", //S3 Bucket Region
accessPerm: "private", //S3 Bucket Privacy, Since, You'll be storing Database, Private is HIGHLY Recommended
bucketName: "backupDatabase" //Bucket Name
},
keepLocalBackups: false, //If true, It'll create a folder in project root with database's name and store backups in it and if it's false, It'll use temporary directory of OS
noOfLocalBackups: 5, //This will only keep the most recent 5 backups and delete all older backups from local backup directory
timezoneOffset: 300 //Timezone, It is assumed to be in hours if less than 16 and in minutes otherwise
}
MBackup(backupConfig).then(onResolve => {
// When everything was successful
console.log(onResolve);
}).catch(onReject => {
// When Anything goes wrong!
console.log(onReject);
});
}

How to write the correct Jenkins pipeline (Docker, Groovy, node)

I am rewriting my pipeline in the scripted (node) style, and I need to understand how to perform this step in node; right now an error is coming from stage('Deploy'):
node {
    checkout scm
    def customImage = docker.build("python-web-tests:${env.BUILD_ID}")
    customImage.inside {
        sh "python ${env.CMD_PARAMS}"
    }
    stage('Deploy') {
        post {
            always {
                allure([
                    includeProperties: false,
                    jdk: '',
                    properties: [],
                    reportBuildPolicy: 'ALWAYS',
                    results: [[path: 'report']]
                ])
                cleanWs()
            }
        }
    }
and this is the old pipeline
pipeline {
    agent { label "slave_first" }
    stages {
        stage("Creating the container image") {
            steps {
                catchError {
                    script {
                        docker.build("python-web-tests:${env.BUILD_ID}", "-f Dockerfile .")
                    }
                }
            }
        }
        stage("Running and debugging the test") {
            steps {
                sh 'ls'
                sh 'docker run --rm -e REGION=${REGION} -e DATA=${DATA} -e BUILD_DESCRIPTION=${BUILD_URL} -v ${WORKSPACE}:/tmp python-web-tests:${BUILD_ID} /bin/bash -c "python ${CMD_PARAMS} || exit_code=$?; chmod -R 777 /tmp; exit $exit_code"'
            }
        }
    }
    post {
        always {
            allure([
                includeProperties: false,
                jdk: '',
                properties: [],
                reportBuildPolicy: 'ALWAYS',
                results: [[path: 'report']]
            ])
            cleanWs()
        }
    }
}
I tried to carry over the way the Allure report is created, but nothing worked; with the version above almost everything works out. Can I also add environment variables to the build, for example the ones specified with -e DATA=${DATA}, how do I add them?
I don't recommend switching from a declarative to a scripted pipeline.
You lose the ability to use much of the tooling tied to the declarative approach, such as syntax checkers.
If you still want to use the scripted approach, try this:
node('slave_first') {
    stage('Build') {
        checkout scm
        def customImage = docker.build("python-web-tests:${env.BUILD_ID}")
        customImage.inside {
            sh "python ${env.CMD_PARAMS}"
        }
    }
    stage('Deploy') {
        allure([
            includeProperties: false,
            jdk: '',
            properties: [],
            reportBuildPolicy: 'ALWAYS',
            results: [[path: 'report']]])
        cleanWs()
    }
}
There are no post and always directives in scripted pipelines. It's on you to catch all exceptions and set the status of the job. I guess you were using this page: https://www.jenkins.io/doc/book/pipeline/syntax/, but that's a mistake.
That page only covers the declarative approach, and only in a few places does it show scripted code in examples.
Also, I don't know if you have a default agent label set in your Jenkins config, but looking at your declarative pipeline I think you missed the 'slave_first' argument in the node block.

"for example the ones specified with -e DATA=${DATA}, how do I add them?"

That's a Docker question, not a Jenkins one. If you want to launch a Docker image and then also have access to reports located in that container, you should mount the workspace (or the file) where those output files land. You should also pass the location of those files to Allure.
I suggest you try this (a sketch follows below):
- mount some subfolder of the workspace into the Docker container
- cat the test report file to check it is visible
- add the Allure report step, passing this file location to it
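For example, a rough sketch in the scripted style above (the report path and the DATA/REGION variables are illustrations of the idea, not tested values); the extra argument to inside() is passed straight to docker run, so environment variables and volume mounts go there:
node('slave_first') {
    stage('Build') {
        checkout scm
        def customImage = docker.build("python-web-tests:${env.BUILD_ID}")
        // Pass environment variables and mount a workspace subfolder for the report
        customImage.inside("-e DATA=${env.DATA} -e REGION=${env.REGION} -v ${env.WORKSPACE}/report:/tmp/report") {
            sh "python ${env.CMD_PARAMS}"
            sh "ls /tmp/report" // check that the report files are actually visible
        }
    }
    stage('Deploy') {
        // Allure reads the results from the mounted workspace subfolder
        allure([
            includeProperties: false,
            jdk: '',
            properties: [],
            reportBuildPolicy: 'ALWAYS',
            results: [[path: 'report']]])
        cleanWs()
    }
}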

Azure container fails to configure and then 'terminated'

I have a Docker container with an ASP.NET (.NET 4.7) web application. The Docker image works perfectly using our local docker deployment, but will not start on Azure and I cannot find any information or diagnostics on why that might be.
From the log stream I get
31/05/2019 11:05:34.487 INFO - Site: ip-app-develop-1 - Creating container for image: 3tcsoftwaredockerdevelop.azurecr.io/irs-plus-app:latest-develop.
31/05/2019 11:05:34.516 INFO - Site: ip-app-develop-1 - Create container for image: 3tcsoftwaredockerdevelop.azurecr.io/irs-plus-app:latest-develop succeeded. Container Id 1ea16ee9f5f128f14246fefcd936705bb8a655dc6cdbce184fb11970ef7b1cc9
31/05/2019 11:05:40.151 INFO - Site: ip-app-develop-1 - Start container succeeded. Container: 1ea16ee9f5f128f14246fefcd936705bb8a655dc6cdbce184fb11970ef7b1cc9
31/05/2019 11:05:43.745 INFO - Site: ip-app-develop-1 - Application Logging (Filesystem): On
31/05/2019 11:05:44.919 INFO - Site: ip-app-develop-1 - Container ready
31/05/2019 11:05:44.919 INFO - Site: ip-app-develop-1 - Configuring container
31/05/2019 11:05:57.448 ERROR - Site: ip-app-develop-1 - Error configuring container
31/05/2019 11:06:02.455 INFO - Site: ip-app-develop-1 - Container has exited
31/05/2019 11:06:02.456 ERROR - Site: ip-app-develop-1 - Container customization failed
31/05/2019 11:06:02.470 INFO - Site: ip-app-develop-1 - Purging pending logs after stopping container
31/05/2019 11:06:02.456 INFO - Site: ip-app-develop-1 - Attempting to stop container: 1ea16ee9f5f128f14246fefcd936705bb8a655dc6cdbce184fb11970ef7b1cc9
31/05/2019 11:06:02.470 INFO - Site: ip-app-develop-1 - Container stopped successfully. Container Id: 1ea16ee9f5f128f14246fefcd936705bb8a655dc6cdbce184fb11970ef7b1cc9
31/05/2019 11:06:02.484 INFO - Site: ip-app-develop-1 - Purging after container failed to start
After several restart attempts (manual or as a result of re-configuration) I will simply get:
2019-05-31T10:33:46 The application was terminated.
The application then refuses to even attempt to start regardless of whether I use the az cli or the portal.
My current logging configuration is:
{
    "applicationLogs": {
        "azureBlobStorage": {
            "level": "Off",
            "retentionInDays": null,
            "sasUrl": null
        },
        "azureTableStorage": {
            "level": "Off",
            "sasUrl": null
        },
        "fileSystem": {
            "level": "Verbose"
        }
    },
    "detailedErrorMessages": {
        "enabled": true
    },
    "failedRequestsTracing": {
        "enabled": false
    },
    "httpLogs": {
        "azureBlobStorage": {
            "enabled": false,
            "retentionInDays": 2,
            "sasUrl": null
        },
        "fileSystem": {
            "enabled": true,
            "retentionInDays": 2,
            "retentionInMb": 35
        }
    },
    "id": "/subscriptions/XXX/resourceGroups/XXX/providers/Microsoft.Web/sites/XXX/config/logs",
    "kind": null,
    "location": "North Europe",
    "name": "logs",
    "resourceGroup": "XXX",
    "type": "Microsoft.Web/sites/config"
}
Further info on the app:
- deployed using a docker container
- docker base image mcr.microsoft.com/dotnet/framework/aspnet:4.7.2
- image entrypoint c:\ServiceMonitor.exe w3svc
- app developed in ASP.NET 4.7
- using IIS as a web server
Questions:
How can I get some diagnostics on what is going on to enable me to determine why the app is not starting?
Why does the app refuse to even attempt to restart after a few failed attempts?
We have had the same issue. In the end we saw that App Service mounts a directory with a "specially cooked" version of ServiceMonitor.exe; this version reads the events from the App Service back end. If you change your Docker image to use this version of ServiceMonitor, it will work. We created a small PowerShell script and changed the entrypoint from this:
#WORKDIR /LogMonitor
SHELL ["C:\\LogMonitor\\LogMonitor.exe", "powershell.exe"]
# Start IIS Remote Management and monitor IIS
ENTRYPOINT Start-Service WMSVC; C:/ServiceMonitor.exe w3svc;
to this:
ENTRYPOINT ["C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe","-File","C:\\start-iis-environment.ps1"]
and we created the PowerShell script like this:
if (Test-Path -Path 'C:\AppService\Util\ServiceMonitor.exe' -PathType Leaf) {
    & C:\AppService\Util\ServiceMonitor.exe w3svc
}
else {
    & C:\ServiceMonitor.exe w3svc
}
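For completeness, the script also has to be present in the image at the path the ENTRYPOINT points to; assuming it sits next to the Dockerfile, that is one extra line (an illustrative addition, not part of the original answer):
# Copy the wrapper script into the image so the ENTRYPOINT can find it
COPY start-iis-environment.ps1 C:/start-iis-environment.ps1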
