How to pause Azure pipeline execution for several seconds?

I am using a MariaDB Docker image for integration tests. I start it in an Azure pipeline via the following commands:
docker pull <some_azure_repository>/databasedump:<tag_number>
docker run -d --publish 3306:3306 <some_azure_repository>/databasedump:<tag_number>
After that, integration tests written in Python are started.
But when the code tries to connect to the MariaDB database, a MySQL error is returned:
2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0
Maybe the reason is that the MariaDB database is big and needs some seconds to start.
So my question is whether there is a way to sleep for several seconds in a pipeline execution, in a script or bash section.

You can build a delay step into your pipeline's YAML file between the setup of your Docker image and your test execution.
# Delay v1
# Delay further execution of a workflow by a fixed time.
- task: Delay@1
  inputs:
    delayForMinutes: '0' # string. Required. Delay Time (minutes). Default: 0.
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/reference/delay-v1?view=azure-pipelines
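A fixed delay can still be flaky if the database occasionally takes longer to start. Since the tests are in Python anyway, an alternative is to poll until the published port actually accepts connections before running them. A minimal sketch, assuming the container publishes MariaDB on localhost:3306 as in the docker run command above:

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 60.0) -> bool:
    """Poll until a TCP port accepts connections, or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful TCP connect means the server is listening.
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(1)  # not up yet; retry shortly
    return False

# e.g. before the test suite:
# assert wait_for_port("127.0.0.1", 3306), "MariaDB never became reachable"
```

Note that a successful TCP connect only proves the server is listening, not that the dump has finished loading; for a large dump you may still want a short extra delay or a real login attempt on top of this.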


prefect.io kubernetes agent and task execution

While reading the Kubernetes agent documentation, I got confused by the line below:
"Configure a flow-run to run as a Kubernetes Job."
Does it mean that the process in charge of submitting the flow and communicating with the API server will run as a Kubernetes job?
On the other side, the use case I am trying to solve is:
1. Set up the backend server
2. Execute a flow composed of 2 tasks
3. If k8s infra is available, the tasks should be executed as Kubernetes jobs
4. If Docker-only infra is available, the tasks should be executed as Docker containers
Can somebody suggest how to solve the above scenario in prefect.io?
That's exactly right. When you use KubernetesAgent, Prefect deploys your flow runs as Kubernetes jobs.
For #1 - you can do that in your agent YAML file as follows:
env:
  - name: PREFECT__CLOUD__AGENT__AUTH_TOKEN
    value: ''
  - name: PREFECT__CLOUD__API
    value: "http://some_ip:4200/graphql" # paste your GraphQL Server endpoint here
  - name: PREFECT__BACKEND
    value: server
#2 - write your flow
#3 and #4 - this is more challenging to do in Prefect, as there is currently no load balancing mechanism aware of your infrastructure. There are some hacky solutions that you may try, but there is no first-class way to handle this in Prefect.
One hack would be: you build a parent flow that checks your infrastructure resources and, depending on the outcome, spins up your flow run with either a DockerRun or a KubernetesRun run config.
from prefect import Flow, task, case
from prefect.tasks.prefect import create_flow_run, wait_for_flow_run
from prefect.run_configs import DockerRun, KubernetesRun

@task
def check_the_infrastructure():
    return "kubernetes"

with Flow("parent_flow") as flow:
    infra = check_the_infrastructure()
    with case(infra, "kubernetes"):
        child_flow_run_id = create_flow_run(
            flow_name="child_flow_name", run_config=KubernetesRun()
        )
        k8_child_flowrunview = wait_for_flow_run(
            child_flow_run_id, raise_final_state=True, stream_logs=True
        )
    with case(infra, "docker"):
        child_flow_run_id = create_flow_run(
            flow_name="child_flow_name", run_config=DockerRun()
        )
        docker_child_flowrunview = wait_for_flow_run(
            child_flow_run_id, raise_final_state=True, stream_logs=True
        )
But note that this would require you to have 2 agents running at all times: a Kubernetes agent and a Docker agent.
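The check_the_infrastructure task in that example is just a stub returning a constant. One possible, entirely hypothetical implementation would be to treat a cluster reachable via kubectl as the signal to prefer Kubernetes, and fall back to Docker otherwise:

```python
import shutil
import subprocess

def check_the_infrastructure() -> str:
    """Return "kubernetes" if kubectl can reach a cluster, else "docker"."""
    if shutil.which("kubectl"):
        try:
            # cluster-info exits non-zero when no cluster is reachable.
            probe = subprocess.run(
                ["kubectl", "cluster-info"],
                capture_output=True,
                timeout=10,
            )
            if probe.returncode == 0:
                return "kubernetes"
        except (OSError, subprocess.SubprocessError):
            pass  # treat any kubectl failure as "no usable cluster"
    return "docker"
```

In the parent flow, this function would be wrapped with Prefect's @task decorator in place of the stub; the kubectl probe is only an assumption about how you detect your infrastructure, not something Prefect provides.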

sam local invoke timeout on newly created project (created via sam init)

I create a new project via sam init and I select the options:
1 - AWS Quick Start Templates
1 - nodejs14.x
8 - Quick Start: Web Backend
Then from inside the project root, I run sam local invoke -e ./events/event-get-all-items.json getAllItemsFunction, which returns:
Invoking src/handlers/get-all-items.getAllItemsHandler (nodejs14.x)
Skip pulling image and use local one: public.ecr.aws/sam/emulation-nodejs14.x:rapid-1.32.0.
Mounting /home/rob/code/sam-app-2/.aws-sam/build/getAllItemsFunction as /var/task:ro,delegated inside runtime container
Function 'getAllItemsFunction' timed out after 100 seconds
No response from invoke container for getAllItemsFunction
Any idea what could be going on or how to debug this? Thanks.
Any chance the image/lambda makes a call to a database someplace? And does the container running the lambda have the right connection string and/or access? To me it sounds like your function is getting called, and then the function is trying to reach something it can't reach.
As far as debugging goes - lots of console.log() statements to narrow down how far your code gets before it runs into trouble.

Azure Datafactory Pipeline Failed inside a scheduled trigger

I have created 2 pipelines in Azure Data Factory. We have a custom activity created to run a Python script inside the pipeline. When a pipeline is executed manually, it runs successfully any number of times. But I have created a scheduled trigger with an interval of 15 minutes in order to run the 2 pipelines. The first execution runs successfully, but in the next interval I get the error "Operation on target PyScript failed: Hit unexpected exception and execution failed." We are blocked with this; any input would be really helpful.
From the ADF troubleshooting guide:
Custom Activity:
The following table applies to Azure Batch.
Error code: 2500
Message: Hit unexpected exception and execution failed.
Cause: Can't launch command, or the program returned an error code.
Recommendation: Ensure that the executable file exists. If the program started, make sure stdout.txt and stderr.txt were uploaded to the storage account. It's a good practice to emit copious logs in your code for debugging.
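Following that recommendation, a minimal sketch of a custom-activity script that logs copiously to stdout (which Azure Batch uploads as stdout.txt, with unhandled stderr output landing in stderr.txt) might look like this; the actual work is a placeholder:

```python
import logging
import sys

# Send all log lines to stdout so Azure Batch captures them in stdout.txt.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def main() -> int:
    logging.info("custom activity starting")
    try:
        # ... the actual Python work goes here ...
        logging.info("custom activity finished")
        return 0
    except Exception:
        # Full traceback in the logs makes the 2500 error diagnosable.
        logging.exception("custom activity failed")
        return 1

exit_code = main()  # pass to sys.exit(exit_code) in a real script
```

A non-zero exit code is what surfaces in ADF as "the program returned an error code", so returning it explicitly keeps the pipeline failure tied to the logged traceback.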
Related helpful doc: Tutorial: Run Python scripts through Azure Data Factory using Azure Batch
Hope this helps.
If you are still blocked, please share failed pipeline run ID & failed activity run ID, for further analysis.

Openshift 3 App Deployment Failed: Took longer than 600 seconds to become ready

I have a problem with my openshift 3 setup, based on Node.js + MongoDB (Persistent) https://github.com/openshift/nodejs-ex.git
Latest App Deployment: nodejs-mongo-persistent-7: Failed
--> Scaling nodejs-mongo-persistent-7 to 1
--> Waiting up to 10m0s for pods in rc nodejs-mongo-persistent-7 to become ready
error: update acceptor rejected nodejs-mongo-persistent-7: pods for rc "nodejs-mongo-persistent-7" took longer than 600 seconds to become ready
Latest Build: Complete
Pushing image 172.30.254.23:5000/husk/nodejs-mongo-persistent:latest ...
Pushed 5/6 layers, 84% complete
Pushed 6/6 layers, 100% complete
Push successful
I have no idea how to debug this. Can you help, please?
Check what went wrong in the console: oc get events
Failed to pull the image? Make sure you included a proper secret.

How can I know how long a Jenkins job has been in the wait queue after the job is finished?

For statistics, I want to see how long a job sits in the waiting queue, so that I can tune the system to make jobs run on time.
If the job is still in the queue, it can be found in the waiting queue on the front page; see How can I tell how long a Jenkins job has been in the wait queue?
Or via http://<jenkins_url>/queue/api/json?pretty=true
Is it possible to check somewhere to get the "time waiting in queue" for a specific job after the job has finished?
It would be nice if it could be obtained via the public Jenkins API.
// got answer from colleague
This can be achieved by installing the Jenkins Metrics Plugin; once it is installed, the timing data is shown on the build result page.
Jenkins REST API: you can get the wait time in queue from http://localhost:8080/job/demo/1/api/json?pretty=true&depth=2 . queuingDurationMillis is the data I wanted:
"actions" : [
{
"queuingDurationMillis" : 33,
"totalDurationMillis" : 3067
}
],
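If you want to consume that endpoint programmatically, a small sketch is below; the URL uses the demo job and build number from above, and depth=2 is needed so the metrics action is expanded in the response:

```python
import json
from typing import Optional
from urllib.request import urlopen

# Demo values from above; adjust host, job name and build number to yours.
BUILD_API = "http://localhost:8080/job/demo/1/api/json?pretty=true&depth=2"

def fetch_build(url: str) -> dict:
    """Download one build's JSON description from the Jenkins REST API."""
    with urlopen(url) as resp:
        return json.load(resp)

def queuing_millis(build: dict) -> Optional[int]:
    """Return queuingDurationMillis from the build's 'actions', if present."""
    for action in build.get("actions", []):
        if "queuingDurationMillis" in action:
            return action["queuingDurationMillis"]
    return None

# usage: queuing_millis(fetch_build(BUILD_API))
```

The scan over "actions" is deliberate: the list mixes entries from several plugins, and only the Metrics Plugin entry carries queuingDurationMillis.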
Groovy script: we can also get this data from internal data via Groovy; run the code below in the Jenkins script console at http://localhost:8080/script
job = hudson.model.Hudson.instance.getItem("demo")
build = job.getLastBuild()
action = build.getAction(jenkins.metrics.impl.TimeInQueueAction.class)
println action.getQueuingDurationMillis()
You can see a demo using Docker by running the command below and opening the demo job in a browser:
docker run -it -p 8080:8080 larrycai/jenkins-metrics
