I have a Selenium script that gets triggered in an Azure pipeline to test whether some web pages are working. The script runs every hour, but the weird thing is that it randomly fails, at least twice a day, because it doesn't find an element. I believe this might happen because the pipeline agent is not fast enough to load the pages.
So I was wondering if there is a way to solve this issue, as for now, when the script fails, it returns a false positive, and I would like to avoid this.
Thank you so much for any help or advice you can offer.
To wait until the page is fully loaded, use one of Selenium's explicit waits; you can check a similar ticket for the details.
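As a minimal sketch in Python, an explicit wait polls for the element instead of failing the moment it is missing (the URL and locator below are placeholders):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

# Poll for up to 30 seconds; raises TimeoutException only if the
# element never appears, instead of failing on a slow page load.
element = WebDriverWait(driver, 30).until(
    EC.presence_of_element_located((By.ID, "my-element"))  # placeholder locator
)
```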
In addition, to make the Azure DevOps pipeline more stable, you can set up a self-hosted agent for the Selenium test.
I have a simple pipeline in ADF that is triggered by a Logic App every time someone submits a file as a response in a Microsoft Form. The pipeline creates a cluster based on a Docker image and then uses a Databricks notebook to run some calculations that can take several minutes.
The problem is that every time the pipeline is running and someone submits a new response to the form, it triggers another pipeline run that, for some reason, makes the previous runs fail.
The last pipeline run will always work fine, but earlier runs will show this error:
> Operation on target "notebook" failed: Cluster 0202-171614-fxvtfurn does not exist
However, checking the parameters of the last pipeline run, it uses a different cluster id, 0202-171917-e616dsng for example.
It seems that for some reason the compute resources for the first run are reallocated to the new pipeline run, even though the cluster IDs are different.
I have set the concurrency up to 5 in the pipeline's general settings tab, but I am still getting the same error.
Concurrency setup screenshot
Also, in the first connector, which looks up the Docker image files, I have the concurrency set to 15, but this doesn't fix the issue either.
Lookup concurrency screenshot
To me this seems like a very simple and common task when it comes to automation and data workflows, but I cannot figure it out.
I really appreciate any help and suggestions. Thanks in advance.
The best way would be to use an existing pool rather than recreating the pool every time.
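As a minimal sketch, assuming you go the Databricks instance-pool route: create the pool once via the Instance Pools REST API (the workspace URL, token, and pool settings below are placeholders), then reference the returned instance_pool_id from the ADF Databricks linked service, so each run attaches to warm instances instead of provisioning a fresh cluster that a concurrent run can disturb:

```python
import requests

# Placeholders -- substitute your own workspace URL and personal access token.
DATABRICKS_HOST = "https://adb-1234567890123456.7.azuredatabricks.net"
TOKEN = "<personal-access-token>"

# Create a reusable instance pool once; subsequent pipeline runs attach
# job clusters to this pool instead of provisioning new VMs each time.
resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.0/instance-pools/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "instance_pool_name": "adf-shared-pool",  # hypothetical name
        "node_type_id": "Standard_DS3_v2",
        "min_idle_instances": 1,
        "idle_instance_autotermination_minutes": 30,
    },
)
resp.raise_for_status()
pool_id = resp.json()["instance_pool_id"]
print(pool_id)  # reference this id from the ADF Databricks linked service
```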
I have a VM that I would like to shut down/power off at a certain time and then restart at a certain time. I have tried this in Task Scheduler, and obviously I can shut down at a given time, but I can't then set the restart time.
I would like the VM to shut down at 10pm and restart at 5am, then run a Task Scheduler task I have that restarts key services (that side of it works).
I have played around with automation tasks within Azure but have run into a variety of RMLogin issues.
I just want the simplest way to schedule this.
There is no auto-startup as far as I'm aware, so you'd have to use some sort of automation. There is an official solution from Microsoft, which is somewhat overkill but should work (never tried it, to be honest). There are various other scripts online that work with Azure Automation; they are easily searchable (like so).
If you go to my blog you can also find an example script that does the same, and an example of a runbook that you can trigger manually to start/stop VMs.
I would assume you have already gone through the suggestion mentioned below: the Azure Automation solution at https://learn.microsoft.com/en-us/azure/automation/automation-solution-vm-management is the way to achieve this. You can configure auto-shutdown via the portal, but not restart and start.
Please check this link, which describes the Start and Shut down Role operations of the VM through the REST API. You can wire up the endpoint with Azure Functions, Puppet, or Chef to automate this process:
VM - Start/Shut down Role(s): https://learn.microsoft.com/en-us/previous-versions/azure/reference/jj157189(v=azure.100)
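If you'd rather not call the REST API directly, here is a minimal sketch of the same two operations using the Azure SDK for Python (the subscription, resource group, and VM names are placeholders); each half could run on its own schedule, e.g. from a timer-triggered Azure Function:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholders -- substitute your own identifiers.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "my-rg"
VM_NAME = "my-vm"

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# 10pm job: deallocate releases the compute so you stop paying for it.
client.virtual_machines.begin_deallocate(RESOURCE_GROUP, VM_NAME).result()

# 5am job: start the VM again; .result() blocks until the operation completes.
client.virtual_machines.begin_start(RESOURCE_GROUP, VM_NAME).result()
```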
If nothing here works for you, I would suggest leaving your feedback.
So, to answer your question simply: no, there is not a simpler way to achieve this.
If you want, you can add your feedback for this feature suggestion here:
https://feedback.azure.com/forums/34192--general-feedback
I have a published AzureML experiment, and now I want to schedule that experiment to run periodically while also giving the admin the flexibility to run it whenever he wants.
I tried running the experiment periodically using the Azure Logic Apps service, but I am getting the error "This session has timed out. To see the latest run status, navigate to the runs history blade." Can anyone help me out?
I'm looking for what I would assume is quite a standard solution: I have a Node app that doesn't do any web work; it simply runs, outputs to a console, and ends. I want to host it, preferably on Azure, and have it run once a day, ideally also logging its output or sending the output to me.
The only solution I can find is to create a VM on Azure and set up a cron job; then I need to either fetch the debug logs daily or write Node code to email me the output. Is there anything more efficient available?
Azure Functions would be worth investigating. It can be timer-triggered and would avoid the overhead of a VM.
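Functions has a Node runtime too, but just to sketch the shape of a timer trigger, here is a minimal example using the Python v2 programming model (the schedule and function name are assumptions; adjust both):

```python
import logging
import azure.functions as func

app = func.FunctionApp()

# NCRONTAB schedule: 06:00 UTC every day (placeholder -- pick your own time).
@app.timer_trigger(schedule="0 0 6 * * *", arg_name="timer")
def daily_job(timer: func.TimerRequest) -> None:
    # The work your console app used to do goes here; anything logged
    # shows up in the function's log stream / Application Insights.
    logging.info("Daily job ran.")
```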
I would also investigate Azure Container Instances; they are a good match for this use case. You can build a container image with your Node app and run it on an ACI instance: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-tutorial-deploy-app
Is there any existing tooling/platform I can use to do the following?
On any GitHub PR or commit, run a custom "check", e.g. the same as how Travis CI works.
Have this task talk to a remote machine on Azure.
Execute a script on this machine and collect logs/exit code.
Fail the check if the exit code is non-zero or a timeout is reached.
Handle queuing if two PRs come in, clean up on abort, etc.
Have some sort of "status" badge like Travis CI's to see the current test state/pass rate.
So far only Travis CI itself seems to work something like this, but whatever I execute will run in their cloud, so I don't "own" the machine. Additionally, my integration tests require copyrighted data which needs to be kept safe on my own cloud machine, and they can take multiple hours to complete.
Yes you can. https://help.github.com/articles/about-webhooks/ describes how to do this. Your machine will need to be accessible to GitHub for this to work.
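A minimal sketch of the receiving end, assuming a Flask app running on your Azure machine; the route, secret, and run_tests.sh script are placeholders. Queuing, the status badge, and reporting the result back through GitHub's commit status API are left out:

```python
import hashlib
import hmac
import subprocess

from flask import Flask, abort, request

app = Flask(__name__)
SECRET = b"<webhook-secret>"  # the secret configured on the GitHub webhook

def signature_ok(req) -> bool:
    # GitHub signs each payload with HMAC-SHA256 in X-Hub-Signature-256.
    sent = req.headers.get("X-Hub-Signature-256", "")
    expected = "sha256=" + hmac.new(SECRET, req.data, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sent, expected)

@app.route("/webhook", methods=["POST"])
def webhook():
    if not signature_ok(request):
        abort(401)
    try:
        # run_tests.sh is a placeholder for the actual integration-test script.
        result = subprocess.run(["./run_tests.sh"], timeout=4 * 3600)
        ok = result.returncode == 0
    except subprocess.TimeoutExpired:
        ok = False
    return ("ok" if ok else "failed", 200)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```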