I am trying to create an Azure Batch service. While creating a pool, I am specifying a start task that should run when the VMs are spun up for the first time. After the pool is committed, when I observe the progress in the Azure portal, the state of the nodes appears as StartTaskFailed, and I can see the scheduling error inside the start task info. The error info is given below.
CATEGORY - ServerError
CODE - BlobDownloadMiscError
MESSAGE - Miscellaneous error encountered while downloading one of the specified Azure blobs.
The start task here is a simple executable that creates a container and writes a blob.
I have already run the exe standalone on my machine, and it performs the operation as expected.
But when I run the same thing as a start task, I get the aforementioned error.
P.S. I have already verified that all the paths and the required dependencies (DLLs) are uploaded to blob storage.
Please help me identify the root cause of the problem. Even a more descriptive error message would be of great help.
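For reference, the pool and start-task setup looks roughly like this (a minimal sketch using the Microsoft.Azure.Batch .NET SDK; the account details, container, and executable names are placeholders, and exact type names vary by SDK version -- current versions build resource files with ResourceFile.FromUrl instead of the constructor):

    using System.Collections.Generic;
    using Microsoft.Azure.Batch;
    using Microsoft.Azure.Batch.Auth;

    // Placeholder credentials -- substitute your own Batch account values.
    var credentials = new BatchSharedKeyCredentials(
        "https://<account>.<region>.batch.azure.com", "<account>", "<key>");

    using (BatchClient batchClient = BatchClient.Open(credentials))
    {
        CloudPool pool = batchClient.PoolOperations.CreatePool(
            poolId: "demo-pool",
            virtualMachineSize: "small",
            cloudServiceConfiguration: new CloudServiceConfiguration(osFamily: "4"));

        pool.StartTask = new StartTask
        {
            CommandLine = "cmd /c BlobWriter.exe",
            ResourceFiles = new List<ResourceFile>
            {
                // Every file the exe needs (including dependent DLLs) must be
                // listed here; a URL that points at the wrong container fails
                // the node with BlobDownloadMiscError before the command runs.
                new ResourceFile(
                    "https://<storage>.blob.core.windows.net/<container>/BlobWriter.exe?<sas>",
                    "BlobWriter.exe")
            },
            WaitForSuccess = true
        };

        pool.Commit();
    }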
I was able to solve this issue. I had provided the wrong container name, so the start task failed because it could not locate the files at the given location. However, the start task did not give me any meaningful error, neither in the portal nor in code. To rectify this, I ran the same executable as a task inside a job, where the error was correctly reported in the ExecutionInformation.SchedulingError property of the CloudTask.
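If it helps anyone else, this is roughly how I read the error back (a sketch against the same .NET SDK; note that newer SDK versions replace SchedulingError with FailureInformation, and the job and task IDs below are placeholders):

    using System;
    using Microsoft.Azure.Batch;

    // Assumes 'batchClient' is an open BatchClient and that the same exe was
    // submitted as task "diag-task" in job "diag-job".
    CloudTask task = batchClient.JobOperations.GetTask("diag-job", "diag-task");

    // SchedulingError is populated when the service could not even launch the
    // task -- for example, when a resource file failed to download.
    var error = task.ExecutionInformation?.SchedulingError;
    if (error != null)
    {
        Console.WriteLine($"Category: {error.Category}");
        Console.WriteLine($"Code:     {error.Code}");
        Console.WriteLine($"Message:  {error.Message}");
    }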
Related
When I try to turn on debug mode for data flows, I immediately get this response.
I have tried creating a new integration runtime, and it still fails instantly. I have also tried this on two different accounts. Is there an error log somewhere where I can see why it failed?
We have a release pipeline in Azure DevOps that deploys a database project to our Azure SQL Database via the Azure SQL Dacpac task. Everything had been working fine, but yesterday the pipeline suddenly started failing with the following error:
##[error]*** An error occurred during deployment plan generation. Deployment cannot continue.
##[error]Error SQL72018: Permission could not be imported but one or more of these objects exist in your source.
As far as I know, nothing has changed on the database side or in the pipeline. We also ruled out an issue with the specific dacpac file, because previously successful releases now fail with the same error.
I searched extensively for the SQL72018 error but didn't really find any answers as to what could be causing it, so I am wondering whether there was some Azure DevOps task update or something else we could be missing.
I'm not sure what would have caused this to break out of nowhere like that.
It does work if we add the /p:IgnorePermissions=True parameter to the task, but we have never needed that before.
UPDATE:
Wanted to update this, as I was able to gather a little more information by adding the /Diagnostics:True parameter to the pipeline task in order to print diagnostic info from SqlPackage.
When I added that, I also saw this error:
Microsoft.Data.Tools.Diagnostics.Tracer Error: 1 : 2022-04-05T08:38:37 : Error detected when reverse engineering the database. Severity:'Warning' Prefix:'' Error Code:'0' Message:The permission 'VDP ' was not recognized and was not imported. If this problem persists, contact customer support.
So it looks like some "VDP" permission is causing the issue, but we don't know what that permission is for or where it came from, as it's not in the database project.
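For reference, this is roughly how the flag is passed in our pipeline (a sketch only; we use the classic Azure SQL Dacpac task, and the SqlAzureDacpacDeployment input names and placeholder values below are assumptions, not our exact configuration):

    - task: SqlAzureDacpacDeployment@1
      inputs:
        azureSubscription: '<service-connection>'        # placeholder
        ServerName: '<server>.database.windows.net'      # placeholder
        DatabaseName: '<database>'                       # placeholder
        DacpacFile: '$(Pipeline.Workspace)/drop/MyDb.dacpac'
        # Prints SqlPackage diagnostic tracing, which surfaced the VDP message:
        AdditionalArguments: '/Diagnostics:True'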
We finally got to the bottom of this. It turns out a new permission had been added to the database the day before the pipeline started to fail. The permission that caused the issue was VIEW DATABASE PERFORMANCE STATE; that was the "VDP" permission SqlPackage.exe was complaining about.
We are unsure why that particular permission caused the error, since we manage all of our database permissions outside of the database project, and no other permission had caused issues prior to this one.
Because we manage permissions outside of our database project, the resolution was to permanently add the /p:IgnorePermissions=True SqlPackage parameter to the pipeline. This was confirmed as the appropriate solution by a Microsoft representative after we opened a support ticket.
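For anyone reproducing this outside the pipeline, the equivalent SqlPackage publish invocation looks something like this (a sketch; the file, server, and database names are placeholders):

    SqlPackage.exe /Action:Publish ^
      /SourceFile:"MyDb.dacpac" ^
      /TargetServerName:"<server>.database.windows.net" ^
      /TargetDatabaseName:"<database>" ^
      /p:IgnorePermissions=True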
It seems you have a spurious/orphan permission in your target database, as mentioned in this post: How could our database be causing SqlPackage to fail? (SQL72018).
We are trying to deploy files to a remote server using the Copy Files Over SSH (CopyFilesOverSSH) task in an Azure DevOps build pipeline, but we are getting the error "##[error]Unhandled: handle is not a Buffer". The file does get uploaded to the remote server, but with zero bytes.
We don't know why we are getting this error. Is there a permission issue on the agent job?
We would appreciate your help.
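For context, the task configuration looks roughly like this (a sketch; the CopyFilesOverSSH input names are taken from the task documentation, and the endpoint and paths are placeholders):

    - task: CopyFilesOverSSH@0
      inputs:
        sshEndpoint: '<ssh-service-connection>'          # placeholder
        sourceFolder: '$(Build.ArtifactStagingDirectory)'
        contents: '**'
        targetFolder: '/home/deploy/app'                 # placeholder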
I received the same error message from a consistently smooth-running pipeline as well. It turned out to be a disk-space issue for us.
Please check whether the folder has been created on the target machine and what permissions it has; the issue may be related to folder permissions. See the similar issue below:
https://github.com/Microsoft/azure-pipelines-tasks/issues/3190
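If it helps, these are quick checks to run on the target machine for the two causes suggested above (the folder path is a placeholder):

    # A full disk can leave zero-byte files behind.
    df -h

    # Confirm the target folder exists and the SSH user can write to it.
    ls -ld /home/deploy/app
    touch /home/deploy/app/.write-test && rm /home/deploy/app/.write-test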
I've just started working with AWS CodeDeploy.
My first few deployments have failed, which is fine. With new tools comes new learning, and I expected to have to iterate a bit initially. Each of my first few deployments has failed in a useful way.
In the AWS Console I see something like this:
Here I can see some useful details. I can click the View Events link to see even more details, and from there I can view logs on the target EC2 instance.
In contrast, my most recent failed deployment shows this:
As you can see, this is missing much of the detail from the previous screenshot. The missing View Events link is particularly unfortunate. It might be significant that this deployment took longer to fail, though not long enough for one of my hook scripts to have reached its timeout.
Re-deploying resulted in the same thing.
How should I go about troubleshooting this?
After trying this one more time while keeping an eye on /var/log/aws/codedeploy-agent/codedeploy-agent.log, I realized that no new log activity was being generated.
Restarting the agent with sudo /etc/init.d/codedeploy-agent restart and deploying again produced the output I expected.
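For anyone else hitting this, the full sequence looked like the following (the init.d commands match my instance; on systemd-based distros you would use systemctl instead):

    # Watch the agent log during a deployment; total silence here points at
    # the agent itself rather than a failing hook script.
    tail -f /var/log/aws/codedeploy-agent/codedeploy-agent.log

    # Restart the agent, confirm it is running, then re-trigger the deployment.
    sudo /etc/init.d/codedeploy-agent restart
    sudo /etc/init.d/codedeploy-agent status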
When I try to test my Azure ML model, I get the following error: “Error code: InternalError, Http status code: 500”, so it appears something is failing inside the machine learning service. How do I get around this error?
I've run into this error before, and unfortunately the only workaround I found was to create a new ML workspace backed by a storage account that you know is online. Then copy your experiment over to the new workspace, and things should work. It can be a bit cumbersome, but it should get rid of the error message. With the service being relatively new, things sometimes get corrupted as updates are rolled out, so I recommend checking the box labeled "disable updates" within your experiment. Hope that helps!