Azure File Copy failing on second run in build pipeline

I am using the "Azure File Copy" task in Azure DevOps, which, as far as I can see, uses an AzCopy command to copy a file into Azure Storage.
Here's my task definition for this (screenshot omitted). Note: this is v3 of the task.
This works fine on the first run of the task within a build pipeline, and creates the file in the container as expected.
When I run the task on subsequent pipeline runs, it fails. From the error, it seems to be prompting for an overwrite choice - Yes/No/All. See the full error below.
My Question:
Does anyone know how I can give the task arguments that will tell it to force an overwrite each time? The documentation for this on the Microsoft website isn't great, and I can't find an example in the GitHub repo.
Thanks in advance for any pointers!
Full Error:
& "AzCopy\AzCopy.exe" /Source:"D:\a\1\s\TestResults\Coverage\Reports" /Dest:"https://project1.blob.core.windows.net/examplecontainer" /#:"D:\a\_temp\36c17ff3-27da-46a2-95d7-7f3a01eab368" /SetContentType:image/png /Pattern:"Example.png"
[2020/04/18 21:29:18][ERROR] D:\a\1\s\TestResults\Coverage\Reports\Example.png: No input is received when user needed to make a choice among several given options.
Overwrite https://project1.blob.core.windows.net/examplecontainer/Example.png with D:\a\1\s\TestResults\Coverage\Reports\Example.png? (Yes/No/All) [2020/04/18 21:29:18] Transfer summary:
-----------------
Total files transferred: 1
Transfer successfully: 0
Transfer skipped: 0
Transfer failed: 1
Elapsed time: 00.00:00:01
##[error]Upload to container: 'examplecontainer' in storage account: 'project1' with blob prefix: '' failed with error: 'AzCopy.exe exited with non-zero exit code while uploading files to blob storage.' For more info please refer to https://aka.ms/azurefilecopyreadme

Not so much a solution as a workaround, but I set this to version 1 of the task and it worked for me!
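For what it's worth, AzCopy v7/v8 (the generation this version of the task wraps) has a /Y switch that suppresses all confirmation prompts, so another option may be to pass /Y through the task's "Optional Arguments" input. I haven't verified this against v3 of the task, but the generated command would then look something like:
& "AzCopy\AzCopy.exe" /Source:"D:\a\1\s\TestResults\Coverage\Reports" /Dest:"https://project1.blob.core.windows.net/examplecontainer" /#:"D:\a\_temp\36c17ff3-27da-46a2-95d7-7f3a01eab368" /SetContentType:image/png /Pattern:"Example.png" /Y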

Related

Azure Datafactory Pipeline Failed inside a scheduled trigger

I have created 2 pipelines in Azure Data Factory. We have a custom activity created to run a Python script inside each pipeline. When the pipelines are executed manually, they run successfully any number of times. But I have created a scheduled trigger with an interval of 15 minutes in order to run the 2 pipelines. The first execution runs successfully, but in the next interval I am getting the error "Operation on target PyScript failed: Hit unexpected exception and execution failed." We are blocked with this; any input would be really helpful.
From the ADF troubleshooting guide:
Custom Activity:
The following table applies to Azure Batch.
Error code: 2500
Message: Hit unexpected exception and execution failed.
Cause: Can't launch command, or the program returned an error code.
Recommendation: Ensure that the executable file exists. If the program started, make sure stdout.txt and stderr.txt were uploaded to the storage account. It's a good practice to emit copious logs in your code for debugging.
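Following that recommendation, here is a minimal sketch of what emitting logs from the custom activity's Python script could look like (the structure is illustrative, not your actual script). Azure Batch captures the task's console output into stdout.txt and stderr.txt, so logging to stdout is enough:

import logging
import sys

# Batch uploads the task's stdout/stderr as stdout.txt/stderr.txt,
# so console logging is all that is needed for post-mortem debugging.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

logging.info("PyScript starting")
try:
    # ... the actual work of the custom activity goes here ...
    logging.info("PyScript finished successfully")
except Exception:
    logging.exception("PyScript failed")
    raise  # exit non-zero so the activity is marked as failed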
Related helpful doc: Tutorial: Run Python scripts through Azure Data Factory using Azure Batch
Hope this helps.
If you are still blocked, please share the failed pipeline run ID and failed activity run ID for further analysis.

Azure function upload failing with "A task was canceled"

I am getting the following error when using the following command to upload my function app:
func azure functionapp publish FuncAppName
I ran this from both the parent directory of the function app and the function app directory itself, and got the same error. It looks like some task in the upload times out after a minute or so:
Publish C:\Users\username\Documents\visual studio 2017\Projects\AzureFuncApp contents to an Azure Function App. Locally deleted files are not removed from destination.
Getting site publishing info...
Creating archive for current directory...
Uploading archive...
A task was canceled.
Any idea how to solve this/get more debugging info?
The function in question already exists on Portal and is running. I was previously able to upload it successfully.
Please refer to this GitHub issue:
https://github.com/Azure/azure-functions-cli/issues/147
A change has been made to address this issue and will be included in the next CLI release.
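Until that release ships there is little to do beyond retrying, but afterwards, updating to the latest CLI should pick up the fix. Assuming you installed the tool via npm (the package has since been renamed azure-functions-core-tools), updating looks like:
npm install -g azure-functions-core-tools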

AzCopy blob download throwing errors on local machine

I am running the following command while learning how to use AzCopy.
azcopy /Source:https://storeaccountname.blob.core.windows.net/container /Dest:C:\container\ /SourceKey:Key /Pattern:"tdx" /S /V
Some files are downloaded, but most files result in an error like the following. I have no idea why this is happening and wondered if somebody has encountered this and knows the cause and the fix.
[2016/05/31 21:27:13][ERROR] tdx/logs/site-visit/archive/1463557944558/visit-1463557420000: Failed to open file C:\container\tdx\logs\site-visit\archive\1463557944558\visit-1463557420000: Access to the path 'C:\container\tdx\logs\site-visit\archive\1463557944558\visit-1463557420000' is denied..
My ultimate goal was to create backups of the blobs in a container of one storage account to the container of another storage account, so I am starting out with the basics, which already seem to fail.
Here is a list of folder names from an example path pulled from Azure Portal:
storeaccountname > Blob service > container > app-logs > hdfs > logs
application_1461803569410_0008
application_1461803569410_0009
application_1461803569410_0010
application_1461803569410_0011
application_1461803569410_0025
application_1461803569410_0027
application_1461803569410_0029
application_1461803569410_0031
application_1461803569410_0033
application_1461803569410_0035
application_1461803569410_0037
application_1461803569410_0039
application_1461803569410_0041
application_1461803569410_0043
application_1461803569410_0045
There is an error in the log for each one of these folders that looks like this:
[2016/05/31 21:29:18.830-05:00][VERBOSE] Transfer FAILED: app-logs/hdfs/logs/application_1461803569410_0008 => app-logs\hdfs\logs\application_1461803569410_0008.
[2016/05/31 21:29:18.834-05:00][ERROR] app-logs/hdfs/logs/application_1461803569410_0008: Failed to open file C:\container\app-logs\hdfs\logs\application_1461803569410_0008: Access to the path 'C:\container\app-logs\hdfs\logs\application_1461803569410_0008' is denied..
The folder application_1461803569410_0008 contains two files. Those two files were successfully downloaded. From the logs:
[2016/05/31 21:29:19.041-05:00][VERBOSE] Finished transfer: app-logs/hdfs/logs/application_1461803569410_0008/10.2.0.5_30050 => app-logs\hdfs\logs\application_1461803569410_0008\10.2.0.5_30050
[2016/05/31 21:29:19.084-05:00][VERBOSE] Finished transfer: app-logs/hdfs/logs/application_1461803569410_0008/10.2.0.4_30050 => app-logs\hdfs\logs\application_1461803569410_0008\10.2.0.4_30050
So it appears that the problem is related to copying folders, which are themselves blobs, but I can't be certain yet.
There are several known issues when using AzCopy, such as the one below, which will cause this error:
If there are two blobs named “a” and “a/b” under a storage container, copying the blobs under that container with /S will fail. Windows will not allow the creation of folder name “a” and file name “a” under the same folder.
Refer to https://blogs.msdn.microsoft.com/windowsazurestorage/2012/12/03/azcopy-uploadingdownloading-files-for-windows-azure-blobs/ and scroll down to the bottom for details of the Known Issues.
In my container con2, there is a folder named abc.pdf and also a file named abc.pdf; when executing the AzCopy download command with /S, it prompts an error message.
Please check whether your container has any folders with the same name as a file.
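To check for such collisions programmatically, here is a rough sketch using the current azure-storage-blob Python SDK (the connection string and container name are placeholders); it flags any blob whose full name also occurs as a virtual folder prefix of another blob:

from azure.storage.blob import ContainerClient  # pip install azure-storage-blob

# Placeholders - substitute your own connection string and container name.
container = ContainerClient.from_connection_string("<connection-string>", "con2")

names = [blob.name for blob in container.list_blobs()]

# Collect every virtual folder prefix, e.g. "a" and "a/b" for the blob "a/b/c".
prefixes = set()
for name in names:
    parts = name.split("/")
    for i in range(1, len(parts)):
        prefixes.add("/".join(parts[:i]))

# A blob whose name is also a prefix collides with a folder on download.
for name in names:
    if name in prefixes:
        print(f"Collision: blob '{name}' also exists as a virtual folder")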

How do I get the correct path to a folder of an Azure container?

I'm trying to read files from an Azure storage account. In particular, I'd like to read all files contained in a certain folder, for example:
lines = sc.textFile('/path_to_azure_folder/*')
I am not quite sure what the path should be. I tried the blob service endpoint URL from Azure, followed by the folder path (I tried with both http and https):
lines = sc.textFile('https://container_name.blob.core.windows.net/path_to_folder/*')
and it did not work:
diagnostics: Application XXXXXX failed 5 times due to AM Container for
XXXXXXXX exited with exitCode: 1 Diagnostics: Exception from
container-launch. Container id: XXXXXXXXX Exit code: 1
The URL I provided is the same one I get from the Cyberduck app when I click 'Info'.
Your path should look like this:
lines = sc.textFile("wasb://containerName@storageAccountName.blob.core.windows.net/folder_path/*")
This should solve your issue.
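If the cluster is not already configured for that storage account, you typically also need to supply the account key in the Hadoop configuration before reading. A minimal PySpark sketch, assuming key-based wasb access (the account name and key are placeholders):

# Placeholders - substitute your own storage account name and key.
sc._jsc.hadoopConfiguration().set(
    "fs.azure.account.key.storageAccountName.blob.core.windows.net",
    "<storage-account-key>")

lines = sc.textFile("wasb://containerName@storageAccountName.blob.core.windows.net/folder_path/*")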
If you are trying to read all the blobs in an Azure Storage account, you might want to look into the tools and libraries we offer for retrieving and manipulating your data. Getting started doc here.
Hope this is helpful!

MsDeploy remoting executing manifest twice

I have:
Created a manifest for msdeploy to:
Stop, Uninstall, Copy over, Install, and Start a Windows service.
Created a package from the manifest
Executed msdeploy with the package against a remote server.
Problem: It executes the entire manifest twice.
Tried: I have tinkered with the waitInterval and waitAttempts thinking it was timing out and starting over, but that hasn't helped.
Question: What might be making it execute twice?
The Manifest:
<sitemanifest>
  <runCommand path="net stop TestSvc"
              waitInterval="240000"
              waitAttempts="1" />
  <runCommand path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\installutil.exe /u C:\msdeploy\TestSvc\TestSvc\bin\Debug\TestSvc.exe"
              waitInterval="240000"
              waitAttempts="1" />
  <dirPath path="C:\msdeploy\TestSvc\TestSvc\bin\Debug" />
  <runCommand path="C:\Windows\Microsoft.NET\Framework\v4.0.30319\installutil.exe C:\msdeploy\TestSvc\TestSvc\bin\Debug\TestSvc.exe"
              waitInterval="240000"
              waitAttempts="1" />
  <runCommand path="net start TestSvc"
              waitInterval="240000"
              waitAttempts="1" />
</sitemanifest>
The command issued to package it:
"C:\Program Files\IIS\Microsoft Web Deploy V2\msdeploy"
-verb:sync
-source:manifest=c:\msdeploy\custom.xml
-dest:package=c:\msdeploy\package.zip
The command issued to execute it:
"C:\Program Files\IIS\Microsoft Web Deploy V2\msdeploy"
-verb:sync
-source:package=c:\msdeploy\package.zip
-dest:auto,computername=<computerNameHere>
I am running as a domain user who has administrative access on the box. I have also tried passing credentials - it is not a permissions issue; the commands are succeeding, just executing twice.
Edit:
I enabled -verbose and found some interesting lines in the log:
Verbose: Performing synchronization pass #1.
...
Verbose: Source filePath (C:\msdeploy\MyTestWindowsService\MyTestWindowsService\bin\Debug\MyTestWindowsService.exe) does not match destination (C:\msdeploy\MyTestWindowsService\MyTestWindowsService\bin\Debug\MyTestWindowsService.exe) differing in attributes (lastWriteTime['11/08/2011 23:40:30','11/08/2011 23:39:52']). Update pending.
Verbose: Source filePath (C:\msdeploy\MyTestWindowsService\MyTestWindowsService\bin\Debug\MyTestWindowsService.pdb) does not match destination (C:\msdeploy\MyTestWindowsService\MyTestWindowsService\bin\Debug\MyTestWindowsService.pdb) differing in attributes (lastWriteTime['11/08/2011 23:40:30','11/08/2011 23:39:52']). Update pending.
After these lines, the files aren't copied the first time, but are copied the second time.
...
Verbose: The dependency check 'DependencyCheckInUse' found no issues.
Verbose: Received response from agent (HTTP status 'OK').
Verbose: The current synchronization pass is missing stream content for 2 objects.
Verbose: Performing synchronization pass #2.
...
High Level
Normally I deploy a freshly built package with newer bits than are on the server.
During pass two, it duplicates everything that was done in pass one.
In pass 1, it will:
Stop, Uninstall, (delete some log files created by the service install), Install, and Start a Windows service
In pass 2, it will:
Stop, Uninstall, Copy files over, Install, and Start a Windows service.
I have no idea why it doesn't copy over the files in pass 1, or why pass 2 is triggered.
If I redeploy the same package instead of deploying fresh bits, it will run all the steps in pass 1, and not run pass 2. Probably because the files have the same time stamp.
There is not enough information in the question to really reproduce the problem and give a specific answer... but there are several things to check/change/try to make this work:
runCommand needs specific privileges
waitInterval="240000" and waitAttempt="1" (double quotes instead of single quotes)
permissions for the deployment service / deployment agent regarding directories etc. on the target machine
use tempAgent feature
work through the troubleshooting section esp. the logs and try the -whatif and -verbose options
EDIT - after the addition of the -verbose output:
I see these possibilities:
Time
Both machines have a difference in time (either one of them is just a bit off or some timezone issue...)
Filesystem
If one of the filesystems is FAT this could lead to problems (timestamp resolution...)
EDIT 2 - as per comments:
In my last EDIT I wrote about timestamps because my suspicion is that something goes wrong when these are compared... that can be, for example, differing clocks between both machines (even a difference of 30 sec can have an impact) and/or some timezone issues...
I wrote about the filesystem, esp. FAT, since the timestamp resolution of FAT is about 2 seconds while NTFS has much higher resolution; again, this could have an impact when comparing timestamps...
From what you describe I would suggest the following workarounds:
use preSync and postSync for the Service handling parts (i.e. preSync for stop + uninstall and postSync for install + start) and do only the pure sync in the manifest or commandline
OR
use a script for the runCommand parts
EDIT 3 - as per comment from Merlyn Morgan-Graham the result for future reference:
When using the runCommand provider, use batch files. For some reason this made it stop running two passes.
The problem with this solution is that one can't specify the installation directory of the service via a SetParameters.xml file (same for dontUseCommandExe / preSync / postSync regarding SetParameters.xml).
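For future readers, a minimal sketch of that batch-file approach, reusing the paths from the manifest above (the file name stop-uninstall.cmd is made up):

rem stop-uninstall.cmd - invoked from the manifest via
rem <runCommand path="C:\msdeploy\stop-uninstall.cmd" waitInterval="240000" waitAttempts="1" />
net stop TestSvc
C:\Windows\Microsoft.NET\Framework\v4.0.30319\installutil.exe /u C:\msdeploy\TestSvc\TestSvc\bin\Debug\TestSvc.exe

A matching install/start batch file would replace the last two runCommand entries the same way.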
EDIT 4 - as per comment from Merlyn Morgan-Graham:
The timeout params apply to killing that specific command, not to the stopping of the Windows service itself... in this case it seems that the Windows service takes rather long to stop, so only the runCommands get executed without the copy/sync, and a new try for the whole run is initiated...
I had the same problem, but I don't create a package.zip file; I perform the synchronization directly in one step.
The preSync/postSync solution helped me a lot and there is no need to use manifest files.
You can try the following command in your case:
"C:\Program Files\IIS\Microsoft Web Deploy V2\msdeploy"
-verb:sync
-preSync:runCommand="net stop TestSv && C:\Windows\Microsoft.NET\Framework\v4.0.30319\installutil.exe /u
C:\msdeploy\TestSvc\TestSvc\bin\Debug\TestSvc.exe",waitInterval=240000,waitAttempts=1
-source:dirPath="C:\msdeploy\TestSvc\TestSvc\bin\Debug"
-dest:auto,computername=<computerNameHere>
-postSync:runCommand="C:\Windows\Microsoft.NET\Framework\v4.0.30319\installutil.exe
C:\msdeploy\TestSvc\TestSvc\bin\Debug\TestSvc.exe && net start TestSvc",waitInterval=240000,waitAttempts=1
"-verb:sync" parameter means you synchronize data between a source and a destination. In your case your case, first time you perform synchronization between the "C:\msdeploy\TestSvc\TestSvc\bin\Debug" folder and the "package.zip". Plus, you are using manifest file, so when you perform second synchronization between the "package.zip" and the destination "computername", msbuild uses previously provided manifest twice for the destination and for the source, so each manifest operation runs twice.
I used the && trick to perform several commands in one command line.
Also, in my case, I had to add a timeout operation to be sure the service was completely stopped ("ping -n 30 127.0.0.1 > nul").
