I'm using AzCopy 8.1.0-netcore on Windows. The /V:[verbose-log-file] option can only append the verbose log to a file. I'd like to output the verbose log to the console directly. Is that possible?
The preferable way is to save the log to a file, since it can contain a lot of useful information if any transfer ever goes wrong. When AzCopy resumes a job, it will attempt to transfer all of the files that are listed in the plan file which weren't already transferred. One option would be to save the log file in the current directory, or you can change the location of the log file using the AzCopy environment variables: https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-configure
~/.azcopy/plans contains the state files that allow AzCopy to resume failed jobs. They also allow the user to list all the jobs that ran in the past and query their results with `./azcopy jobs list` and `./azcopy jobs show [job-ID]`. We currently do not have a strategy to get rid of these files, as we don't know how long the user wants to keep records of their old jobs.
Logs are critical in helping our customers investigate issues, as they can be very verbose and offer loads of useful information.
We can certainly add some kind of clean command that gets rid of these logs and plan files.
So, as a workaround, you can use `azcopy jobs clean` to remove the older logs and plan files.
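As a minimal sketch of that workaround, assuming the v10-style `azcopy` commands referenced above and using placeholder paths, you can redirect the log and plan locations via environment variables and then clean up old jobs:
REM Send logs and plan files to locations of your choice (placeholder paths):
set AZCOPY_LOG_LOCATION=D:\azcopy\logs
set AZCOPY_JOB_PLAN_LOCATION=D:\azcopy\plans
REM List past jobs and their results, then remove old logs and plan files:
azcopy jobs list
azcopy jobs clean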
Refer to the GitHub issue below, which has a discussion regarding the same:
https://github.com/Azure/azure-storage-azcopy/issues/221
Related
I am trying to use azcopy to copy from Google Cloud to Azure.
I'm following the instructions here, and I can see in the generated logs that the connectivity to GCP seems fine, the SAS token is fine, and it creates the container fine (I can see it appear in Azure Storage Explorer), but then it just hangs. The output is:
INFO: Scanning...
INFO: Authenticating to source using GoogleAppCredentials
INFO: Any empty folders will not be processed, because source and/or destination doesn't have full folder support
If I look at the log it shows:
2022/06/01 07:43:25 AzcopyVersion 10.15.0
2022/06/01 07:43:25 OS-Environment windows
2022/06/01 07:43:25 OS-Architecture amd64
2022/06/01 07:43:25 Log times are in UTC. Local time is 1 Jun 2022 08:43:25
2022/06/01 07:43:25 ISO 8601 START TIME: to copy files that changed before or after this job started, use the parameter --include-before=2022-06-01T07:43:20Z or --include-after=2022-06-01T07:43:20Z
2022/06/01 07:43:25 Authenticating to source using GoogleAppCredentials
2022/06/01 07:43:26 Any empty folders will not be processed, because source and/or destination doesn't have full folder support
As I say, no errors around SAS token being out of date, or can't find the GCP credentials, or anything like that.
It just hangs.
It does this if I try and copy a single named file or a recursive directory copy. Anything.
Any ideas, please?
• I would suggest checking the logs of these AzCopy transactions for more details on this scenario. The logs are stored in the '%USERPROFILE%\.azcopy' directory on Windows. AzCopy creates log and plan files for every job, so you can investigate and troubleshoot any potential problems by analyzing them.
• As you are encountering a hang with the AzCopy utility during a job that transfers files, it might be a network fluctuation, a timeout, or a server-busy issue. Please remember that AzCopy retries up to 20 times in these cases and the retry usually succeeds. Look for errors in the logs near 'UPLOADFAILED', 'COPYFAILED', or 'DOWNLOADFAILED' (a combined search across all three is sketched just after this list).
• The following command will get all the errors with 'UPLOADFAILED' status from the concerned log file:
Select-String UPLOADFAILED .\<CONCERNEDLOGFILE GUID>.log
To show the jobs by status for a given job ID, execute the command below:
azcopy jobs show <job-id> --with-status=Failed
• Execute the AzCopy command from your local system with the '--no-check-certificate' argument, which ensures that there are no certificate checks for the system certificates at the receiving end. Ensure that the root certificates for your network client device or software are correctly installed on your local system, as those are typically what blocks jobs transferring files from on-premises to Azure.
Also, once the job has started and then hangs, just press CTRL+C to kill the process and then immediately check the AzCopy logs as well as the Event Viewer for any system issues. That usually shows exactly why the process failed or got hung.
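For completeness, a small sketch (assuming the default log location under %USERPROFILE%\.azcopy) that scans every log for any of the three failure markers at once:
# Scan all AzCopy logs in the default location for failed transfer entries
Select-String -Pattern 'UPLOADFAILED|COPYFAILED|DOWNLOADFAILED' -Path "$env:USERPROFILE\.azcopy\*.log"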
For more information, refer to the documentation links below:
https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-configure
https://github.com/Azure/azure-storage-azcopy/issues/517
Frustratingly, after many calls with Microsoft support, when demoing this to another person the exact same command with the exact same SAS token, etc., that was previously failing just started to work.
I hate problems that 'fix themselves' as it means it will likely occur again.
Thanks to KartikBhiwapurkar-MT for a detailed response too.
How can this be achieved? I have a catalina.out log on a prod server which is growing fast: 6.7 GB in a couple of days. My initial idea was to create a cron job, executed 2 or 3 days a week, that runs a script to copy the catalina log to Azure Blob Storage and then wipe it out with a simple command like "echo "" > file". But moving 2 GB to Azure every time that cron job executes... I don't know if that is the best idea either; the file is way too big.
Is there a way to keep the logs on another server or in Azure Storage? Where should I configure that?
I read something about implementing log4j with Tomcat; is this possible too? Could catalina.out be moved to another server using log4j? How can I achieve this? I know the development team should also check why this file is growing and logging so fast, but in the meantime I need a solution to implement.
thanks!!
I read something about implementing log4j with tomcat, is this
possible also?
I think what you want to describe is log rotation; if you want to go this way, here is a blog about how to configure it.
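As an illustration only, a minimal logrotate sketch, assuming a Linux prod server and that catalina.out lives under /opt/tomcat/logs (adjust the path for your install); copytruncate lets Tomcat keep writing to the same file handle while the old contents are rotated away:
# /etc/logrotate.d/tomcat (hypothetical path; rotate daily, keep 7 compressed copies)
/opt/tomcat/logs/catalina.out {
    daily
    rotate 7
    compress
    missingok
    copytruncate
}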
I had the idea at the beginning to create a cron job to be executed 2 or
3 days a week to run a script that copies the catalina log to Azure Blob
Storage
Yes, you could choose this way to manage the log; however, I still have a few things to add. If you want to upload the log file to Azure Blob Storage, you may get an error for a large file, so you may need to split the large file into multiple smaller blocks. In this article, under the title Upload a file in blocks programmatically, there is a detailed description.
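If you do go the cron route, here is a rough sketch under stated assumptions: AzCopy v10 is installed on the prod server, you have a SAS URL for the target container, and the account, SAS token, and log path below are placeholders to adjust:
#!/bin/bash
# ship-catalina.sh - upload catalina.out to Blob Storage, then truncate it in place.
LOG=/opt/tomcat/logs/catalina.out
DEST="https://<account>.blob.core.windows.net/logs/catalina-$(date +%F).out?<sas-token>"
# Upload first; only truncate the log if the copy succeeded.
azcopy copy "$LOG" "$DEST" && : > "$LOG"

# Example crontab entry (02:00 on Mondays and Thursdays):
# 0 2 * * 1,4 /opt/scripts/ship-catalina.sh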
From your description, you are not using Azure Web App; if you do choose Azure Web App, you could also use Azure Functions or WebJobs to run the cron job.
If you still have other questions, please let me know.
I have some jobs in Jenkins that create logs that are 300 MB each.
The build's log gets created on my Solaris M6 machine.
Fact: I cannot modify my job because of a business process; it must stay as it is.
My question : How to maintain such huge logs?
Is there any way that Jenkins knows how to ZIP the logs by itself and then UNZIP them when my user tries to read the Console Output (the logs themselves via Jenkins)?
Because if I zip the log manually on Solaris, it will no longer be readable via Jenkins.
I found out that Hudson (Jenkins) has support for the .gz format, which means you can gzip all the logs and Jenkins will still be able to read them, which is SUPER AWESOME! That way I saved tons of GBs of storage; it shrank my 600 MB logs to just 2 MB. Great.
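A hedged sketch of how that can be automated on the Solaris side (the JENKINS_HOME path is an assumption; adjust it for your box and test on one job first):
# Compress plain-text build logs older than 30 days under JENKINS_HOME
JENKINS_HOME=/var/lib/jenkins
find "$JENKINS_HOME/jobs" -type f -name log -mtime +30 -exec gzip {} \;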
I use CruiseControl.Net for continuous integration and I would like to read the log output of the current project in real time. For example, if it is running a compile command, I want to be able to see all the compile output so far. I can see where the log files are stored but it looks like they are only created once the project finishes. Is there any way to get the output in real time?
The CCTray app will allow you to see a snapshot of the last 5 or so lines of output of any command at a regular interval.
It's not a live update as that would be too resource intensive, as would be a full output of the log to-date.
Unless you write something to capture and store the snapshots, you're out of luck. Doing this also presents the possibility of missing messages that appear between snapshots, so it would not be entirely reliable. It would, however, give you a slightly better idea of what is going on.
You can run ccnet.exe as a command line application instead of running ccservice as a Windows service. It will output to the terminal as it runs. It's useful for debugging.
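For illustration, assuming the default install path (yours may differ), running the console runner looks roughly like this:
C:\Program Files (x86)\CruiseControl.NET\server> .\ccnet.exe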
I am trying to upload a 2.6 GB iso to Azure China Storage using AZCopy from my machine here in the USA. I shared the file with a colleague in China and they didn't have a problem. Here is the command which appears to work for about 30 minutes and then fails. I know there is a "Great Firewall of China" but I'm not sure how to get around the problem.
C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy> .\AzCopy.exe
/Source:C:\DevTrees\MyProject\Layout-Copy\Binaries\Iso\Full
/Dest:https://xdiso.blob.core.chinacloudapi.cn/iso
/DestKey:<my-key-here>
The network between the Azure server and your local machine is probably very slow, and AzCopy by default uses 8 * (number of cores) threads for the data transfer, which might be too aggressive for a slow network.
I would suggest reducing the thread count by setting the "/NC:" parameter to a smaller number, such as "/NC:2" or "/NC:5", and seeing whether the transfer becomes more stable (see the example below).
BTW, when the timeout issue reproduces again, please resume with the same AzCopy command line; that way you always make progress via resume instead of starting from the beginning.
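For example, your original command with the thread count capped (the key placeholder is kept as in your post):
C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy> .\AzCopy.exe
/Source:C:\DevTrees\MyProject\Layout-Copy\Binaries\Iso\Full
/Dest:https://xdiso.blob.core.chinacloudapi.cn/iso
/DestKey:<my-key-here>
/NC:2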
Since you're experiencing a timeout, you could try AzCopy in re-startable mode like this:
C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy> .\AzCopy.exe
/Source:<path-to-my-source-data>
/Dest:<path-to-my-storage>
/DestKey:<my-key-here>
/Z:<path-to-my-journal-file>
The path to your journal file is arbitrary. For instance, you could set it to C:\temp\azcopy.log if you'd like.
Assume an interrupt occurs while copying your file, and 90% of the file has been transferred to Azure already. Then upon restarting, we will only transfer the remaining 10% of the file.
For more information, type .\AzCopy.exe /?:Z to find the following info:
Specifies a journal file folder for resuming an operation. AzCopy
always supports resuming if an operation has been interrupted.
If this option is not specified, or it is specified without a folder path,
then AzCopy will create the journal file in the default location,
which is %LocalAppData%\Microsoft\Azure\AzCopy.
Each time you issue a command to AzCopy, it checks whether a journal
file exists in the default folder, or whether it exists in a folder
that you specified via this option. If the journal file does not exist
in either place, AzCopy treats the operation as new and generates a
new journal file.
If the journal file does exist, AzCopy will check whether the command
line that you input matches the command line in the journal file.
If the two command lines match, AzCopy resumes the incomplete
operation. If they do not match, you will be prompted to either
overwrite the journal file to start a new operation, or to cancel the
current operation.
The journal file is deleted upon successful completion of the
operation.
Note that resuming an operation from a journal file created by a
previous version of AzCopy is not supported.
You can also find out more here: http://blogs.msdn.com/b/windowsazurestorage/archive/2013/09/07/azcopy-transfer-data-with-re-startable-mode-and-sas-token.aspx