I have a generic script that uses wget to download a file (passed as a parameter to the script) from an FTP server. The script always downloads the files into the same local folder. The problem I am running into is that the .listing file created by wget is deleted by default, so if the script is called in parallel for different files, whichever process gets to delete the .listing file first succeeds and the rest fail.
So I tried using --no-remove-listing with the wget command, but then I get this error:
File ".listing" already there; not retrieving.
I looked at another post, but as the original poster mentioned in the comments, that question hasn't actually been answered even though it is marked as such.
One option I was considering is to change the script to create a subdirectory named after the file and download the file there, as sketched below. But since it is a large script, I was trying to see if there is an easier option that only changes the wget command.
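For what it's worth, here is a minimal sketch of that subdirectory idea, assuming the script receives the file name as $1; the FTP URL and the shared destination folder are placeholders, not paths from my setup:
#!/bin/sh
# Sketch only: ftp.example.com and /path/to/local/folder/ are placeholders.
file="$1"
# Give each invocation its own scratch directory so the .listing files
# written by concurrent wget processes can never collide.
workdir=$(mktemp -d) || exit 1
wget --no-remove-listing -P "$workdir" "ftp://ftp.example.com/path/$file"
# Move the result into the shared download folder and clean up.
mv "$workdir/$file" /path/to/local/folder/
rm -rf "$workdir"
Because every run gets its own directory, it no longer matters which process deletes (or keeps) its .listing file.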
I need to extract an attachment that I receive every day via email, on a linux server.
I'm using ripMIME for this task and have a script like this:
theFile=$(ls -t * | head -n 1)
ripmime -i $theFile -d /home/myDirectory/
The first line assigns the name of the newest file (email) to the variable "theFile".
The second line should extract its attachments to the /home/myDirectory/ path, but it doesn't extract anything.
However, if I execute this line (with the file name in place of the variable):
ripmime -i 1536138112.M623890P26484.myDomain.com,S\=1345977,W\=1363482:2,S -d /home/myDirectory/
...then the files are successfully extracted and copied to the specified directory.
I need to use a variable since I can't possibly know the name of the file, I just need to extract the files from the newest email using a script.
Also, I don't get any output when the instruction fails, so I'm in the dark here.
The ripMIME tool documentation can be found here
Any help will be appreciated.
When I put those lines inside a script file (.sh) and executed it, everything worked like a charm. That didn't happen when I tried to run them directly from the command line.
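For reference, a hedged variant of those two lines that quotes the variable (maildir names contain commas, colons and = signs that break unquoted expansion) and surfaces errors; the -v flag is ripMIME's verbose switch if I'm reading its documentation right:
#!/bin/bash
# Pick the newest file in the current maildir directory
# (plain "ls -t" rather than "ls -t *", so subdirectories don't interfere).
theFile=$(ls -t | head -n 1)
# Quoting keeps the special characters in maildir file names intact;
# -v makes ripmime report what it extracts instead of failing silently.
ripmime -i "$theFile" -d /home/myDirectory/ -v || echo "ripmime failed on $theFile" >&2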
I am using the zip command on Linux (Red Hat); this is my command:
zip -r /home/username/folder/compress/zip.zip /home/username/folder/compressed/*
Then, when I open zip.zip, I see that the archive reproduces the full folder path of the compressed directory.
I want the zip to contain only the *.txt files themselves, without the folder path in front of them.
Because I run this command from a crontab script, I can't use cd to change to the folder before running zip.
Please help me.
I skimmed the zip man page and this is what I found. There is no option to archive files relative to a different directory. The closest I have found is zip -j, which strips the entire path and stores the files directly in the zip rather than in subdirectories. I do not know what happens in the case of file name conflicts, such as if /home/username/folder/compressed/a.txt and /home/username/folder/compressed/subdir/a.txt both exist. If this is not a problem for you, you can use this option, but I am concerned because you did specify the -r option, indicating that you expect zip to traverse subfolders.
I also thought of the possibility that your script could somehow call zip with a different working directory, but I took a look at this Unix Stack Exchange page and it looks like their options use cd.
I have to admit I do not understand why you cannot use cd and I am very curious about it. You said something about using crontab, but I have never heard of anything wrong with changing directories in a crontab script.
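For what it's worth, cron doesn't stop you from changing directories; wrapping the cd in a subshell keeps it from affecting the rest of the crontab script. A sketch using the paths from the question:
# The cd happens only inside the ( ... ) subshell, so the rest of the
# script keeps its original working directory; the archive then stores
# just the .txt file names without the leading folder path.
(cd /home/username/folder/compressed && zip /home/username/folder/compress/zip.zip *.txt)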
I used the -j option with the zip command:
zip -jr /home/username/folder/compress/zip.zip /home/username/folder/compressed/*
and that solved my problem, thanks.
I'm currently using wget to download specific files from a remote server. The files are updated every week but always have the same file names, e.g. a newly uploaded file1.jpg will replace the local file1.jpg.
This is how I am grabbing them, nothing fancy:
wget -N -P /path/to/local/folder/ http://xx.xxx.xxx.xxx/remote/files/file1.jpg
This downloads file1.jpg from the remote server if it is newer than the local version and overwrites the local one with the new copy.
Trouble is, I'm doing this for over 100 files every week and have set up cron jobs to fire the 100 different download scripts at specific times.
Is there a way I can use a wildcard for the file name and have just one script that fires every 5 minutes for example?
Something like....
wget -N -P /path/to/local/folder/ http://xx.xxx.xxx.xxx/remote/files/*.jpg
Will that work? Will it check the local folder for all current file names, see what is new and then download and overwrite only the new ones? Also, is there any danger of it downloading partially uploaded files on the remote server?
I know that some kind of file sync script between servers would be a better option but they all look pretty complicated to set up.
Many thanks!
You can specify the files to be downloaded one by one in a text file, and then pass that file name using option -i or --input-file.
e.g. contents of list.txt:
http://xx.xxx.xxx.xxx/remote/files/file1.jpg
http://xx.xxx.xxx.xxx/remote/files/file2.jpg
http://xx.xxx.xxx.xxx/remote/files/file3.jpg
....
then
wget .... --input-file list.txt
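For example, keeping the -N and -P options from your original command, the whole weekly job could presumably shrink to a single crontab entry (the location of list.txt is up to you):
# Runs every 5 minutes; -N only re-downloads files that changed on the server.
*/5 * * * * wget -N -P /path/to/local/folder/ --input-file /path/to/list.txt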
Alternatively, if all your *.jpg files are linked from a particular HTML page, you can use recursive downloading, i.e. let wget follow the links on your page to all linked resources. You might need to limit the "recursion level" and file types to avoid downloading too much. See wget --help for more info.
wget .... --recursive --level=1 --accept=jpg --no-parent http://.../your-index-page.html
I have an automatic backup of a file running on a cron job. It outputs into a folder, let's call it /backup, and appends a timestamp to each file, every hour, like so:
file_08_07_2013_01_00_00.txt, file_08_07_2013_02_00_00.txt, etc.
I want to download these to another server, to keep as a separate backup. I normally just use wget and download a specific file, but was wondering how I could automate this, ideally every hour it would download the most recent file.
What would I need to look into to set this up?
Thanks!
wget can handle that, just enable time-stamping. I'm not even going to attempt my own explanation, here's a direct quote from the manual:
The usage of time-stamping is simple. Say you would like to download a file so that it keeps its date of modification.
wget -S http://www.gnu.ai.mit.edu/
A simple ls -l shows that the time stamp on the local file equals the state of the Last-Modified header, as returned by the server. As you can see, the time-stamping info is preserved locally, even without '-N' (at least for HTTP).
Several days later, you would like Wget to check if the remote file has changed, and download it if it has.
wget -N http://www.gnu.ai.mit.edu/
Wget will ask the server for the last-modified date. If the local file has the same timestamp as the server, or a newer one, the remote file will not be re-fetched. However, if the remote file is more recent, Wget will proceed to fetch it.
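Since the backup names in the question embed an hourly timestamp, another option is to build the expected name with date and fetch only that file. This is only a sketch: the server URL, the /backup path, the local folder, and the day/month order in the file name are all assumptions on my part:
#!/bin/sh
# Name of the most recent hourly backup, e.g. file_08_07_2013_01_00_00.txt
# (assuming the pattern is day_month_year_hour_minute_second).
name="file_$(date +%d_%m_%Y_%H)_00_00.txt"
# -N skips the download when an up-to-date local copy already exists.
wget -N -P /path/to/local/backup/ "http://backup.example.com/backup/$name"
Run that from an hourly cron job on the second server and it should pick up each new backup shortly after it appears.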
In my workplace, there's one Perl script that runs on a Unix machine every time someone tries to check a file in to the SVN repo for any of the 10-20 projects.
The way it works is that each project has its own "hooks" folder with a file called "pre-commit", which SVN automatically executes when someone checks something in. Except the "pre-commit" file is actually a symbolic link to the one central Perl script common to all projects, so that if a change needs to be made to the Perl script it doesn't need to be made for every project.
So my problem is this: I need to put a text file in each of these projects' "hooks" directories, each one containing some settings specific to that project. So there will be 10-20 settings files (one per project), each in its respective "hooks" directory.
The problem is that I need to open these text files in the Perl script and read from them, but I'm having trouble letting Perl know where to find them. I tried using the $0 variable, which is supposed to tell me where the script is being executed from, but because it's a symbolic link it just says "Not a directory" and the script terminates. I need to get the path of the "hooks" directory so that I can find the text file.
The SVN pre-commit script is supposed to be invoked with the path to the repository as its first argument. Inside a Perl script, that argument should be available as $ARGV[0]. You should be able to build the path to the corresponding hooks directory or to a file inside that directory by simply appending to the repository path, like this:
my $repopath   = $ARGV[0];                 # repository path passed by SVN
my $hookspath  = $repopath . "/hooks";     # that repository's hooks directory
my $myfilepath = $hookspath . "/myfile";   # per-project settings file
although for maximum portability it would be cleaner to use the pathname-manipulation functions in the File::Spec module to do this.
If this approach doesn't work then you'll have to explain more about how your Perl script gets invoked. For instance, if your pre-commit script is really a shell script wrapper that eventually invokes perl then perhaps it's not passing the pre-commit arguments along properly.
Showing us your current code that's failing would be a good thing too.