I'm running a batch file that uses a .ini file to populate an scp2 command (F-Secure's scp2). When triggered, the batch file uses scp2 to copy data files from a remote Linux server to the local Windows server.
INI FILE:
REMOTE_FILE="*"
BATCH FILE:
"%SSH_HOME%\scp2" -k %KEYS% -o "AllowedAuthentications publickey" -o "StrictHostKeyChecking off" %USER%#%SERVER%:%REMOTE_DIR%/%REMOTE_FILE%.* %LOCAL_DIR% >> %LOG% 2>&1
When %REMOTE_FILE% was set to "x", this would happily collect all files x.*
However, since changing %REMOTE_FILE% to "*", scp2 now tries to copy the subdirectories on the remote server as well. This fails because I am not using -r, and it also makes scp2 exit with a non-zero error code, which affects subsequent processing in the batch file.
I am assuming that one of the operating systems (I'm not sure which) is expanding the file mask, but I cannot work out how to stop this behaviour and let scp2 expand the mask itself. I have tried setting the variable to "*", as well as putting quotes around the whole user/passwd/directory/file string, i.e.
"%USER%#%SERVER%:%REMOTE_DIR%/%REMOTE_FILE%.*"
but with no success. Any ideas out there, please?
If your intention is to copy just the files, since you are not bothering with the '-r' argument, then perhaps a change of the mask from "*" to "*.*" might get you what you want?
I want to empty (not delete) log files daily at a particular time. Something like
echo "" > /home/user/dir/log/*.log
but it returns
-bash: /home/user/dir/log/*.log: ambiguous redirect
is there any way to achieve this?
You can't redirect to more than one file, but you can tee to multiple files.
tee /home/user/dir/log/*.log </dev/null
The redirect from /dev/null also avoids writing an empty line to the beginning of each file, which was another bug in your attempt. (Perhaps specify nullglob to avoid creating a file with the name *.log if the wildcard doesn't match any existing files, though.)
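A minimal sketch combining both points, assuming bash and the log directory from your question:
shopt -s nullglob                                                # expand to nothing instead of the literal *.log when there are no matches
logs=(/home/user/dir/log/*.log)
(( ${#logs[@]} )) && tee "${logs[@]}" < /dev/null > /dev/null    # truncate every matched log to zero length, writing nothing to them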
However, a much better solution is probably to use the logrotate utility, which is installed out of the box on every Debian (and thus also Ubuntu, Mint, etc.) installation. It runs nightly by default and can be configured by dropping a file in its configuration directory. It lets you compress the previous version of a log file instead of just overwriting it, and it takes care to preserve ownership, permissions, etc.
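As a sketch, a drop-in such as /etc/logrotate.d/mylogs (the file name and path pattern are only illustrative) could look like:
# copytruncate empties the live file in place, which matches "empty, not delete",
# and keeps a week of compressed copies of the old contents.
/home/user/dir/log/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}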
I am hoping that a more experienced set of eyes will find something obvious that I am missing or will be able to help me work around the errors that mv and rsync are producing. Up for the challenge?
Basic idea:
I have a bash script in which I am automating the move of files from one directory to another.
The problem:
When I run the script, periodically I get the following error from the mv command:
mv: cannot stat `/shares/directory with spaces/test file.txt': No such file or directory. The exit code from the mv command is 1. Even more odd, the file move actually succeeds sometimes.
In addition, I have a branch of logic in the script that will alternately use rsync to move/copy specific files (from the same local file system source and destination as the mv command mentioned above). I get a similar error related to the stat() system call:
rsync: link_stat "/shares/directory with spaces/test file.txt" failed: No such file or directory (2)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1070) [sender=3.0.9]
This error does not always manifest itself when the script is run. Sometimes it completes the file move without complaint, while other times it will return the error consistently when the script is run successive times.
There is one additional ingredient you should be aware of (and I am growing to suspect it as a key ingredient in my grief): the directory /shares/ is watched and mirrored by an installation of Dropbox. At this point, I am unable to determine whether dropboxd is somehow locking the file, or the like, such that it cannot be stat-ed. To be clear, the files are eventually freed from this state without further intervention and become mv-able.
The code:
mv -v --no-clobber "${SOURCEPATH}${item}" "${DESTINATIONPATH}${item}"
More info:
The following might, or might not, be relevant:
mount indicates the filesystem is ext4
Presumably, ownership and permissions shouldn't be an issue, as the script is being run by root, especially since the file system is not FUSE-based.
The base "directory" in the path (e.g. /shares/) is a symlink to another directory on the same file system.
The flavor of Linux is Debian.
Troubleshooting:
In trying to eliminate any issues with the variable expansion or their contents, I tried hardwiring the bash script like so:
mv -v --no-clobber "/shares/directory with spaces/test file.txt" "/new destination/directory with spaces/test file.txt"
This was after verifying via ls -al that "test file.txt" existed. For reference, its permissions were: -rw-r--r--
Unfortunately, this too results in the same error.
Other possible issues I could think of and what I have done to try to rule them out:
>> possible issue: slow HDD (or drive is in low power mode) or external USB drive
>> findings: The drives are all local SATA disks set to not park heads. In addition, even when forcing a consistent read from the file system, the same error happens
>> possible issue: non-Linux, NFS or fuse-based file system
>> findings: nope, source and destination are on the same local file system and mount indicates the file system is ext4
>> possible issue: white space or other unprintable chars in the file path
>> findings: verified that the source and destination paths were properly wrapped in quotes
>> possible issue: continuation issues after escaped newline (space after \ in wrapped command)
>> findings: made sure the command was all on one line, still the same error
>> possible issue: globbing (use of * in specifying the files to move)
>> findings: nope, each file is specified directly by path and name
>> possible issue: path confusion from the use of local path
>> findings: nope, file paths are fully qualified starting from /
>> possible issue: the files are not actually in the path specified
>> findings: nope, verified the file existed right prior to executing the script via ls -al
>> possible issue: somehow the --no-clobber of mv was causing issues
>> findings: nope, tried it without, same error
>> possible issue: only files created via Dropbox sync to the file system are problematic
>> findings: nope, created a local file directly via touch new-local-file.txt and it too produced the same stat() error
My analysis:
The fact that mv and rsync produce similar stat() errors leads me to believe:
there is some systemic underlying boundary case (e.g. file permissions/ownership or file busy) that is not accounted for in the bash script; or
the same bug is plaguing me in both the mv and the rsync scenarios.
Desired outcomes:
1. The root cause of the intermittent errors can be identified.
2. The root cause can be resolved or worked around.
3. The bash script can be improved to gracefully handle the error when it occurs (see the sketch below).
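For outcome 3, here is a minimal sketch of the kind of guard I have in mind (it reuses the ${SOURCEPATH}/${item}/${DESTINATIONPATH} variables from the snippet above; the retry count and delay are arbitrary):
for attempt in 1 2 3; do
    if [ -e "${SOURCEPATH}${item}" ]; then
        # stop retrying as soon as the move succeeds
        mv -v --no-clobber "${SOURCEPATH}${item}" "${DESTINATIONPATH}${item}" && break
    fi
    echo "attempt ${attempt}: could not move ${item}, retrying" >&2
    sleep 1
done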
So, with a lot more troubleshooting, I found an errant rsync statement some 200 lines earlier in the script that was conditionally executed (hence the seemingly inconsistent behavior). That rsync --archive ... statement was being passed /shares/ as its source directory, so it affected the /shares/directory with spaces/ subdirectory. That subdirectory was the ${SOURCEPATH} of the troubling mv command mentioned in my post above.
Ultimately, it was a missing --dry-run flag on that rsync --archive ... statement that was causing the trampling of the files that the script later expected to pass to mv.
Thanks to all who took the time to read my post. Though I am bummed to have spent my time and yours on what turned out to be a bug in my own script, it is reassuring to know that:
- computers are not irrational
- I am not insane
- there is not some nefarious, deep-rooted bug in the Linux file system
For those who stumble upon this post in the future because you are seeing a cannot stat error, please read my troubleshooting notes above. Much research went into that list; one of those items might be your issue. If not, keep debugging, there is an explanation. Good luck!
In my work, I use 2 Linux servers.
The first one is used for web crawling and writes its results to a text file.
The other one is used for analyzing the text file from the web crawler.
So the issue is that when a text file is created on the web-crawling server,
it needs to be transferred automatically to the analysis server.
Following some shell scripting guides,
I have set up the crawling server so it can execute the scp command without requiring a password (using the ssh-keygen command and adding the key to the authorized_keys file located in /root/.ssh).
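Roughly, that setup was along these lines (the analysis server's hostname is just a placeholder):
ssh-keygen -t rsa                  # generate a key pair on the crawling server
ssh-copy-id root@analysis-server   # append the public key to /root/.ssh/authorized_keys on the analysis server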
But I cannot figure out how to programmatically transfer the file when it is created.
My job is data analysis (not programming),
so my lack of programming background is a big concern.
If there is a way to trigger the scp to copy the file when it is created, please let me know.
You could use inotifywait to monitor the directory and run a command every time a file is created in it. In this case, you would fire off the scp command. If you have it set up to not prompt for a password, you should be all set.
inotifywait -mrq -e CREATE --format %w%f /path/to/dir | while read FILE; do scp "$FILE" analysis_server:/path/on/analysis/server/; done
You can find out more about inotifywait at http://techarena51.com/index.php/inotify-tools-example/
I wanted to write a script that triggers some code when a file gets changed (meaning the content changes or the file gets overwritten by a file with the same name) in a specific directory (or in a subdirectory). When I run my code and change a file, it seems to run twice every time, since I get the echo output twice. Is there something I am missing?
while true; do
    change=$(inotifywait -e close_write /home/bla)
    change=${change#/home/bla/ * }
    echo "$change"
done
Also it doesn't do anything when I change something in a subdirectory of the specified directory.
The output looks like this after I change a file in the specified directory:
Setting up watches.
Watches established.
filename
Setting up watches.
Watches established.
filename
Setting up watches.
Watches established.
I can't reproduce the script outputting a message twice. Are you sure you aren't running it twice (e.g. in the background)? Or are you using an editor to change the file? Some editors place a backup file beside the edited file while it is open, which would explain why you see two messages.
For recursive directory watching you need to pass the option -r to inotifywait. However, you should not run that on a very large filesystem tree, since the number of inotify watches is limited. You can obtain the current limit on your system with
cat /proc/sys/fs/inotify/max_user_watches
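As a sketch, a recursive, monitor-mode variant of your loop could look like this (same path as in your question):
inotifywait -m -r -e close_write --format '%w%f' /home/bla |
while read -r changed; do
    echo "$changed"    # full path of the file that was written
done
If you do hit the watch limit, it can be raised via sysctl, e.g. (the value is just an example):
sudo sysctl fs.inotify.max_user_watches=524288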
I am using rsync to back up a million images from my Linux server to my computer (Windows 7, using Cygwin).
The command I am using now is:
rsync -rt --quiet --rsh='ssh -p2200' root@X.X.X.X:/home/XXX/public_html/XXX /cygdrive/images
Whenever the process is interrupted and I start it again, it takes a long time to resume copying.
I think it is checking each file for updates.
The images on my server won't change once they are created.
So, is there a faster way to run the command so that it only copies files that don't exist locally, without checking file size, time, checksum, etc.?
Please suggest.
Thank you
Did you try this flag? It might help, but it may still take some time to resume the transfer:
--ignore-existing
This tells rsync to skip updating files that already exist on the destination (this does not ignore existing directories, or nothing would get done). See also --existing.
This option is a transfer rule, not an exclude, so it doesn't affect the data that goes into the file-lists, and thus it doesn't affect deletions. It just limits the files that the receiver requests to be transferred.
This option can be useful for those doing backups using the --link-dest option when they need to continue a backup run that got interrupted. Since a --link-dest run is copied into a new directory hierarchy (when it is used properly), using --ignore-existing will ensure that the already-handled files don't get tweaked (which avoids a change in permissions on the hard-linked files). This does mean that this option is only looking at the existing files in the destination hierarchy itself.
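Applied to the command from your question, that would be something like:
rsync -rt --quiet --ignore-existing --rsh='ssh -p2200' root@X.X.X.X:/home/XXX/public_html/XXX /cygdrive/images
Note that rsync still has to build the file list on both sides, so resuming won't be instant, but files that already exist locally are skipped without any size/time comparison.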