In my work, I use two Linux servers.
The first one is used for web crawling and writes its results to a text file.
The other one is used for analyzing the text file from the web crawler.
So the issue is that when a text file is created on the web-crawling server,
it needs to be transferred automatically to the analysis server.
I've followed some tips from shell programming guides
and set up the crawling server so it can execute the scp command without requiring a password (by generating a key pair with the ssh-keygen command and adding the public key to the authorized_keys file in the /root/.ssh directory).
But I cannot figure out how to programmatically transfer the file when it is created.
My job is data analysis (not programming),
so my lack of programming background is my big concern.
If there is a way to trigger the scp to copy the file when it is created, please let me know.
You could use inotifywait to monitor the directory and run a command every time a file is created in it. In this case, you would fire off the scp command. If you have it set up to not prompt for the password, you should be all set.
inotifywait -mrq -e create --format %w%f /path/to/dir | while read -r FILE; do scp "$FILE" analysis_server:/path/on/analysis/server/; done
You can find out more about inotifywait at http://techarena51.com/index.php/inotify-tools-example/
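If you want something slightly more robust than the one-liner, here is a minimal sketch of a watcher script (the watched directory, the analysis_server host alias, and the remote path are assumptions; adjust them to your setup):

#!/bin/bash
# watch a directory and copy each finished file to the analysis server
# assumes passwordless scp to analysis_server is already configured
WATCH_DIR=/path/to/dir                           # directory the crawler writes into (assumption)
DEST=analysis_server:/path/on/analysis/server/   # remote target (assumption)

inotifywait -mrq -e close_write --format '%w%f' "$WATCH_DIR" |
while read -r file; do
    scp "$file" "$DEST" || echo "transfer failed: $file" >&2
done

Watching for close_write instead of create avoids copying a file before the crawler has finished writing it.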
Can anyone help me write a shell script to download files from a Linux/UNIX system?
Regards
On UNIX systems, such as Linux and OS X, you have access to a utility called rsync. It is installed by default on most systems, and it is the tool to use to download files from another UNIX system.
It is a drop-in replacement for the cp (copy) command, but it is much more powerful.
To copy a directory from a remote system to yours, using SSH, you would do this:
rsync username@hostname:path/to/dir .
(notice the dot at the end, this means 'place everything here please', you can also give the name of the local dir where the files should be placed.)
To download only some specific files, use this:
rsync 'username@hostname:path/to/dir/*.txt' .
(notice the quotes: if you omit them, your shell will try to expand the *.txt part locally, will fail and give you an error.)
Useful flags:
--progress: show a progress bar
--append: if a file has only partially downloaded, resume it where it left off
I find the rsync utility so useful, I've created an alias for it in my shell and use it as a 'super-copy':
alias cpa='rsync -vae ssh --progress --append'
With that alias, copying files between machines is just as easy as copying files locally:
cpa user@host:file .
Making it even better
Since rsync uses SSH, it helps to set up a private/public key pair so you don't have to type in your password every time:
How do I setup Public-Key Authentication?
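In short, a minimal sketch (the username and hostname are placeholders):

ssh-keygen -t ed25519           # generate a key pair; accepting the defaults is fine
ssh-copy-id username@hostname   # install your public key on the remote host

After that, ssh username@hostname (and therefore rsync and scp) should log in without prompting for a password.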
Furthermore, you can write down your username in your .ssh/config file and give the remote host a short name: read about it here.
For example, I have something like this:
Host panda
Hostname panda.server.long.hostname.com
User rodin
With this setup, my command to download files from the panda server is just:
cpa panda:path/to/my/files .
And there was much rejoicing.
On a computer with an IP address like 10.11.12.123, I have a folder named document. I want to copy that folder to my local folder /home/my-pc/doc/ using the shell.
I tried like this:
scp -r smb:10.11.12.123/other-pc/document /home/my-pc/doc/
but it's not working.
You can use the command below to copy your files:
scp -r <source> <destination>
(-r: Recursively copy entire directories)
eg:
scp -r user@10.11.12.123:/other-pc/document /home/my-pc/doc
To identify your current location, you can use the pwd command, e.g.:
kasun#kasunr:~/Downloads$ pwd
/home/kasun/Downloads
If you want to copy from B to A while you are logged into B:
scp /source username@a:/destination
If you want to copy from B to A while you are logged into A:
scp username@b:/source /destination
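For example, with hypothetical host and path names:

scp /var/log/app.log admin@hosta:/tmp/    # run on B: push a file to A
scp admin@hostb:/var/log/app.log /tmp/    # run on A: pull a file from B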
In addition to the comment, when you look at your host-to-host copy options on Linux today, rsync is by far the most robust solution around. It is brought to you by the SAMBA team[1] and continues to enjoy active development. Most distributions include the rsync package by default (if not, you should find an easily installable package for your distro, or you can download it from rsync.samba.org).
The basic use of rsync for host-to-host directory copy is:
$ rsync -uav srchost:/path/to/dir /path/to/dest
-uav recursively copies only new or changed files (-u) in archive mode (-a, preserving file and directory times and permissions) while providing verbose output (-v). You will be prompted for the username/password on 10.11.12.123 unless you have set up ssh keys to allow public/private key authentication (see: ssh-keygen for key generation).
If you notice, the syntax is basically the same as that for scp, with a slight difference in the options (e.g. scp -rv srchost:/path/to/dir /path/to/dest). rsync will use ssh for secure transport by default, so you will want to ensure sshd is running on your srchost (10.11.12.123 in your case). If you have name resolution working (or a simple entry in /etc/hosts for 10.11.12.123), you can use the hostname for the remote host instead of the remote IP. Regardless, you can always transfer the files you are interested in with:
$ rsync -uav 10.11.12.123:/other-pc/document /home/my-pc/doc/
Note: do NOT include a trailing / after document if you want to copy the directory itself. If you do include a trailing / after document (i.e. 10.11.12.123:/other-pc/document/), you are telling rsync to copy only the contents (i.e. the files and directories under document) into /home/my-pc/doc/, without also creating a document directory there.
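To make the distinction concrete:

$ rsync -uav 10.11.12.123:/other-pc/document /home/my-pc/doc/    # result: /home/my-pc/doc/document/...
$ rsync -uav 10.11.12.123:/other-pc/document/ /home/my-pc/doc/   # result: the files land directly in /home/my-pc/doc/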
The reason rsync is far superior to other copy tools is that it provides options to truly synchronize filesystems and directory trees, both locally and between your local machine and a remote host. Meaning, in your case, if you have used rsync to transfer files to /home/my-pc/doc/ and then change or delete files on 10.11.12.123, you can simply call rsync again and have the changes/deletions reflected in /home/my-pc/doc/ (look at the several flavors of the --delete option in rsync --help or man 1 rsync for details).
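For example, a sketch of such a mirroring call (test with --dry-run first; --delete removes anything in the destination that no longer exists on the source):

$ rsync -uav --delete --dry-run 10.11.12.123:/other-pc/document /home/my-pc/doc/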
For these, and many more reasons, it is well worth the time to familiarize yourself with rsync. It is an invaluable tool in any Linux user's hip pocket. Hopefully this will solve your problem and get you started.
Footnotes
[1] The same folks that "Opened Windows to a Wider World", allowing seamless connections between Windows/Linux hosts via the native Windows Server Message Block (SMB) protocol. samba.org
If the two directories you mentioned (document and /home/my-pc/doc/) are on the same 10.11.12.123 machine,
then:
cp -ai document /home/my-pc/doc/
else:
scp -r document/ root@10.11.12.123:/home/my-pc/doc/
I have WHM panel data in the /backup folder, and I want to copy all of that data to a remote server. Please show me how to use the rsync command in a script to sync the data to a remote server and get a mail when the copy is completed.
Thanks
Here is the rsync shell script that I use. It is placed in a folder named /publish in my project.
The gist contains the rs_exclude.txt file the shell script mentions.
You might also be interested in using ssh keys so you don't have to type your password each time.
#!/bin/bash
# reverse the comments on the next two lines to do a dry run
#dryrun=--dry-run
dryrun=
c=--compress
exclude=--exclude-from=rs_exclude.txt
pg="--no-p --no-g"
#delete is dangerous - use caution. I deleted 15 years worth of digital photos using rsync with delete turned on.
# reverse the comments on the next two lines to enable deleting
#delete=--delete
delete=
rsync_options=-Pav
rsync_local_path=../
rsync_server_string=user#example.com
rsync_server_path="/home/www.example.com"
# choose one.
#rsync $rsync_options $dryrun $delete $exclude $c $pg $rsync_local_path $rsync_server_string:$rsync_server_path
#how to specify an alternate port
#rsync -e "ssh -p 2220" $dryrun $delete $exclude $c $pg $rsync_local_path $rsync_server_string:$rsync_server_path
https://gist.github.com/treehousetim/2a7871f87fa53007f17e
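Since the question also asked for a mail notification when the copy completes, here is a minimal sketch of that last step, assuming the mailx command is available and using a placeholder address:

rsync $rsync_options $dryrun $delete $exclude $c $pg $rsync_local_path $rsync_server_string:$rsync_server_path \
  && echo "rsync to $rsync_server_string finished OK" | mailx -s "backup complete" admin@example.com \
  || echo "rsync to $rsync_server_string FAILED" | mailx -s "backup FAILED" admin@example.com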
Can we ftp specific files from a directory, where the specific files to be transferred are listed in a config file?
Can we use a for loop once logged into ftp (in a script) for this purpose?
Will normal ftp work when transferring files from Unix to a Windows ftp server?
Thanks,
Ravi
You can use straight shell. This assumes your login directory is /home/ravi.
Try this one time only:
echo "machine serverB user ravi password ravipasswd" > /home/ravi/.netrc
chmod 600 /home/ravi/.netrc
Test that .netrc works: ftp serverB should log you straight in.
Here is a shell script that reads config.file, which is just a list of files to send, one per line:
while read -r fname
do
ftp serverB <<EOF
get $fname
bye
EOF
done < config.file
(Leave the closing EOF in column #1 of the script file, with nothing else on that line, or the here-document will not terminate.)
This gets files from serverB. Change get $fname to put $fname to send files from serverA to serverB.
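Opening one ftp session per file works, but it is slow for long lists. Here is a minimal sketch of a single-session variant (binary mode matters when the target is a Windows ftp server):

{
echo "binary"
while read -r fname; do echo "put $fname"; done < config.file
echo "bye"
} | ftp serverB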
That certainly is possible. You can transfer files listed in a file by implementing a script around an ftp client (built-in or by calling a CLI client). The protocol is system-independent, so it is possible to transfer files between systems running different operating systems. There is only one catch: remember that MS-Windows uses a case-insensitive file system, while other systems differ in that respect.
I am currently learning Linux commands and I am wondering how to run commands within the .plan file.
For example, I want to print a message as would be output from the ~stepp/cosway program.
I typed ~stepp/cosway "HELLO" but it didn't work. What is the command for that?
Also, how do I recursively set all files in the current directory and all its subdirectories to have the group admin?
The .plan file is a plain text file that is served by the fingerd daemon. For security reasons, it's not possible to execute commands from that file, unless you modify and recompile fingerd on your machine to do so.
Concerning the second part of your question, use chgrp:
$ chgrp -R admin *
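Note that * does not match hidden (dot) files. To cover everything under the current directory, including the directory itself and any dotfiles, you could instead run:

$ chgrp -R admin .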