So I am looking for a way to upload a generated text file to an FTP server.
It needs to happen every, let's say, 4 hours:
check the IP
write the IP to a document or whatever
upload it to an FTP server with a specific IP address, username and password
I am using Linux, so an sh script will be fine.
If you could explain what the pieces do, that'd be great.
(I'm still learning a lot of stuff despite being 5 years into using Linux Mint and Fedora 21.)
So far I have
dig +short myip.opendns.com @resolver1.opendns.com
This gets my public IP; next is to write it to a document and upload it to an FTP server, which I do not know how to do.
Just a final additional note: I'm looking for this to run every 4 hours by itself.
Quick and dirty first stab:
#!/bin/bash
# ftpmyip.sh
HOST=ftpserver
USER=userid
PASSWD=userpw
# write my ip address to file my_ip.txt
dig +short myip.opendns.com @resolver1.opendns.com > my_ip.txt
# ftp file to the ftp server
ftp -n $HOST <<SCRIPT
user $USER $PASSWD
binary
put my_ip.txt
quit
SCRIPT
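If curl is available, the upload step can also be done in one line instead of the ftp here-document (a sketch using the same placeholder credentials):
curl -T my_ip.txt "ftp://$HOST/" --user "$USER:$PASSWD"
The -T flag uploads the file, and the trailing slash on the URL keeps the original filename.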
Now, put it all in a cron job using crontab -e; the line should read:
0 */4 * * * /home/enviousdata/ftpmyip.sh
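One thing to check before relying on the cron job: the script has to be executable, and since my_ip.txt is written relative to the working directory, it is worth pinning that down. For example:
chmod +x /home/enviousdata/ftpmyip.sh
and, inside the script, a cd /home/enviousdata near the top controls where my_ip.txt lands.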
Related
I'm trying to get an Ubuntu server to periodically (preferably whenever it gets updated, if possible) copy a file from a remote FTP server to a directory on the Ubuntu server. I should note I'm not very advanced with this kind of stuff.
I am of course not doing this without a tutorial; however, it doesn't cover grabbing the file from an FTP server.
What would be simplest for me is to be able to run:
tail -F ftp://ftp.addr.ess/files/file-i-want.txt | grep --line-buffered ": <" | while read x ; do echo -ne "$x" | curl -X POST -d @- http://url/hook ; done
What I'm following has that FTP address as a local address. This is a problem, because that command returns this:
tail: cannot open 'ftp://ftp.addr.ess/files/file-i-want.txt' for reading: No such file or directory
I've tried to run:
rsync username@ftp.addr.ess:XX/files/file-i-want.txt /home/ubuntu/destination
however this returns:
ssh: connect to host ftp.addr.ess port XX: Connection refused.
So really, if I can get rsync to use FTP instead of SSH, I figure I'd be golden. I researched it, though, and I can't figure out how to do this (keep in mind I'm no programmer). I originally thought the error was because I wasn't giving it a password, since I didn't know how. It might be that too, though.
This, however, brings me to my next issue: if it's possible to make rsync do FTP instead of SSH, how would I make it do that periodically?
What is being updated? The remote file (my guess) or something on your server? If it's the remote file, you're out of luck unless there is a mechanism/process on the remote server that can send you a notification (an email, for example).
I've not used ftp for ages, but have a look at this as a starting point.
A periodic task can quite easily be configured as a cron job.
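For what it's worth, rsync itself doesn't speak FTP (only its own protocol and SSH), but wget can fetch over FTP directly. A minimal sketch, assuming the server accepts a plain FTP login; the script name is mine, and the credentials and paths are placeholders from the question:
#!/bin/bash
# fetch-from-ftp.sh -- pull the file over FTP with wget
# host/user/password/paths are placeholders; adjust to your setup
wget --user=myuser --password=mypassword \
     -O /home/ubuntu/destination/file-i-want.txt \
     "ftp://ftp.addr.ess/files/file-i-want.txt"
Then a crontab -e line runs it periodically, say every 10 minutes:
*/10 * * * * /home/ubuntu/fetch-from-ftp.sh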
I am an R user. I always run programs on multiple computers on campus. For example, I need to run 10 different programs, so I have to open PuTTY 10 times to log into 10 different computers and submit one program to each of them (their OS is Linux). Is there a way to log into 10 different computers and send them commands at the same time? I use the following commands to submit programs:
nohup Rscript L_1_cc.R > L_1_sh.txt
nohup Rscript L_2_cc.R > L_2_sh.txt
nohup Rscript L_3_cc.R > L_3_sh.txt
First set up ssh so that you can log in without entering a password (google for that if you don't know how). Then write a script that sshes to each remote host and runs the command. Below is an example.
#!/bin/bash
host_list="host1 host2 host3 host4 host5 host6 host7 host8 host9 host10"
for h in $host_list
do
    case $h in
        host1)
            # the quotes make the redirection happen on the remote host
            ssh "$h" 'nohup Rscript L_1_cc.R > L_1_sh.txt'
            ;;
        host2)
            ssh "$h" 'nohup Rscript L_2_cc.R > L_2_sh.txt'
            ;;
    esac
done
This is a very simplistic example. You can do much better than this (for example, you can put the ".R" and the ".txt" file names into a variable and use that rather than explicitly listing every option in the case).
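To sketch what that variable-based version could look like (the host names, and the assumption that host N runs script N, are mine):
#!/bin/bash
# derive the .R and .txt names from a per-host counter instead of a case block
hosts="host1 host2 host3 host4 host5 host6 host7 host8 host9 host10"
i=1
for h in $hosts
do
    # quotes keep the redirection on the remote host; & runs the hosts in parallel
    ssh "$h" "nohup Rscript L_${i}_cc.R > L_${i}_sh.txt" &
    i=$((i + 1))
done
wait    # block until every remote job's ssh session has finished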
Based on your topic tags, I am assuming you are using ssh to log into the remote machines. Hopefully the machine you are using is *nix based, so you can use the following script. If you are on Windows, consider Cygwin.
First, read this article to set up public key authentication on each remote target: http://www.cyberciti.biz/tips/ssh-public-key-based-authentication-how-to.html
This will prevent ssh from prompting you for a password each time you log into a target. You can then script the command execution on each target with something like the following:
#!/bin/bash
# kill the script if we throw an error code during execution
set -e
# define hosts
hosts=( 127.0.0.1 127.0.0.1 127.0.0.1 )
# define the associated user name for each host
users=( joe bob steve )
# counter to track iteration for the correct user name
j=0
# iterate through each host and ssh into each one with a user@host combo
for i in ${hosts[*]}
do
    # modify the ssh command string as necessary to get your script to execute properly;
    # you could even add commands to transfer the file into which you seem to be dumping your results
    ssh ${users[$j]}@$i 'nohup Rscript L_1_cc.R > L_1_sh.txt'
    let "j=j+1"
done
# exit with no error
exit 0
If you set up the public key authentication, you should just have to execute your script to make every remote host do their thing. You could even look into loading the users/hosts data from file to avoid having to hard code that information into the arrays.
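For example (a sketch; the hosts.txt name and its one-"user host"-pair-per-line format are my assumptions):
#!/bin/bash
# read "user host" pairs from hosts.txt instead of hard-coding the arrays
while read -r u h
do
    # -n stops ssh from swallowing the rest of hosts.txt via stdin
    ssh -n "${u}@${h}" 'nohup Rscript L_1_cc.R > L_1_sh.txt'
done < hosts.txt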
My ISP provides dynamic IP addresses. I have forwarded a port to a Raspberry Pi, which I access through SSH and also use as a web server. The problem is that the IP changes every 3-4 days. Is there any way or script to keep me informed of, or updated with, the new IP address?
Thank You.
You can write a script like:
============
#!/bin/bash
# grab the public IP from checkip.dyndns.org and strip the surrounding HTML
OUT=$(wget http://checkip.dyndns.org/ -O - -o /dev/null | cut -d: -f 2 | cut -d\< -f 1)
echo $OUT > /root/ipfile
============
Set up a cron job to execute this every 3 hours or so, and configure your MTA to send the file /root/ipfile to your email address (that too can be driven from cron). mutt can be a useful tool to attach the file and do the email delivery.
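A sketch of what that could look like in root's crontab (crontab -e), assuming the script above was saved as /root/checkip.sh, mutt is installed, and a working MTA is configured; the schedule and address are placeholders:
# refresh the IP file every 3 hours, then mail it 5 minutes later
0 */3 * * * /root/checkip.sh
5 */3 * * * mutt -s "Current public IP" -a /root/ipfile -- you@example.com < /dev/null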
Often I face this situation: I have sshed into a remote server and run some programs, and I want to copy their output files back to my local machine. What I do is remember the file path on the remote machine, exit the connection, then run scp user@remote:filepath .
Obviously this is not optimal. What I'm looking for is a way to scp the file back to my local machine without exiting the connection. I did some searching; almost all the results tell me how to scp from my local machine, which I already know how to do.
Is this possible? Better still, is it possible without needing to know the IP address of my local machine?
Given that you have an sshd running on your local machine, it's possible and you don't need to know your outgoing IP address. If SSH port forwarding is enabled, you can open a secure tunnel even when you already have an ssh connection opened, and without terminating it.
Assume you have an ssh connection to some server:
local $ ssh user@example.com
Password:
remote $ echo abc > abc.txt # now we have a file here
OK, now we need to copy that file back to our local machine, and for some reason we don't want to open a new connection. OK, let's get to the ssh command line by pressing Enter ~C (Enter, then tilde, then capital C):
ssh> help
Commands:
-L[bind_address:]port:host:hostport Request local forward
-R[bind_address:]port:host:hostport Request remote forward
-D[bind_address:]port Request dynamic forward
-KR[bind_address:]port Cancel remote forward
That's just like the regular -L/R/D options. We'll need -R, so we hit Enter ~C again and type:
ssh> -R 127.0.0.1:2222:127.0.0.1:22
Forwarding port.
Here we forward the remote server's port 2222 to the local machine's port 22 (and this is where you need the local SSH server to be listening on port 22; if it's listening on some other port, use that instead of 22).
Now just run scp on the remote server and copy our file to the remote server's port 2222, which is mapped to our local machine's port 22 (where our local sshd is running).
remote $ scp -P2222 abc.txt user@127.0.0.1:
user@127.0.0.1's password:
abc.txt 100% 4 0.0KB/s 00:00
We are done!
remote $ exit
logout
Connection to example.com closed.
local $ cat abc.txt
abc
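For reference: if you know up front that you'll want to copy files back, the same remote forward can be requested when opening the connection, so the ~C escape isn't needed:
local $ ssh -R 2222:127.0.0.1:22 user@example.com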
Tricky, but if you really cannot just run scp from another terminal, it could help.
I found this one-liner solution on SU to be a lot more straightforward than the accepted answer. Since it uses an environment variable for the local IP address, I think it also satisfies the OP's request not to have to know it in advance.
Based on that, here's a bash function to "DownLoad" a file (i.e. push it from the SSH session to a set location on the local machine):
function dl(){
    # the first field of $SSH_CLIENT is the IP address the session came from
    scp "$1" ${SSH_CLIENT%% *}:/home/<USER>/Downloads
}
Now I can just call dl somefile.txt while SSH'd into the remote and somefile.txt appears in my local Downloads folder.
Extras:
I use RSA keys (ssh-copy-id) to get around the password prompt.
I found this trick to prevent the local bashrc from being sourced on the scp call.
Note: this requires SSH access to the local machine from the remote (is this often the case for anyone?).
The other answers are pretty good and most users should be able to work with them. However, I found the accepted answer a tad cumbersome and the others not flexible enough. A VPN server in between was also causing me trouble with figuring out IP addresses.
So the workaround I use is to generate the required scp command on the remote system using the following function in my .bashrc file:
function getCopyCommand {
    echo "scp user@remote:$(pwd)/$1 ."
}
I find rsync more useful if the local system is almost a mirror of the remote server (including the username) and I also need to copy the directory structure:
function getCopyCommand {
    echo "rsync -rvPR user@remote:$(pwd)/$1 /"
}
The generated scp or rsync command is then simply pasted into my local terminal to retrieve the file.
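Usage looks roughly like this (the file name is illustrative, and I'm assuming the current directory is /home/user):
remote $ getCopyCommand results/output.csv
scp user@remote:/home/user/results/output.csv .
That printed line is what gets copy-pasted into a terminal on the local machine.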
You would need a local SSH server running on your machine; then you can just:
scp [-r] local_content your_local_user@your_local_machine_ip:
Anyway, you don't need to close your remote connection to make a remote copy; just open another terminal and run scp there.
On your local computer:
scp root@remotemachine_name_or_IP:/complete_path_to_file /local_path
I have a bash script that takes a list of IP addresses and pings them every 15 seconds to test connectivity. Some of these IP addresses belong to servers and computers that I control. I would like to be able to do something like the following:
Run the bash file
It pings the non-controlled IP addresses
It lists the controlled computers
When a controlled computer turns off, it sends my script a response saying it turned off
The script updates its output accordingly
I have the code all set up: it pings these computers every 15 seconds and displays the results. What I wish to achieve is to NOT ping my controlled computers; they will send a command to the bash script instead. I know this can be done by writing to a file and reading that file, but I would like a way that changes the display AS IT HAPPENS. Would mkfifo be a viable option?
Yes, mkfifo is ok for this task. For instance, this:
mkfifo ./commandlist
while read f < ./commandlist; do
    # Actions here
    echo "$f"
done
will wait until a line can be read from the FIFO commandlist, read it into $f, and execute the body.
From the outside, write to the FIFO with:
echo 42 > ./commandlist
But why not let the remote server call this script, perhaps via SSH or even CGI? You could set up a /notify-disconnect CGI script with no parameters and get the IP address of the peer from the REMOTE_ADDR environment variable.
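A sketch of such a CGI script, feeding the FIFO from the loop above (the FIFO path and message format are my assumptions):
#!/bin/bash
# /notify-disconnect -- the web server sets REMOTE_ADDR to the caller's IP
echo "Content-Type: text/plain"
echo
# note: this write blocks until the monitoring loop opens the FIFO for reading
echo "disconnect ${REMOTE_ADDR}" > /path/to/commandlist
echo "ok"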