I am trying to write a simple script that creates automatic FTP backups, reading the domain, user, and password from a CSV file. I am using wget like this, and if I launch it from the console, it works out of the box:
wget -r -P ./directory ftp://domain.com --ftp-user=myuser --ftp-password=mypassword
The problem occurs when I parameterize that command in a shell script so I can use it for many websites:
#!/bin/sh
#Read CSV.
while IFS=, read dominio usuario contrasenya
do
echo "Realizando Backup de: $dominio $usuario $contrasenya"
#Crear el backup del sitio web.
wget -r -P ./ ftp://"$dominio" --ftp-user="$usuario" --ftp-password="$contrasenya"
done < sitios-coma.csv
It returns 'Incorrect login'.
Does anyone know what I am doing wrong?
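One way to narrow this down (just a debugging sketch; it assumes the CSV might carry Windows line endings) is to print each field with non-printing characters made visible, so a stray carriage return would show up as ^M at the end of the password:
#!/bin/sh
# Debugging sketch: cat -v makes a trailing carriage return from a
# CRLF-terminated CSV visible as "^M".
while IFS=, read -r dominio usuario contrasenya
do
    printf 'domain=%s user=%s pass=%s\n' "$dominio" "$usuario" "$contrasenya" | cat -v
done < sitios-coma.csv
If a ^M does appear, stripping it (for example with tr -d '\r', or by converting the CSV with dos2unix) before calling wget would be worth trying.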
I want to add sudo in front of every command (like the title says), but I don't want to run sudo -i / sudo su -.
I want to enter a "sudo mode" while still working like a normal user.
That way I can still make full use of my /etc/sudoers configuration on my system.
I had the following idea (it's an unfinished concept with many flaws I dislike, but maybe it helps convey the idea):
#!/bin/bash
prompt:"sudo mode prompt>"
while : ; do
read -e -p "$prompt" command
$(sudo $command)
done
I don't like that variant, so do you know of any scripts or programs that do this?
This should achieve what you want:
#!/bin/bash
prompt="sudo mode prompt>"
while : ; do
read -p "$prompt" -r command
sudo bash -c "$command"
done
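As a small aside (my addition, not part of the original answer), the loop can be made to exit cleanly on Ctrl+D by testing read's return value:
#!/bin/bash
prompt="sudo mode prompt>"
# read returns non-zero on EOF (Ctrl+D), which ends the loop cleanly;
# Ctrl+C still interrupts it as before.
while read -p "$prompt" -r command; do
    sudo bash -c "$command"
done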
I am trying to run a local script with sudo over ssh:
ssh $HOST < script.sh
and I tried
ssh -t $HOST "sudo -s && bash" < script.sh
I searched a lot on Google and found some similar questions, but I did not find a solution that can run a local script with sudo.
Reading the error message of
$ ssh -t $HOST "sudo -s && bash" < script.sh
Pseudo-terminal will not be allocated because stdin is not a terminal.
makes it pretty clear what's going wrong here.
You can't use the ssh parameter -t (which sudo needs in order to ask for a password) while redirecting your script to the stdin of bash in your remote session.
If it is acceptable to you, you could transfer the local script to the remote machine via scp and then execute it without any I/O redirection:
scp script.sh $HOST:/tmp/ && ssh -t $HOST "sudo -s bash /tmp/script.sh"
Another way to fix the issue is to use sudo in non-interactive mode (-n), but for this you need to set NOPASSWD in the remote machine's sudoers file for the executing user. Then you can use:
ssh $HOST "sudo -n -s bash" < script.sh
To make Edward Itrich's answer more scalable and geared towards frequent use, you can set up a system where all you run is a one-line script that can quickly be adapted to any host, file, or command, in the following manner:
Create a script in your Scripts directory, if you have one, changing the name to whatever you want the script to be called (I use this pattern frequently: change one word for the script name, then create the file, set permissions, and open it for editing):
newscript="runlocalscriptonremotehost.sh"
touch $newscript && chmod +x $newscript && nano $newscript
In nano, fill out the script as follows, placing the directory and name of the script you want to run remotely in the variable lines of runlocalscriptonremotehost.sh (you only need to edit the first three lines):
HOSTtoCONTROL="sudoadmin@192.168.0.254"
PATHtoSCRIPT="/home/username/Scripts/"
SCRIPTname="scripttorunremotely.sh"
scp $PATHtoSCRIPT$SCRIPTname $HOSTtoCONTROL:/tmp/ && ssh -t $HOSTtoCONTROL "sudo -s bash /tmp/$SCRIPTname"
Then just run:
sh ./runlocalscriptonremotehost.sh
Keep runlocalscriptonremotehost.sh open in a tabbed text editor for quick updating, create a bash alias for the script, and you have an "app-ified" version of this frequently used operation.
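A sketch of such an alias for ~/.bashrc (the alias name runremote and the script path are placeholders):
# "runremote" and the script path are placeholders -- adjust to your setup.
alias runremote='sh ~/Scripts/runlocalscriptonremotehost.sh'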
First of all, divide your objective into two parts: 1) ssh to the host, and 2) run the command you want as sudo. Once you are certain that you can 1) access the host and 2) use sudo there, you can combine the two commands with &&. In x_cmd && y_cmd, y_cmd is executed only after x_cmd has exited successfully.
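For example (user@host is a placeholder), checking the two pieces in sequence might look like this:
# Step 1 confirms plain ssh access; step 2 runs only if step 1 succeeded
# and confirms that sudo works on the remote host.
ssh user@host "echo ssh ok" && ssh -t user@host "sudo whoami"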
I want to transfer a file from my server to another. The network between these servers isn't very good, so I want to use lftp to speed things up. My script looks like this:
lftp -u user,password -e "set sftp:connect-program 'ssh -a -x -i /key'; mirror --use-pget=5 -i data.tar.gz -r -R /data/ /tmp; quit" sftp://**.**.**.**:22
I found that data.tar.gz wasn't transferred in segments. But when I use the same approach to download a file, it works.
What should I do?
Segmented uploads are not implemented in lftp. If you have ssh access to the server, log in there and use lftp to download the file instead. If there were many files, you could also upload different files in parallel using the -P option of mirror.
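A sketch of such a parallel upload, modeled on the command from the question (the masked host, key path, and directories are kept as in the question; --parallel is the long form of -P):
# --parallel=5 uploads up to five *files* at once; each individual file is
# still sent in one piece, since segmented uploads are not implemented.
lftp -u user,password -e "set sftp:connect-program 'ssh -a -x -i /key'; mirror -R --parallel=5 /data/ /tmp; quit" sftp://**.**.**.**:22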
This seems like a simple issue, but I'm not able to figure it out. I am trying to run a couple of small scripts on a set of servers and I'm having issues with that. I have an allhosts file containing the list of servers, located in the same directory as the .sh file.
Script to create a directory structure across all 20 servers with 777 permissions:
#!/bin/bash
for q in `cat allhosts`
do
ssh $q "mkdir -p /opt/acd/hgf/tom/hanks/"
chmod -R 777 $q "/opt/acd/hgf/tom/hanks/" >/dev/null 2>&1
done
In the above script, it only creates the directory paths and does not change the permissions on that path. I tried running the chmod command in a separate script, but no luck.
Script to scp the contents of hanks to the hanks folder created on the new servers:
#!/bin/bash
for q in `cat allhosts`
do
scp /opt/acd/hgf/tom/hanks/* $q:/opt/acd/hgf/tom/hanks/ >/dev/null 2>&1
done
In this script too, when I run it, it's not copying anything to any of the servers.
I know this is a very small issue, but please check and let me know where I am going wrong. Thanks in advance.
The first script is failing because it is running the chmod on the local machine. You should run it on the remote machine via ssh - you could combine this with the other ssh invocation as follows:
ssh $q "mkdir -p /opt/acd/hgf/tom/hanks/ ; chmod -R 777 /opt/acd/hgf/tom/hanks/"
I'd guess the second script is failing because the first script isn't setting permissions; it looks okay.
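Putting the answer's fix back into the question's first script, it might end up looking like this (same host list and paths as in the question):
#!/bin/bash
# Create the directory tree and set its permissions in a single remote
# command for every host listed in allhosts.
for q in $(cat allhosts)
do
    ssh "$q" "mkdir -p /opt/acd/hgf/tom/hanks/ ; chmod -R 777 /opt/acd/hgf/tom/hanks/"
done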
I'm writing a UNIX shell function that is going to execute a command that will prompt the user for a password. I want to hard-code the password into the script and provide it to the command. I've tried piping the password into the command like this:
myfunc() {
echo "password" | command
}
This may not work for some commands as the command may flush the input buffer before prompting for the password.
I've also tried redirecting standard input from a file containing the password, like this, but that doesn't work either:
myfunc() {
echo "password" > pass.tmp
command < pass.tmp
rm pass.tmp
}
I know that some commands allow for the password to be provided as an argument, but I'd rather go through standard input.
I'm looking for a quick and dirty way of piping a password into a command in bash.
How to use autoexpect to pipe a password into a command:
These steps are illustrated with an Ubuntu 12.10 desktop. The exact commands for your distribution may be slightly different.
This is dangerous because you risk exposing whatever password you use to anyone who can read the autoexpect script file.
DO NOT expose your root password or power user passwords by piping them through expect like this. Root kits WILL find this in an instant and your box is owned.
Expect spawns a process, reads the text that comes in, then sends the text predefined in the script file.
Make sure you have expect and autoexpect installed:
sudo apt-get install expect
sudo apt-get install expect-dev
Read up on it:
man expect
man autoexpect
Go to your home directory:
cd /home/el
User el cannot chown a file to root and must enter a password:
touch testfile.txt
sudo chown root:root testfile.txt
[enter password to authorize the changing of the owner]
This is the password entry we want to automate. Restart the terminal to ensure that sudo asks us for the password again. Go to /home/el again and do this:
touch myfile.txt
autoexpect -f my_test_expect.exp sudo chown root:root myfile.txt
[enter password which authorizes the chown to root]
autoexpect done, file is my_test_expect.exp
You have created my_test_expect.exp file. Your super secret password is stored plaintext in this file. This should make you VERY uncomfortable. Mitigate some discomfort by restricting permissions and ownership as much as possible:
sudo chown el my_test_expect.exp    # make el the owner
sudo chmod 700 my_test_expect.exp   # make the file readable only by el
You see these sorts of commands at the bottom of my_test_expect.exp:
set timeout -1
spawn sudo chown root:root myfile.txt
match_max 100000
expect -exact "\[sudo\] password for el: "
send -- "YourPasswordStoredInPlaintext\r"
expect eof
You will need to verify that the above expect commands are appropriate. If the autoexpect script is overly sensitive, or not sensitive enough, it will hang. In this case it is acceptable because expect is waiting for text that will always arrive.
Run the expect script as user el:
expect my_test_expect.exp
spawn sudo chown root:root myfile.txt
[sudo] password for el:
The password contained in my_test_expect.exp was piped into a chown to root by user el. To see if the password was accepted, look at myfile.txt:
ls -l
-rw-r--r-- 1 root root 0 Dec 2 14:48 myfile.txt
It worked: myfile.txt is now owned by root, yet el never typed a password. If you expose your root, sudo, or power-user password with this script, then acquiring root on your box will be easy. Such is the penalty for a security system that lets everybody in, no questions asked.
Take a look at autoexpect (decent tutorial HERE). It's about as quick-and-dirty as you can get without resorting to trickery.
You can use sudo's -S flag to read the password from standard input. Here is an example:
function shutd()
{
echo "mySuperSecurePassword" | sudo -S shutdown -h now
}
Secure commands will not allow this, and rightly so, I'm afraid - it's a security hole you could drive a truck through.
If your command does not accept the password via input redirection, a command-line parameter, or a configuration file, then you're going to have to resort to serious trickery.
Some applications will actually open /dev/tty to ensure you have a hard time defeating their security. You can get around them by temporarily taking over /dev/tty (creating your own as a pipe, for example), but this requires serious privileges and even that can be defeated.
With read
Here's an example that uses read to get the password and store it in the variable pass. Then, 7z uses the password to create an encrypted archive:
read -s -p "Enter password: " pass && 7z a archive.zip a_file -p"$pass"; unset pass
But be aware that the password can easily be sniffed: it is passed to 7z on the command line, so it is visible to other users on the system in the output of ps while the archive is being created.
Programs that prompt for passwords usually set the tty into "raw" mode, and read input directly from the tty. If you spawn the subprocess in a pty you can make that work. That is what Expect does...
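A minimal sketch of that approach using an expect one-liner (some_command, the prompt pattern, and the password are placeholders):
# Placeholders throughout: replace some_command, the "assword:" pattern,
# and the password. expect allocates a pseudo-terminal, so the spawned
# program believes it is talking to a real terminal.
expect -c 'spawn some_command; expect "assword:"; send "secret\r"; expect eof'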
Simply use:
echo "password" | sudo -S mount -t vfat /dev/sda1 /media/usb/;
if [ $? -eq 0 ]; then
echo -e '[ ok ] Usb key mounted'
else
echo -e '[warn] The USB key is not mounted'
fi
This code works for me, and it lives in /etc/init.d/myscriptbash.sh.
That's a really insecure idea, but:
Using the passwd command from within a shell script
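For the specific case the linked question describes, namely setting a password non-interactively, a common alternative to driving passwd itself is chpasswd, which reads user:password pairs from standard input (a minimal sketch, assuming root privileges; someuser and newpassword are placeholders):
# "someuser" and "newpassword" are placeholders; chpasswd needs root,
# hence sudo. It reads "user:password" pairs on standard input.
echo 'someuser:newpassword' | sudo chpasswd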