How to use a password in a gpg command - linux

I am new to linux and I am trying to do the following:
I have a folder that I want first to compress into a tar archive, then to encrypt with gpg, all while running a command (no interactive GUI on the machine). All I can find on the internet are GUIs that pop up and ask you to type the password:
tar -cvf archive.tar <directory>
gpg -c archive.tar <target-directory>
I keep reading the options of the gpg command but I don't understand how to find the equivalent of 7za:
7za a -p <my-password>
This is what I want: to pass the password as part of the command.
Do you have any pointers?
thank you
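For what it's worth, gpg's own --batch and --passphrase options appear to cover this; a minimal sketch (the passphrase and directory are placeholders, and GnuPG 2.1+ may additionally need --pinentry-mode loopback for the passphrase to be accepted non-interactively):
tar -cvf archive.tar <directory>
gpg --batch --yes --pinentry-mode loopback --passphrase "my-password" -c archive.tar
# produces archive.tar.gpg next to the original archive
Note that a password on the command line is visible in the process list and shell history; --passphrase-file is a slightly safer variant.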


How to access specific path in linux using shellscript

Let us consider an example,
scriptPath=/home/sharath/Downloads/Atollic_TrueSTUDIO_for_STM32_9.2.0_installer
In the above line of code, if the user is "sharath" then he can access that file/folder. If the user is different, how can the script access that folder/file dynamically?
Below is my shell script (.sh file):
#!/bin/bash
set -eu
configLocation=/etc/atollic
scriptPath=/home/sharath/Downloads/Atollic_TrueSTUDIO_for_STM32_9.2.0_installer
family=STM32
arch=x86_64
version=9.2.0
configFile=${configLocation}/TrueSTUDIO_for_${family}_${arch}_${version}.properties
installPath=/opt/Atollic_TrueSTUDIO_for_${family}_${arch}_${version}/
mkdir -p /opt/Atollic_TrueSTUDIO_for_STM32_x86_64_9.2.0/
tar xzf ${scriptPath}/install.data -C /opt/Atollic_TrueSTUDIO_for_STM32_x86_64_9.2.0/
In the last line of the script, ${scriptPath} is different for different users. How can I handle this in the shell script?
Update 1:
If I use ${USER} or ${HOME} or whoami, it returns "root".
Here is my log:
tar (child): /root/Downloads/Atollic_TrueSTUDIO_for_STM32_9.2.0_installer/install.data: Cannot open: No such file or directory tar (child): Error is not recoverable: exiting now
Update 2:
Currently the user is "root".
Use $HOME for the start of scriptPath, i.e.:
scriptPath=${HOME}/Downloads/Atollic_TrueSTUDIO_for_STM32_9.2.0_installer
I tried a couple of ways and finally found the solution below.
Use the users command to get the user:
myuser=$(users)
echo "The user is " $myuser
Here users prints the names of the users currently logged in (with a single login session, that is the current user).
Your script becomes:
#!/bin/bash
users
myuser=$(users)
set -eu
configLocation=/etc/atollic
scriptPath=/home/$myuser/Downloads/Atollic_TrueSTUDIO_for_STM32_9.2.0_installer
family=STM32
arch=x86_64
version=9.2.0
configFile=${configLocation}/TrueSTUDIO_for_${family}_${arch}_${version}.properties
installPath=/opt/Atollic_TrueSTUDIO_for_${family}_${arch}_${version}/
mkdir -p /opt/Atollic_TrueSTUDIO_for_STM32_x86_64_9.2.0/
tar xzf ${scriptPath}/install.data -C /opt/Atollic_TrueSTUDIO_for_STM32_x86_64_9.2.0/
Thanks for answering my question.
Dynamic_Path="/home/$(whoami)/$SCRIPT_PATH"
What is the Linux OS you are using?
You can simply use it as below:
scriptPath=~/Downloads/Atollic_TrueSTUDIO_for_STM32_9.2.0_installer
where ~ refers to the home directory of the current user, i.e. /home/sharath.
Another way is to use it like below:
scriptPath="/home/whoami/Downloads/Atollic_TrueSTUDIO_for_STM32_9.2.0_installer"

Parametrized Wget FTP recursive download

I am trying to make a simple script to create automatic FTP backups, reading domain, user and password from a CSV. I am using the wget command like this, and if I launch it from the console, it works out of the box:
wget -r -P ./directory ftp://domain.com --ftp-user=myuser --ftp-password=mypassword
The problem occurs when I parameterize that command in a bash script to use it for many websites:
#!/bin/sh
#Read CSV.
while IFS=, read dominio usuario contrasenya
do
echo "Realizando Backup de: $dominio $usuario $contrasenya"
#Create the backup of the website.
wget -r -P ./ ftp://"$dominio" --ftp-user="$usuario" --ftp-password="$contrasenya"
done < sitios-coma.csv
It returns 'Incorrect login'.
Does anyone know what I am doing wrong?
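I can't reproduce the failure from here, but one common culprit in CSV-driven loops like this is Windows (CRLF) line endings: read leaves a trailing carriage return on the last field, which silently corrupts the password. A sketch that strips it, under that assumption:
#!/bin/sh
while IFS=, read dominio usuario contrasenya
do
    # Drop a trailing \r left by CRLF line endings in the CSV.
    contrasenya=$(printf '%s' "$contrasenya" | tr -d '\r')
    echo "Backing up: $dominio $usuario"
    wget -r -P ./ ftp://"$dominio" --ftp-user="$usuario" --ftp-password="$contrasenya"
done < sitios-coma.csv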

rsync over SSH preserve ownership only for www-data owned files

I am using rsync to replicate a web folder structure from a local server to a remote server. Both servers are ubuntu linux. I use the following command, and it works well:
rsync -az /var/www/ user@10.1.1.1:/var/www/
The usernames for the local system and the remote system are different. From what I have read it may not be possible to preserve all file and folder owners and groups. That is OK, but I would like to preserve owners and groups just for the www-data user, which does exist on both servers.
Is this possible? If so, how would I go about doing that?
** EDIT **
There is some mention of rsync being able to preserve ownership and groups on remote file syncs here: http://lists.samba.org/archive/rsync/2005-August/013203.html
** EDIT 2 **
I ended up getting the desired effect thanks to many of the helpful comments and answers here. Assuming the IP of the source machine is 10.1.1.2 and the IP of the destination machine is 10.1.1.1, I can use this line from the destination machine:
sudo rsync -az user@10.1.1.2:/var/www/ /var/www/
This preserves the ownership and groups of the files that have a common user name, like www-data. Note that using rsync without sudo does not preserve these permissions.
You can also sudo the rsync on the target host by using the --rsync-path option:
# rsync -av --rsync-path="sudo rsync" /path/to/files user@targethost:/path
This lets you authenticate as user on targethost, but still get privileged write permission through sudo. You'll have to modify your sudoers file on the target host to avoid sudo's request for your password. man sudoers or run sudo visudo for instructions and samples.
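For example, a hypothetical sudoers entry on the target host (the user name is a placeholder; edit with sudo visudo):
# Lets 'user' run rsync through sudo without a password prompt.
user ALL=(ALL) NOPASSWD: /usr/bin/rsync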
You mention that you'd like to retain the ownership of files owned by www-data, but not other files. If this is really true, then you may be out of luck unless you implement chown or a second run of rsync to update permissions. There is no way to tell rsync to preserve ownership for just one user.
That said, you should read about rsync's --files-from option.
rsync -av /path/to/files user@targethost:/path
find /path/to/files -user www-data -print | \
rsync -av --files-from=- --rsync-path="sudo rsync" /path/to/files user@targethost:/path
I haven't tested this, so I'm not sure exactly how piping find's output into --files-from=- will work. You'll undoubtedly need to experiment.
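One wrinkle with that untested pipe: rsync reads --files-from entries relative to the source directory, while find -print emits absolute paths. A sketch, assuming GNU find, that emits paths relative to the starting point via -printf '%P\n':
find /path/to/files -user www-data -printf '%P\n' | \
rsync -av --files-from=- --rsync-path="sudo rsync" /path/to/files user@targethost:/path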
As far as I know, you cannot chown files to somebody other than yourself unless you are root. So you would have to rsync using the www-data account, as all files will be created with the connecting user as owner; otherwise you need to chown the files afterwards.
The root users for the local system and the remote system are different.
What does this mean? The root user is uid 0. How are they different?
Any user with read permission to the directories you want to copy can determine what usernames own what files. Only root can change the ownership of files being written.
You're currently running the command on the source machine, which restricts your writes to the permissions associated with user@10.1.1.1. Instead, you can try to run the command as root on the target machine. Your read access on the source machine isn't an issue.
So on the target machine (10.1.1.1), assuming the source is 10.1.1.2:
# rsync -az user@10.1.1.2:/var/www/ /var/www/
Make sure your groups match on both machines.
Also, set up access to user@10.1.1.2 using a DSA or RSA key, so that you can avoid having passwords floating around. For example, as root on your target machine, run:
# ssh-keygen -d
Then take the contents of the file /root/.ssh/id_dsa.pub and add it to ~user/.ssh/authorized_keys on the source machine. You can ssh user@10.1.1.2 as root from the target machine to see if it works. If you get a password prompt, check your error log to see why the key isn't working.
I had a similar problem and cheated the rsync command,
rsync -avz --delete root@x.x.x.x:/home//domains/site/public_html/ /home/domains2/public_html && chown -R wwwusr:wwwgrp /home/domains2/public_html/
the && runs the chown against the folder when the rsync completes successfully (1x '&' would run the chown regardless of the rsync completion status)
Well, you could skip the challenges of rsync altogether, and just do this through a tar tunnel.
sudo tar zcf - /path/to/files | \
ssh user@remotehost "cd /some/path; sudo tar zxf -"
You'll need to set up your SSH keys as Graham described.
Note that this handles full directory copies, not incremental updates like rsync.
The idea here is that:
you tar up your directory,
instead of creating a tar file, you send the tar output to stdout,
that stdout is piped through an SSH command to a receiving tar on the other host,
but that receiving tar is run by sudo, so it has privileged write access to set usernames.
rsync version 3.1.2
I mostly use Windows locally, so this is the command line I use to sync files with the (Debian) server:
user#user-PC /cygdrive/c/wamp64/www/projects
$ rsync -rptgoDvhP --chown=www-data:www-data --exclude=.env --exclude=vendor --exclude=node_modules --exclude=.git --exclude=tests --exclude=.phpintel --exclude=storage ./website/ username@hostname:/var/www/html/website

How to pass password to scp?

I know it is not recommended, but is it at all possible to pass the user's password to scp?
I'd like to copy a file via scp as part of a batch job and the receiving server does, of course, need a password and, no, I cannot easily change that to key-based authentication.
Use sshpass:
sshpass -p "password" scp -r user#example.com:/some/remote/path /some/local/path
or, so the password does not show up in the bash history:
sshpass -f "/path/to/passwordfile" scp -r user@example.com:/some/remote/path /some/local/path
The above copies the contents of the remote path to your local machine.
Install:
ubuntu/debian
apt install sshpass
centos/fedora
yum install sshpass
mac w/ macports
port install sshpass
mac w/ brew
brew install https://raw.githubusercontent.com/kadwanev/bigboybrew/master/Library/Formula/sshpass.rb
Just generate an ssh key like:
ssh-keygen -t rsa -C "your_email@youremail.com"
copy the content of ~/.ssh/id_rsa.pub
and lastly add it to the remote machine's ~/.ssh/authorized_keys.
Make sure the remote machine has permissions 0700 on the ~/.ssh folder and 0600 on ~/.ssh/authorized_keys.
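A compact sketch of those steps (user@remote is a placeholder; ssh-copy-id, where available, does the appending and permission fixing for you):
ssh-keygen -t rsa -C "your_email@youremail.com"   # generate the key pair
ssh-copy-id user@remote                           # append the public key on the remote machine
# or manually:
cat ~/.ssh/id_rsa.pub | ssh user@remote 'mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'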
If you are connecting to the server from Windows, the Putty version of scp ("pscp") lets you pass the password with the -pw parameter.
This is mentioned in the documentation here.
curl can be used as an alternative to scp to copy a file, and it supports a password on the command line:
curl --insecure --user username:password -T /path/to/sourcefile sftp://desthost/path/
You can script it with a tool like expect (there are handy bindings too, like Pexpect for Python).
You can use an 'expect' script on unix/terminal.
For example, create 'test.exp':
#!/usr/bin/expect
spawn scp /usr/bin/file.txt root@<ServerLocation>:/home
set pass "Your_Password"
expect {
password: {send "$pass\r"; exp_continue}
}
Run the script:
expect test.exp
I hope that helps.
You may use ssh-copy-id to add ssh key:
$ which ssh-copy-id   # check whether it exists
If exists:
ssh-copy-id "user#remote-system"
Here is an example of how you do it with expect tool:
sub copyover {
$scp = Expect->spawn("/usr/bin/scp ${srcpath}/$file $who:${destpath}/$file");
$scp->expect(30,"ssword: ") || die "Never got password prompt from $dest:$!\n";
print $scp 'password' . "\n";
$scp->expect(30,"-re",'$\s') || die "Never got prompt from parent system:$!\n";
$scp->soft_close();
return;
}
Nobody mentioned it, but Putty scp (pscp) has a -pw option for password.
Documentation can be found here: https://the.earth.li/~sgtatham/putty/0.67/htmldoc/Chapter5.html#pscp
Once you set up ssh-keygen as explained above, you can do
scp -i ~/.ssh/id_rsa /local/path/to/file remote@ip.com:/path/in/remote/server/
If you want to avoid typing this each time, you can modify your .bash_profile file and put
alias remote_scp='scp -i ~/.ssh/id_rsa /local/path/to/file remote@ip.com:/path/in/remote/server/'
Then from your terminal do source ~/.bash_profile. Afterwards if you type remote_scp in your terminal it should run the scp command without password.
Here's a poor man's Linux/Python/Expect-like example based on this blog post: Upgrading simple shells to fully interactive TTYs. I needed this for old machines where I can't install Expect or add modules to Python.
Code:
(
echo 'scp jmudd@mysite.com:./install.sh .'
sleep 5
echo 'scp-passwd'
sleep 5
echo 'exit'
) |
python -c 'import pty; pty.spawn("/usr/bin/bash")'
Output:
scp jmudd@mysite.com:install.sh .
bash-4.2$ scp jmudd@mysite.com:install.sh .
Password:
install.sh 100% 15KB 236.2KB/s 00:00
bash-4.2$ exit
exit
Make sure password authentication is enabled on the target server. If it runs Ubuntu, open /etc/ssh/sshd_config on the server, find any PasswordAuthentication no lines and comment them all out (put # at the start of the line), save the file and run sudo systemctl restart ssh to apply the configuration. If there is no such line, you're done.
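A hedged sketch of that server-side change (GNU sed and Ubuntu paths assumed):
# Comment out every line that sets PasswordAuthentication to no.
sudo sed -i -E 's/^\s*PasswordAuthentication\s+no/#&/' /etc/ssh/sshd_config
sudo systemctl restart ssh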
Add -o PreferredAuthentications="password" to your scp command, e.g.:
scp -o PreferredAuthentications="password" /path/to/file user@server:/destination/directory
Make sure you have the "expect" tool first; if not, install it:
# apt-get install expect
Create a script file with the following content (# vi /root/scriptfile):
spawn scp /path_from/file_name user_name_here@to_host_name:/path_to
expect "password:"
send "put_password_here\n"
interact
Execute the script file with the "expect" tool:
# expect /root/scriptfile
Copy files from one server to another (in scripts):
Install putty on Ubuntu or other Linux machines. putty comes with pscp; we can copy files with pscp.
apt-get update
apt-get install putty
echo n | pscp -pw "Password#1234" -r user_name@source_server_IP:/copy_file_path/files /path_to_copy/files
For more options see pscp help.
Using SCP non-interactively from Windows:
Install the Community Edition of NetCmdlets
Import the module
Use Send-PowerShellServerFile -AuthMode password -User MyUser -Password not-secure -Server YourServer -LocalFile C:\downloads\test.txt -RemoteFile C:\temp\test.txt to send the file with a non-interactive password.
If you observe a strict host key check error, use the -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null options.
A complete example is as follows:
sshpass -p "password" scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root#domain-name.com:/tmp/from/psoutput /tmp/to/psoutput
You can use the steps below. This works for me!
Step 1:
Create a normal file, say "fileWithScpPassword", which contains the ssh password for the destination server.
Step 2: Use sshpass -f followed by the password file name and then the normal scp command:
sshpass -f "fileWithScpPassword" scp /filePathToUpload user@ip:/destinationPath/
One easy way I do this:
Use the same scp command as you would with ssh keys, i.e.
scp -C -i <path_to_openssh_key> <local_file_path> user@<ip_address_VM>:<remote_file_path>
for transferring a file from local to remote, but instead of providing the correct <path_to_openssh_key>, use some garbage path. Due to the wrong key path you will be asked for the password instead, and you can simply enter it to get the work done.
An alternative would be to add the public half of the user's key to the authorized_keys file on the target system. On the system you are initiating the transfer from, you can run an ssh-agent daemon and add the private half of the key to the agent. The batch job can then be configured to use the agent to get the private key, rather than prompting for the key's password.
This should be do-able on either a UNIX/Linux system or on Windows platform using pageant and pscp.
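A minimal sketch of the agent approach on the UNIX/Linux side (key path and host are placeholders):
eval "$(ssh-agent -s)"              # start the agent for this session
ssh-add ~/.ssh/id_rsa               # prompts once for the key's passphrase
scp /local/file user@remote:/path/  # later transfers authenticate via the agent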
All the solutions mentioned above work only if you have the app installed or have admin rights to install expect or sshpass.
I found this very useful link to simply start the scp in the background:
$ nohup scp file_to_copy user@server:/path/to/copy/the/file > nohup.out 2>&1
https://charmyin.github.io/scp/2014/10/07/run-scp-in-background/
I found this really helpful answer here.
rsync -r -v --progress -e ssh user@remote-system:/address/to/remote/file /home/user/
Not only can you pass the password there, it will also show a progress bar when copying. Really awesome.

Remote Linux server to remote linux server dir copy. How? [closed]

What is the best way to copy a directory (with sub-dirs and files) from one remote Linux server to another remote Linux server? I have connected to both using an SSH client (like Putty). I have root access to both.
There are two ways I usually do this, both use ssh:
scp -r sourcedir/ user@dest.com:/dest/dir/
or, the more robust and faster (in terms of transfer speed) method:
rsync -auv -e ssh --progress sourcedir/ user@dest.com:/dest/dir/
Read the man pages for each command if you want more details about how they work.
I would modify a previously suggested reply:
rsync -avlzp /path/to/sfolder name@remote.server:/path/to/remote/dfolder
as follows:
-a (for archive) implies -rlptgoD so the l and p above are superfluous. I also like to include -H, which copies hard links. It is not part of -a by default because it's expensive. So now we have this:
rsync -aHvz /path/to/sfolder name@remote.server:/path/to/remote/dfolder
You also have to be careful about trailing slashes. You probably want
rsync -aHvz /path/to/sfolder/ name@remote.server:/path/to/remote/dfolder
if the desire is for the contents of the source "sfolder" to appear in the destination "dfolder". Without the trailing slash, an "sfolder" subdirectory would be created in the destination "dfolder".
rsync -avlzp /path/to/folder name@remote.server:/path/to/remote/folder
scp -r <directory> <username>#<targethost>:<targetdir>
Log in to one machine
$ scp -r /path/to/top/directory user@server:/path/to/copy
Use rsync so that you can continue if the connection gets broken. And if something changes, you can copy it much faster too!
Rsync works over SSH, so your copy operation is secure.
Try unison if the task is recurring.
http://www.cis.upenn.edu/~bcpierce/unison/
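On the resume point in the rsync suggestion above: rsync's --partial option keeps partially transferred files so an interrupted run can pick up where it left off. A sketch (paths and host are placeholders):
# -P is shorthand for --partial --progress
rsync -avz --partial -e ssh /path/to/sourcedir/ user@dest.com:/dest/dir/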
I used rdiff-backup (http://www.nongnu.org/rdiff-backup/index.html) because it does all you need without any fancy options. It's based on the rsync algorithm.
If you only need to copy one time, you can later remove the rdiff-backup-data directory on the destination host.
rdiff-backup user1@host1::/source-dir user2@host2::/dest-dir
From the doc:
rdiff-backup also preserves subdirectories, hard links, dev files, permissions, uid/gid ownership, modification times, extended attributes, acls, and resource forks.
which is a bonus over the scp -p proposals, as the -p option does not preserve everything (e.g. permissions on directories are set badly).
Install on Ubuntu:
sudo apt-get install rdiff-backup
Check out scp or rsync,
man scp
man rsync
scp file1 file2 dir3 user@remotehost:path
Well, the quick answer would be to take a look at the 'scp' manpage, or perhaps rsync, depending on exactly what you need to copy. If you had to, you could even do tar-over-ssh:
tar cvf - <directory> | ssh server tar xf -
I think you can try with:
rsync -azvu -e ssh user@host1:/directory/ user@host2:/directory2/
(and I assume you are on host0 and you want to copy from host1 to host2 directly)
If the above does not work (rsync cannot copy between two remote hosts in a single command), you could try:
ssh user@host1 "/usr/bin/rsync -azvu -e ssh /directory/ user@host2:/directory2/"
This would work if you have already set up passwordless SSH login from host1 to host2.
scp will do the job, but there is one wrinkle: the connection to the second remote destination will use the configuration on the first remote destination, so if you use .ssh/config on the local environment, and you expect rsa and dsa keys to work, you have to forward your agent to the first remote host.
As non-root user ideally:
scp -r src $host:$path
If you already have some of the content on $host, consider using rsync with ssh as a tunnel.
/Allan
If you are serious about wanting an exact copy, you probably also want to use the -p switch to scp, if you're using that. I've found that scp reads from devices, and I've had problems with cpio, so I personally always use tar, like this:
cd /origin; find . -xdev -depth -not -path ./lost+found -print0 \
| tar --create --atime-preserve=system --null --files-from=- --format=posix \
--no-recursion --sparse | ssh targethost 'cd /target; tar --extract \
--overwrite --preserve-permissions --sparse'
I keep this incantation around in a file with various other means of copying files around. This one is for copying over SSH; the other ones are for copying to a compressed archive, for copying within the same computer, and for copying over an unencrypted TCP socket when SSH is too slow.
scp, as mentioned above, is usually the best way, but don't forget the colon in the remote directory spec, otherwise you'll get a copy of the source directory on the local machine.
I like to pipe tar through ssh.
tar cf - [directory] | ssh [username]@[hostname] tar xf - -C [destination on remote box]
This method gives you lots of options. Since you should have root ssh disabled, copying files for multiple user accounts is hard, because you are logging into the remote server as a normal user. To get around this you can create a tar file on the remote box that preserves ownership:
tar cf - [directory] | ssh [username]@[hostname] "cat > output.tar"
For slow connections you can add compression: z for gzip or j for bzip2.
tar cjf - [directory] | ssh [username]@[hostname] "cat > output.tar.bz2"
tar czf - [directory] | ssh [username]@[hostname] "cat > output.tar.gz"
tar czf - [directory] | ssh [username]@[hostname] tar xzf - -C [destination on remote box]
