I am trying to write a bash script to upload some files using lftp, and I need to set the umask to 002. I can't seem to figure out how this is done within the context of lftp.
lftp -c "open sftp://$STAGE_FTP_HOST
user $STAGE_FTP_USER $STAGE_FTP_PASS
cd web/content
mirror -P --only-newer --reverse --delete --verbose --exclude wp-content/uploads --exclude wp-content/cache --exclude .git* "
I have tried setting umask in /etc/pam.d/sshd, ~/.bashrc and /etc/ssh/sshd_config, but nothing has any effect.
To clarify: I need to add group write permission to files and folders on the remote machine. So instead of 755 I need 775, and instead of 644 I need 664.
It seems like there is something specific to lftp that needs to be set that I am just completely missing.
The lftp command chmod -R g+w . should do what you need (it changes the permissions on the remote server).
For new uploads, mirror --no-umask may also help, if the local permissions are correct.
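Putting the answer together with the original script, a sketch of the full upload with the fix applied (same $STAGE_FTP_* variables as above; untested, so treat it as a starting point):
lftp -c "open sftp://$STAGE_FTP_HOST
user $STAGE_FTP_USER $STAGE_FTP_PASS
cd web/content
mirror -P --only-newer --reverse --delete --verbose --no-umask --exclude wp-content/uploads --exclude wp-content/cache --exclude .git*
chmod -R g+w ."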
I have 3 TB of data already copied with rsync.
My command:
rsync -avzP /home <dest-user@dest-server-ip>:/backup/
Unfortunately the file permissions were not preserved. How can I overwrite the owners at destination, so that I don't need to copy everything again?
The issue is with the flag: use a lowercase -p, not the capital -P.
-p, --perms preserve permissions
If you want permissions to be preserved, you must have root privileges:
rsync -avzP /home root@<dest-server-ip>:/backup/
Or:
sudo rsync -avzp --rsync-path "sudo rsync" /home <dest-user@dest-server-ip>:/backup/
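Since the 3 TB are already copied, it may help to know that re-running rsync with -a (which implies -p, -o and -g) as root only fixes up the metadata on files whose content is unchanged; no file data is re-sent. A hedged sketch using the same placeholder host as above:
# run from the source machine as root; files whose size and mtime already
# match are skipped for transfer, but their owners/groups/permissions are updated
sudo rsync -avz /home root@<dest-server-ip>:/backup/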
I am new to Linux. I have a build.sh file which consists of a lot of mkdir commands and some rm commands. But as this is a fresh install in my VB, each time I run the .sh file it says "Permission Denied for creating directory" and fails.
So is there any way to grant directory privileges to all users?
Can anyone help me with this?
Add "sudo" in the beginning of the directory creation command i.e
sudo mkdir dir_name
The issue might be with the directory in which the mkdir command is being run.
Use the command ll or ls -l to check the directory permissions.
If your directory doesn't have write privilege for the current user, you can run
chmod -R u+w /path/to/directory
This might require you to use sudo if permission is denied.
If you want to enable it for all users, run
chmod -R ugo+w /path/to/directory
Alternatively, a quick fix would be to run the build.sh file as root
sudo /path/to/build.sh
However, this approach is not advised unless you always intend to run it as root, since directories created by root will again be unwritable for your normal user.
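As a concrete illustration of the diagnose-then-fix flow above (the path is hypothetical; point it at wherever build.sh creates its directories):
ls -ld /path/to/build/area        # inspect the owner and write bits
chmod -R u+w /path/to/build/area  # grant yourself write access if you own it
./build.sh                        # retry without sudo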
I have to give access to some launcher inside "folder1".
Whenever a new folder is created inside "folder1", I have to give the permissions again by typing sudo chmod -R 0777 folder1. Is there a way that I could permanently enable 0777 for a particular folder, no matter how many new subfolders are created inside it?
I tried the following and it works, but I have to give the permissions again and again:
sudo chmod -R 0777 folder1
You need to add the umask command to the ".profile" file in your home path, then restart your session and create a new folder. All the folders will get full permissions by default for all users.
The command: $ umask 000
Open the .profile file.
Insert the command "umask 000" in the .profile file and save it.
Restart the session and create folders.
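A quick sanity check in a fresh shell, to confirm the new umask was picked up (testdir/testfile are illustrative names):
umask            # should print 0000
mkdir testdir
ls -ld testdir   # drwxrwxrwx: new directories get 0777
touch testfile
ls -l testfile   # -rw-rw-rw-: new files get 0666 (no execute bits)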
I would try setfacl to set the ACLs, including default ACLs, e.g.:
setfacl -m d:u::7,d:g::7,d:o::7 mydir/
setfacl -m u::7,g::7,o::7 mydir/
This way, if you create directories under mydir they will have 0777 permissions, and files will have 0666.
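To confirm the default entries took effect (a quick check, assuming the mydir/ from above):
getfacl mydir/      # lists the access ACL and the default: entries
mkdir mydir/sub
ls -ld mydir/sub    # drwxrwxrwx: the default ACL was inherited by the new folder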
Hope it is useful
Cheers
Let us consider an example,
scriptPath=/home/sharath/Downloads/Atollic_TrueSTUDIO_for_STM32_9.2.0_installer
In the above line of code, if the user is "sharath" then he can access the file/folder. In the same way, if the user is different, how can the script access that folder/file dynamically?
Below is my shell script (.sh file):
#!/bin/bash
set -eu
configLocation=/etc/atollic
scriptPath=/home/sharath/Downloads/Atollic_TrueSTUDIO_for_STM32_9.2.0_installer
family=STM32
arch=x86_64
version=9.2.0
configFile=${configLocation}/TrueSTUDIO_for_${family}_${arch}_${version}.properties
installPath=/opt/Atollic_TrueSTUDIO_for_${family}_${arch}_${version}/
mkdir -p /opt/Atollic_TrueSTUDIO_for_STM32_x86_64_9.2.0/
tar xzf ${scriptPath}/install.data -C /opt/Atollic_TrueSTUDIO_for_STM32_x86_64_9.2.0/
In the last line of the script, ${scriptPath} is different for each user; how can I handle that in the shell script?
Update 1:
If I use ${USER} or ${HOME} or whoami, it returns "root".
Here is my log:
tar (child): /root/Downloads/Atollic_TrueSTUDIO_for_STM32_9.2.0_installer/install.data: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
Update 2:
Currently the user is "root".
Use $HOME for the start of scriptPath, i.e.:
scriptPath=${HOME}/Downloads/Atollic_TrueSTUDIO_for_STM32_9.2.0_installer
I tried a couple of ways and finally found the below solution.
Use the below script to get the user:
myuser=$(users)
echo "The user is " $myuser
Here, users returns the name of the logged-in user (strictly, it lists all users currently logged in, which works when only one user has a session).
Your script becomes:
#!/bin/bash
myuser=$(users)
set -eu
configLocation=/etc/atollic
scriptPath=/home/$myuser/Downloads/Atollic_TrueSTUDIO_for_STM32_9.2.0_installer
family=STM32
arch=x86_64
version=9.2.0
configFile=${configLocation}/TrueSTUDIO_for_${family}_${arch}_${version}.properties
installPath=/opt/Atollic_TrueSTUDIO_for_${family}_${arch}_${version}/
mkdir -p /opt/Atollic_TrueSTUDIO_for_STM32_x86_64_9.2.0/
tar xzf ${scriptPath}/install.data -C /opt/Atollic_TrueSTUDIO_for_STM32_x86_64_9.2.0/
Thanks for answering my question.
Dynamic_Path="/home/$(whoami)/$SCRIPT_PATH"
What is the Linux OS you are using?
You can simply use it as below:
scriptPath=~/Downloads/Atollic_TrueSTUDIO_for_STM32_9.2.0_installer
where ~ refers to the home directory of the user, i.e. /home/sharath
One other way is to use command substitution, like below:
scriptPath="/home/$(whoami)/Downloads/Atollic_TrueSTUDIO_for_STM32_9.2.0_installer"
I am using rsync to replicate a web folder structure from a local server to a remote server. Both servers are ubuntu linux. I use the following command, and it works well:
rsync -az /var/www/ user@10.1.1.1:/var/www/
The usernames for the local system and the remote system are different. From what I have read it may not be possible to preserve all file and folder owners and groups. That is OK, but I would like to preserve owners and groups just for the www-data user, which does exist on both servers.
Is this possible? If so, how would I go about doing that?
** EDIT **
There is some mention of rsync being able to preserve ownership and groups on remote file syncs here: http://lists.samba.org/archive/rsync/2005-August/013203.html
** EDIT 2 **
I ended up getting the desired effect thanks to many of the helpful comments and answers here. Assuming the IP of the source machine is 10.1.1.2 and the IP of the destination machine is 10.1.1.1, I can use this line from the destination machine:
sudo rsync -az user@10.1.1.2:/var/www/ /var/www/
This preserves the ownership and groups of the files that have a common user name, like www-data. Note that using rsync without sudo does not preserve these permissions.
You can also sudo the rsync on the target host by using the --rsync-path option:
# rsync -av --rsync-path="sudo rsync" /path/to/files user@targethost:/path
This lets you authenticate as user on targethost, but still get privileged write permission through sudo. You'll have to modify your sudoers file on the target host to avoid sudo's request for your password. man sudoers or run sudo visudo for instructions and samples.
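For instance, an entry like the following in the target host's sudoers file (illustrative only; substitute the real username and verify the binary's location with "which rsync") allows the privileged rsync without a password prompt:
user ALL=(ALL) NOPASSWD: /usr/bin/rsync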
You mention that you'd like to retain the ownership of files owned by www-data, but not other files. If this is really true, then you may be out of luck unless you implement chown or a second run of rsync to update permissions. There is no way to tell rsync to preserve ownership for just one user.
That said, you should read about rsync's --files-from option: do a first pass for the bulk of the files, then a second privileged pass restricted to the files owned by www-data:
rsync -av /path/to/files user@targethost:/path
find /path/to/files -user www-data -printf '%P\n' | \
rsync -av --files-from=- --rsync-path="sudo rsync" /path/to/files user@targethost:/path
I haven't tested this, so I'm not sure exactly how piping find's output into --files-from=- will work. You'll undoubtedly need to experiment.
As far as I know, you cannot chown files to somebody other than yourself unless you are root. So you would have to rsync using the www-data account, as all files will be created with the specified user as owner. Otherwise you need to chown the files afterwards.
The root users for the local system and the remote system are different.
What does this mean? The root user is uid 0. How are they different?
Any user with read permission to the directories you want to copy can determine what usernames own what files. Only root can change the ownership of files being written.
You're currently running the command on the source machine, which restricts your writes to the permissions associated with user@10.1.1.1. Instead, you can try to run the command as root on the target machine. Your read access on the source machine isn't an issue.
So on the target machine (10.1.1.1), assuming the source is 10.1.1.2:
# rsync -az user@10.1.1.2:/var/www/ /var/www/
Make sure your groups match on both machines.
Also, set up access to user@10.1.1.2 using a DSA or RSA key, so that you can avoid having passwords floating around. For example, as root on your target machine, run:
# ssh-keygen -d
Then take the contents of the file /root/.ssh/id_dsa.pub and add it to ~user/.ssh/authorized_keys on the source machine. You can ssh user@10.1.1.2 as root from the target machine to see if it works. If you get a password prompt, check your error log to see why the key isn't working.
I had a similar problem and cheated with the rsync command:
rsync -avz --delete root@x.x.x.x:/home//domains/site/public_html/ /home/domains2/public_html && chown -R wwwusr:wwwgrp /home/domains2/public_html/
The && runs the chown against the folder only when the rsync completes successfully (a single '&' would background the rsync and run the chown immediately, regardless of the rsync completion status).
Well, you could skip the challenges of rsync altogether, and just do this through a tar tunnel.
sudo tar zcf - /path/to/files | \
ssh user@remotehost "cd /some/path; sudo tar zxf -"
You'll need to set up your SSH keys as Graham described.
Note that this handles full directory copies, not incremental updates like rsync.
The idea here is that:
you tar up your directory,
instead of creating a tar file, you send the tar output to stdout,
that stdout is piped through an SSH command to a receiving tar on the other host,
but that receiving tar is run by sudo, so it has privileged write access to set file owners.
rsync version 3.1.2 (note that the --chown option used below requires rsync 3.1.0 or newer)
I mostly work on Windows locally, so this is the command line I use from Cygwin to sync files with the server (Debian):
user@user-PC /cygdrive/c/wamp64/www/projects
$ rsync -rptgoDvhP --chown=www-data:www-data --exclude=.env --exclude=vendor --exclude=node_modules --exclude=.git --exclude=tests --exclude=.phpintel --exclude=storage ./website/ username@hostname:/var/www/html/website