Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 3 years ago.
I have a directory with strange permissions (drwxr-xr-x+): there is a trailing + as the 11th character, which seems to force all files and subdirectories to assume rwxrwxrwx permissions. The permissions are:
drwxr-x---+ 3 root root 4096 Dec 22 15:33 directory
I want to get rid of this trailing +.
I have tried the following:
chmod 755 directory/
chmod a-x directory/
chmod u=rwx,g=rw,o=x directory/
I have tried the following as well:
sudo chmod u=rwx,g=rx,o-x,u-s,g-s directory/
Any help will be appreciated. Thanks, I am stuck.
The trailing + signifies that an ACL (Access Control List) is set on the directory.
You can use getfacl to see the details:
getfacl directory
The following output is from getfacl Codespace, which has an ACL set by setfacl -m u:umesh:rw Codespace.
Here setfacl grants rw permission on the Codespace directory to the user umesh.
# file: Codespace/
# owner: root
# group: root
user::rwx
user:umesh:rw-
group::r-x
mask::rwx
other::r-x
We can remove the ACL entry using setfacl; for the sample above:
setfacl -x u:umesh Codespace/
More details in man setfacl and man getfacl.
The + when listing a file signifies extended permissions on the file, set via access control lists. If you run getfacl directory, you will see the extended permissions on the directory.
Depending on how the access control lists are set up, run the following to remove entries:
setfacl -x u:username directory
and/or
setfacl -x g:groupname directory
To remove the + from the listing, you may also need to run:
setfacl -x m directory
or simply:
setfacl -b directory
which removes all extended ACL entries; the base ACL entries of the owner, group, and others are retained.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 2 years ago.
I am creating a shared folder for users in the 'development' group. I am having trouble coming up with a series of commands to do this. I need to set the following permissions:
Only members of the development group can create files in it
Users can only delete the files and directories they create
Any new files/folders in the shared directory are associated with the group
Group owner can only read
Owner can read files, but others cannot have r/w access
What series of commands could I use to accomplish this?
I just cannot seem to get this right with chmod, and when I log in as my other users I keep getting "permission denied" when viewing the folder or creating files, even with the sticky bit set.
Angellic Chords,
first you must state in your request whether you have root privileges (login, sudo) to manipulate permissions in the filesystem.
Now split the task into smaller blocks:
a. add users to the developer group (dev_group, assumed to already exist)
root# for user in user1 user2 user3 ... usern
do
usermod -a -G dev_group "$user"
done
b. create developer group directory
mkdir /some/path/to/developer/group/dir
c. assign permissions on the directory: see doc
owner root:dev_group
owner rwx -- can read, create, and change into the directory
group rwx -- can read, create, and change into the directory
other/world r-x -- can read and change into the directory only (check whether this is desirable)
set SGID -- newly created files/directories inherit the group from the directory
set sticky bit -- users may manipulate only their own files/directories
chown root:dev_group [path to directory] # owner root, group dev_group
chmod u=rwx,g=rwx,o=rx [path to directory] # user rwx; group rwx; other r-x
chmod g+s [path to directory] # SGID bit: new files/directories inherit the directory's group
chmod +t [path to directory] # sticky bit: users may manipulate only their own files/directories
or
chmod 3775 [path to directory]
NOTE: execute permission on a directory allows changing into that directory
d. define umask for each user:
user rwx
group r--
other ---
(in shell initialisation scripts such as .bashrc, .profile, ...)
umask u=rwx,g=r,o=
NOTE: if the umask must differ for any valid reason, the user has to change permissions on new files/directories at creation or copy time
More fine-grained access restrictions can be achieved with access control lists (ACLs) and SELinux contexts.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 3 years ago.
I want to copy the letsencrypt folder from my remote EC2 machine to a local folder.
So I run this command:
scp -i key.pem -r ubuntu@ec2-3-188-92-58.us-east-2.compute.amazonaws.com:/etc/letsencrypt my-letsencrypt
Some files are copied, but others fail with this Permission denied error:
scp: /etc/letsencrypt/archive: Permission denied
scp: /etc/letsencrypt/keys: Permission denied
I want to avoid changing the permissions of the EC2 files.
What can I do to copy this folder to my local filesystem?
You are logging in with the account ubuntu on the server, but that account doesn't have the permission needed to read (and therefore copy) all the files. Most likely some of the files are owned by root and are not readable by others.
You can check the permissions yourself with ls -l /etc/letsencrypt.
To copy the files anyway, here are two options:
1. Make a readable copy
On the remote server (logged in via SSH), you can make a copy of the folder and change the permissions of the files:
sudo cp -r /etc/letsencrypt ~/letsencrypt-copy
sudo chown -R ubuntu:ubuntu ~/letsencrypt-copy
Now you can copy the files from there:
scp -i key.pem -r ubuntu@ec2-3-188-92-58.us-east-2.compute.amazonaws.com:letsencrypt-copy my-letsencrypt
2. Copy as root
If you have SSH access to the root account, just copy directly using that account:
scp -i key.pem -r root@ec2-3-188-92-58.us-east-2.compute.amazonaws.com:/etc/letsencrypt my-letsencrypt
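If making a copy on the server or enabling root logins is undesirable, a third option is to stream the directory through sudo tar over SSH (a sketch; it assumes the ubuntu account may run tar via sudo without a password prompt):

```shell
# Sketch: stream /etc/letsencrypt through "sudo tar" over SSH into a local
# directory, avoiding a world-readable copy on the server.
# Assumes the ubuntu account can run sudo non-interactively.
mkdir -p my-letsencrypt
ssh -i key.pem ubuntu@ec2-3-188-92-58.us-east-2.compute.amazonaws.com \
    'sudo tar czf - -C /etc letsencrypt' | tar xzf - -C my-letsencrypt
```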
Here your user needs read permission on the files:
- First, SSH to your remote server: ubuntu@ec2-3-188-92-58.us-east-2.compute.amazonaws.com
sudo su - (make sure you are the root user)
chmod -R 0755 /etc/letsencrypt (directories need the execute bit so another user can traverse them)
Now try the download with scp again.
After the download, put the permissions back to 0700:
chmod -R 0700 /etc/letsencrypt
Check the file permissions for archive & keys. They are likely 400; change them to 600, then try copying again.
chmod -R 600 ./archive ./keys
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
I need to create a user which can only SFTP to a specific directory and take a copy of some information. That is it. I keep looking online and they bring up information about chroot and modifying the sshd_config.
So far I can just:
add the user: useradd sftpexport
create it without a home directory: -M
set its login location: -d /u02/export/cdrs (where the information is stored)
deny it a login shell: -s /bin/false
useradd sftpexport -M -d /u02/export/cdrs -s /bin/false
Can anyone suggest what I am meant to edit so the user can only log in and copy the file off?
I prefer to create a user group sftp and restrict users in that group to their home directory.
First, edit your /etc/ssh/sshd_config file and add this at the bottom.
Match Group sftp
ChrootDirectory %h
ForceCommand internal-sftp
AllowTcpForwarding no
This tells OpenSSH that all users in the sftp group are to be chrooted to their home directory (which %h represents in the ChrootDirectory directive).
Add a new sftp group, add your user to the group, restrict them from shell access, and define their home directory:
groupadd sftp
usermod -g sftp username
usermod -s /bin/false username
usermod -d /home/username username
Restart ssh:
sudo service ssh restart
If you are still experiencing problems, check that the directory permissions are correct on the home directory. Adjust the 755 value appropriately for your setup.
sudo chmod 755 /home/username
EDIT: Based on the details of your question, it looks like you are just missing the sshd_config portion. In your case, substitute sftp with sftpexport. Also be sure that the file permissions on the /u02/export/cdrs directory allow access.
An even better setup (and there are even better setups than what I am about to propose) is to bind-mount the /u02/export/cdrs directory into the user's home directory (a plain symlink pointing outside the chroot will not resolve).
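One more thing to watch: with ChrootDirectory, OpenSSH requires the chroot path and every parent component to be owned by root and not writable by group or others, otherwise the login fails with a "bad ownership or modes for chroot directory" error in the sshd log. A common layout (a sketch; run as root, using the user and group names from this answer) is a root-owned home with one writable subdirectory:

```shell
# Sketch: chroot-safe directory layout for the sftp user; run as root.
# "sftpexport" and the "sftp" group are the names used in this answer.
chown root:root /home/sftpexport                # chroot target must be root-owned
chmod 755 /home/sftpexport                      # and not group/world-writable
mkdir -p /home/sftpexport/upload                # the one directory the user can write to
chown sftpexport:sftp /home/sftpexport/upload
```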
You may need to add a restricted shell so this user can put some files there. You can use the rssh tool for that.
usermod -s /usr/bin/rssh sftpexport
Enable the allowed protocols in the config file /etc/rssh.conf.
Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I have a Linux VPS and a few accounts there. I used SSH with root login to copy files from one account to another (e.g. in the folder /home/firstacc/public_html/forum I typed cp -R * /home/secondacc/public_html/community).
Now when I use regular FTP to edit the files on secondacc, I can't modify them; SmartFTP says permission denied. How do I change the ownership or permissions so they can be edited via regular FTP?
Use chmod to set the permissions (but be careful not to allow any wild process to modify your files) and chown/chgrp to change the ownership/group membership of your files.
Ideally you would create a group (I will call it 'fancyhomepage') where both users are members:
# addgroup fancyhomepage
# adduser firstacc fancyhomepage
# adduser secondacc fancyhomepage
Then make sure that all files you want to share belong to this group and are group-writable:
$ chgrp -R fancyhomepage /home/secondacc/public_html/community/
$ chmod -R g+rwX /home/secondacc/public_html/community/
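To have files created later pick up the group automatically as well, the setgid bit can additionally be set on the directories (a sketch, reusing the path from the commands above):

```shell
# Set the setgid bit on every directory in the tree, so new files and
# subdirectories created there inherit the directory's group.
find /home/secondacc/public_html/community/ -type d -exec chmod g+s {} +
```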
chown -R <user>:<group> on the directory changes the ownership of everything in the directory and below.
Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I am setting up a Linux web server running Apache. I uploaded and untarred my web site's files. The files in the main directory are all visible when I am SSH'd into the system. However, I am blocked from all subdirectories.
If I write:
# cd images
Then I get the error:
-bash: cd: images: Permission denied
I am signed in as ec2-user. I untarred the files as ec2-user, and I doubt there were any permissions in the tar file since I created the archive on a Windows system.
The weird thing is that I am the owner of this directory. Here is a snippet of the output:
ls -l
drw-rw-r-- 19 ec2-user ec2-user 4096 May 4 04:09 images
When I do sudo su and then cd images, everything is fine.
Why do I get "Permission denied" as ec2-user if I am the owner and have rw permission?
You need execute permission too:
chmod +x images
should take care of it. The execute permission for directories translates to a "traverse directory" permission.
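The effect is easy to reproduce (a sketch using a temporary directory; run it as a regular user, since root bypasses these permission checks):

```shell
# Sketch: a directory with rw but no execute bit cannot be entered,
# even by its owner. Run as a regular (non-root) user.
d=$(mktemp -d)
mkdir "$d/images"
chmod u-x "$d/images"      # owner keeps rw, but loses the execute (traverse) bit

cd "$d/images" 2>/dev/null || echo "Permission denied without x"

chmod u+x "$d/images"      # restore the execute bit; cd now works
cd "$d/images" && echo "cd works again"
cd / && rm -rf "$d"
```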
The directory is missing the execute bit, which is essential to be able to cd into it.
A quick fix would be to run this in the directory where you unpacked your files:
# find . -type d | xargs chmod a+x
If you have directories with spaces in them, use the following:
# find . -type d -exec chmod a+x "{}" \;