SFTP failing with "Match Group" clause - linux

I am attempting to set up an sftp server on ubuntu/precise on EC2. I have been successful in adding a new user that can connect via ssh, however once I add the following clause:
Match Group sftp
ChrootDirectory /home/%u
AllowTCPForwarding no
X11Forwarding no
ForceCommand internal-sftp
I can no longer connect at all (SSH or otherwise), and I get the message:
Error: Connection refused
Error: Could not connect to server
I am able to connect with the subsystem set to:
#Subsystem sftp /usr/lib/openssh/sftp-server
Subsystem sftp internal-sftp
Any idea why the ssh server is failing with this "Match" clause? Essentially, everything is working except for the "chroot" part.

OK, solved the issue. Two things were causing a problem:
1. I had to move the "Match" clause to the END of the file; it was in the middle. (Everything after a Match line applies only to matching connections, so any global settings that follow it get swallowed by the block.)
2. There was a permissions issue; I found the answer elsewhere, quoted below.
from: https://askubuntu.com/questions/134425/how-can-i-chroot-sftp-only-ssh-users-into-their-homes
"All this pain is thanks to several security issues as detailed here. Basically the chroot directory has to be owned by root and can't be any group-write access. Lovely. So you essentially need to turn your chroot into a holding cell and within that you can have your editable content.
sudo chown root /home/bob
sudo chmod go-w /home/bob
sudo mkdir /home/bob/writable
sudo chown bob:sftponly /home/bob/writable
sudo chmod ug+rwX /home/bob/writable
And bam, you can log in and write in /writable."

Make sure that /home and /home/%u are chowned to root:root.
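For reference, a minimal sketch of how the relevant part of sshd_config might end up looking (the group name and paths are taken from the question above; sshd -t validates the config before you restart, which matters since a broken config can lock you out):
Subsystem sftp internal-sftp
# Match blocks apply to everything after them, so keep this at the very end
Match Group sftp
ChrootDirectory /home/%u
AllowTcpForwarding no
X11Forwarding no
ForceCommand internal-sftp
Then validate and restart:
sudo sshd -t && sudo service ssh restart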

Related

Unable to create / edit files as non-root through Samba mount

I'm trying to set up a code-server (VS Code in the browser) instance and read/write from a mounted Samba share. Unfortunately, when I try to add a file it gives me an error that I do not have permission to read/write to that folder. When I try to add files with the same credentials on Windows, it does work though. This is the error that VS Code gives me:
Unable to write file
'vscode-remote://localhost:8080/home/user/repository/test'
(NoPermissions (FileSystemError): Error: EACCES: permission denied,
open '/home/gmetitieri/user/test')
If I sudo touch file.txt then the file will be created and added. I already used chmod and added full access to the folder but it still won't work. Is this a credentials thing or am I missing something?
I already tried this answer but it still doesn't let me write as non-root
Edit: This is the command I used to mount the drive (just with different folder names and IP address):
sudo mount -t cifs -o rw,vers=3.0,credentials=/root/.examplecredentials //192.168.18.112/sharedDir /media/share
Considering "non-root through Samba", especially in new releases of OpenSuse (...15.3 -- 15.4), I do few movements into normal configuration panels (no sudo commands or anything technical).
Using Yast Firewall section -- For now (immediate solution):
I turn off the firewall, then see what you can turn on (after this) to keep the samba working with Microsoft Windows.
More details on how to do this with images on my website.
This happens when the directory on the Samba share does not have permission for non-root users.
In your smb4.conf file:
[test]
comment = Test share
path = /path/to/directory
force user = unixuser
valid users = sambauser
In this example, unixuser should be the owner of the files in /path/to/directory. The user logged into Samba in this example is a user called sambauser.
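As a quick sanity check (a sketch assuming a standard Samba install; the path and user names are the placeholders from the example above), validate the config and confirm the Unix-side ownership matches the force user:
testparm -s                                  # parse smb.conf and report any errors
ls -ld /path/to/directory                    # should be owned by unixuser
sudo chown -R unixuser /path/to/directory    # only if it is not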

Linux AWS EC2 Permissions with rsync

I am running a default t2.nano ec2 linux ami. Nothing is changed on it. I am trying to rsync my local changes to the server. There is a permissions issue that I don't know enough about to fix.
My structure is as follows. I'm trying to push my work to the technology directory. The technology directory is mapped to a staging domain. i.e. technology.staging.com
:/var/www/html/technology
That path is from the filesystem root, and it does work fine; it's the rsync that is failing.
When I push locally to that directory, I get a "failed: Permission denied (13)" error.
I'm running an nginx server and assigned permissions to the www directory as follows:
sudo chown -R nginx:nginx /var/www
My user is ec2-user, which is the normal default. Here is where I am tripped up: the var directory is owned by root, and the www directory's ownership is then set to nginx so our server can access the files. I believe I need to give ec2-user access to this directory as well as the nginx user, so that I can rsync my files there and the server will still have access; I'm just unsure of how to do that.
As a test, I created a test directory at this location and it worked successfully.
:/home/ec2-user/test
The permissions here are set for ec2-user, which I'm sure is why it works.
Here's the command I'm running on my local machine to rsync my files which fails.
rsync -azP -e "ssh -i /Users/username/devwork/company/comp.pem" company_technology/ ec2-user@1.2.3.4:/var/www/html/technology
Here's the command that was working.
rsync -azP -e "ssh -i /Users/username/devwork/company/comp.pem" company_technology/ ec2-user@1.2.3.4:/home/ec2-user/test
I have done enough research and testing to know that it's a permissions error, I just can't figure out the right way to solve it. Do I need to create a group, assign both nginx and ec2-user to it, and then give that group the same permission level on the :/var directory?
Side note: what permission level do I pass to chmod to reproduce the permissions that are currently set?
I have server config files in the :/etc/nginx/conf.d/ directory that map to the directories I create inside of :/var/www/html directory so I can have multiple sites hosted on the server.
So in this example, I have a config file at :/etc/nginx/conf.d/technology.conf which maps to the directory at :/var/www/html/technology
Thank you in advance, again, I do feel like I have put forth the research and effort to show that I've gone as far as I know how to do.
The answer made sense after I spent roughly a day playing around. You have to give access to both the ec2-user and the nginx group. I believe you never want to put a user in a group that involves the server itself; I think things would go south.
After changing the owner to both the ec2-user and nginx group, it still didn't work exactly the way I wanted it to. The reason was, I needed the nginx permissions to be updated to what they had when they were assigned the user role.
Basically, the ec2-user had write permissions and the server did not. We wanted the user to have write permissions so it could rsync my local files to the directory on the server, and the nginx group needed the same level of permissions to display the pages. Now that I think about it, the nginx group may have only needed read permissions to display things, but this at least solved the problem for now.
Here is the command I ran on the server to update the ownership and the permissions, as well as the output.
Modify ownership:
sudo chown -R ec2-user:nginx /var/www/html/technology
Modify permissions:
sudo chmod -R u=rwx,g+rwx,o-w technology
The end result: the permissions match, and the ownership is as we expected. The only thing I have to figure out is that after I rsync new files to the server, I need to run the previous commands to update the permissions again. I'm sure that will come to me later, but I hope this helps anyone in the same situation.
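To avoid rerunning the chown/chmod after every sync, one option (a sketch, untested on this exact setup) is to have rsync apply the permissions itself via --chmod; the setgid bit (the 2 in 2775) makes new files inherit the directory's nginx group:
rsync -azP --chmod=D2775,F664 -e "ssh -i /Users/username/devwork/company/comp.pem" company_technology/ ec2-user@1.2.3.4:/var/www/html/technology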

process_usershare_file: stat of failed. Permission denied Samba

I created a shared folder using Samba in Ubuntu so that Windows machines can access it, with the following command:
$ sudo net usershare add documents /home/developer/documents "Developer documents" everyone:F guest_ok=y
I give 777 permissions to the folder:
$ sudo chmod 0777 /home/developer/documents
And then I check what I've done
$ sudo net usershare info --long
The folder is visible from all the Windows machines; however, you can't access it, and you get a "Permission Denied" error.
The message in: /var/log/samba/log.ip-domain is:
process_usershare_file: stat of /var/lib/samba/usershares/backuparsac failed. Permission denied
Then I tried adding some rules to my smb.conf:
[documents]
comment = Documents for Developers
path = /home/developer/documents
browseable = yes
writable = yes
read only = yes
guest ok = yes
directory mask = 0777
but the Permission Denied error keeps coming. Is there anything else I need to do? I need this folder to be accessible from all Windows machines.
NOTE: I use Ubuntu 14.04
The cause is that Samba does not synchronize its users with the system.
This solved the issue in my case, on Kubuntu 14.10:
sudo apt-get install libpam-smbpass
sudo service samba restart
If you don't want to synchronize users with PAM, simply add a user to Samba's password database:
sudo smbpasswd -a <user>
After that, the user will be able to open shared folders on the Samba machine.
Your configuration file seems to be fine.
I reckon there might be a permission issue on the parent folders.
I suggest you check that /home and /home/developer both have 755 rather than 750 permissions.
Then check: sudo -u nobody ls /home/developer/documents
If ls succeeds, Samba is likely to work as you expect as well.
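To check every directory on the path in one go, here is a small sketch using namei from util-linux:
namei -l /home/developer/documents
Each directory listed needs at least execute (x) permission for "other" (or the relevant group) so the Samba processes can traverse down to the share.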

Rsync mkdir permission denied

I am trying to use "Rsync" to copy my spark directory to all the slave machines by this command:
rsync -avL --progress /path/to/spark-0.9.0-incubating ubuntu#<Public_ip_of_slave>:/usr/local`
I am following the instructions on this site:
http://docs.sigmoidanalytics.com/index.php/Setup_hadoop_2.0.0-cdh4.2.0_and_spark_0.9.0_on_ubuntu_aws_cluster
but I am getting a permission denied error when creating the folders at the destination.
Can anyone help me?
The ubuntu user (which you are using for rsync) does not have the appropriate directory permissions on /usr/local at the remote server.
Misconfiguration can result in security issues, so changing the permissions of /usr/local is not recommended. If you wish to do so anyway, run:
ssh ubuntu@remote-server 'sudo chown root:ubuntu /usr/local'
where remote-server is the hostname or IP of the remote server and assuming that ubuntu is an administrator. You may also allow all others to write to the directory:
ssh ubuntu@remote-server 'sudo chmod o+w /usr/local'
but this is more dangerous than the previous option.
Alternatively, you may copy it into your home directory first then issue a sudo command to move the files into /usr/local:
rsync -avL --progress /path/to/spark-0.9.0-incubating ubuntu@remote-server:~
ssh ubuntu@remote-server 'sudo mv ~/spark-0.9.0-incubating /usr/local'
~ will be expanded to the home directory of the user, which in this case is likely to be /home/ubuntu/.
Do remember to change the permissions of /usr/local/spark-0.9.0-incubating as appropriate to allow access to authorized users using the chmod command.
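If the ubuntu user has passwordless sudo on the remote server (the default on Ubuntu EC2 instances), another common pattern - sketched here, so verify on your setup - is to run the remote side of rsync under sudo so it can write to /usr/local directly:
rsync -avL --progress --rsync-path="sudo rsync" /path/to/spark-0.9.0-incubating ubuntu@remote-server:/usr/local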

Setting up FTP on Amazon Cloud Server [closed]

I am trying to set up FTP on an Amazon Cloud Server, but without luck.
I searched around the net and there are no concrete steps on how to do it.
I found those commands to run:
$ yum install vsftpd
$ ec2-authorize default -p 20-21
$ ec2-authorize default -p 1024-1048
$ vi /etc/vsftpd/vsftpd.conf
# --- Add following lines at the end of file ---
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=<Public IP of your instance>
$ /etc/init.d/vsftpd restart
But I don't know where to write them.
Jaminto did a great job of answering the question, but I recently went through the process myself and wanted to expand on Jaminto's answer.
I'm assuming that you already have an EC2 instance created and have associated an Elastic IP Address to it.
Step #1: Install vsftpd
SSH to your EC2 server. Type:
> sudo yum install vsftpd
This should install vsftpd.
Step #2: Open up the FTP ports on your EC2 instance
Next, you'll need to open up the FTP ports on your EC2 server. Log in to the AWS EC2 Management Console and select Security Groups from the navigation tree on the left. Select the security group assigned to your EC2 instance. Then select the Inbound tab, then click Edit:
Add two Custom TCP Rules with port ranges 20-21 and 1024-1048. For Source, you can select 'Anywhere'. If you decide to set Source to your own IP address, be aware that your IP address might change if it is being assigned via DHCP.
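If you prefer the command line over the console, the same two rules can be added with the AWS CLI (a sketch; sg-12345678 stands in for your security group's actual ID):
aws ec2 authorize-security-group-ingress --group-id sg-12345678 --protocol tcp --port 20-21 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-12345678 --protocol tcp --port 1024-1048 --cidr 0.0.0.0/0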
Step #3: Make updates to the vsftpd.conf file
Edit your vsftpd conf file by typing:
> sudo vi /etc/vsftpd/vsftpd.conf
Disable anonymous FTP by changing this line:
anonymous_enable=YES
to
anonymous_enable=NO
Then add the following lines to the bottom of the vsftpd.conf file:
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=<Public IP of your instance>
Your vsftpd.conf file should now contain the lines above - just make sure to replace pasv_address with your public-facing IP address.
To save changes, press escape, then type :wq, then hit enter.
Step #4: Restart vsftpd
Restart vsftpd by typing:
> sudo /etc/init.d/vsftpd restart
You should see a message confirming the restart.
If this doesn't work, try:
> sudo /sbin/service vsftpd restart
Step #5: Create an FTP user
If you take a peek at /etc/vsftpd/user_list, you'll see the following:
# vsftpd userlist
# If userlist_deny=NO, only allow users in this file
# If userlist_deny=YES (default), never allow users in this file, and
# do not even prompt for a password.
# Note that the default vsftpd pam config also checks /etc/vsftpd/ftpusers
# for users that are denied.
root
bin
daemon
adm
lp
sync
shutdown
halt
mail
news
uucp
operator
games
nobody
This is basically saying, "Don't allow these users FTP access." vsftpd will allow FTP access to any user not on this list.
So, in order to create a new FTP account, you may need to create a new user on your server. (Or, if you already have a user account that's not listed in /etc/vsftpd/user_list, you can skip to the next step.)
Creating a new user on an EC2 instance is pretty simple. For example, to create the user 'bret', type:
> sudo adduser bret
> sudo passwd bret
Step #6: Restricting users to their home directories
At this point, your FTP users are not restricted to their home directories. That's not very secure, but we can fix it pretty easily.
Edit your vsftpd conf file again by typing:
> sudo vi /etc/vsftpd/vsftpd.conf
Uncomment the line:
chroot_local_user=YES
Restart the vsftpd server again like so:
> sudo /etc/init.d/vsftpd restart
All done!
Appendix A: Surviving a reboot
vsftpd doesn't automatically start when your server boots. If you're like me, that means that after rebooting your EC2 instance, you'll feel a moment of terror when FTP seems to be broken - but in reality, it's just not running! Here's a handy way to fix that:
> sudo chkconfig --level 345 vsftpd on
Alternatively, if you are using Red Hat, another way to manage your services is with this nifty graphical user interface, which controls which services should automatically start:
> sudo ntsysv
Now vsftpd will automatically start up when your server boots up.
Appendix B: Changing a user's FTP home directory
* NOTE: Iman Sedighi has posted a more elegant solution for restricting users access to a specific directory. Please refer to his excellent solution posted as an answer *
You might want to create a user and restrict their FTP access to a specific folder, such as /var/www. In order to do this, you'll need to change the user's default home directory:
> sudo usermod -d /var/www/ username
In this specific example, it's typical to give the user permissions to the 'www' group, which is often associated with the /var/www folder:
> sudo usermod -a -G www username
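To confirm the change took effect (a quick check; the group name matches the example above):
> id username
www should appear in the groups list. Note that group membership is picked up at login, so reconnect before testing FTP.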
To enable passive ftp on an EC2 server, you need to configure the ports that your ftp server should use for inbound connections, then open a list of available ports for the ftp client data connections.
I'm not that familiar with linux, but the commands you posted are the steps to install the ftp server, configure the ec2 firewall rules (through the AWS API), then configure the ftp server to use the ports you allowed on the ec2 firewall.
So this step installs the FTP server (vsftpd):
> yum install vsftpd
These steps configure the FTP server:
> vi /etc/vsftpd/vsftpd.conf
-- Add following lines at the end of file --
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=<Public IP of your instance>
> /etc/init.d/vsftpd restart
but the other two steps are more easily done through the Amazon console under EC2 Security Groups. There you need to configure the security group assigned to your server to allow connections on ports 20-21 and 1024-1048.
Thanks @clone45 for the nice solution. But I had one important problem with Appendix B of his solution. Immediately after I changed the home directory to /var/www/html, I couldn't connect to the server through SSH or SFTP, because it always showed the following error:
permission denied (public key)
or in FileZilla I received this error:
No supported authentication methods available (server: public key)
But I could access the server through a normal FTP connection.
If you encounter the same error, just undo Appendix B of @clone45's solution by setting the default home directory for the user:
sudo usermod -d /home/username/ username
But when you restore the user's default home directory, the user has access to many other folders outside /var/www/html. So to secure your server, follow these steps:
1- Make an sftponly group
Make a group for all the users whose access you want to restrict to FTP/SFTP access to /var/www/html. To make the group:
sudo groupadd sftponly
2- Jail the chroot
To restrict this group's access to the server via SFTP, you must chroot-jail them so the group's users cannot access any folder except the html folder inside their home directory. To do this, open /etc/ssh/sshd_config in vim with sudo.
Comment out this line:
Subsystem sftp /usr/libexec/openssh/sftp-server
And then add this line below that:
Subsystem sftp internal-sftp
So we have replaced the subsystem with internal-sftp. Then add the following lines below it:
Match Group sftponly
ChrootDirectory /var/www
ForceCommand internal-sftp
AllowTcpForwarding no
After adding these lines, I saved my changes and restarted the SSH service with:
sudo service sshd restart
3- Add the user to the sftponly group
Any user whose access you want to restrict must be a member of the sftponly group, so add them to it (note the -a flag, which appends sftponly to the user's supplementary groups instead of replacing them):
sudo usermod -a -G sftponly username
4- Restrict user access to just /var/www/html
To restrict the user's access to just the /var/www/html folder, we make a directory named 'html' in that user's home directory and then bind-mount /var/www onto /home/username/html as follows:
sudo mkdir /home/username/html
sudo mount --bind /var/www /home/username/html
5- Set write access
If the user needs write access to /var/www/html, then you must jail the user at /var/www, which must have root:root ownership and permissions of 755. You then need to give /var/www/html ownership of root:sftponly and permissions of 775 by running the following commands:
sudo chmod 755 /var/www
sudo chown root:root /var/www
sudo chmod 775 /var/www/html
sudo chown root:sftponly /var/www/html
6- Block shell access
If you want to block shell access to make things more secure, just change the user's default shell to /bin/false as follows:
sudo usermod -s /bin/false username
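One caveat with step 4: a plain mount --bind does not survive a reboot. A sketch of making it persistent via /etc/fstab (paths follow the example above):
/var/www /home/username/html none bind 0 0
Add that line to /etc/fstab and verify it with sudo mount -a before relying on it.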
Great Article... worked like a breeze on Amazon Linux AMI.
Two more useful commands:
To change the default FTP upload folder
Step 1:
edit /etc/vsftpd/vsftpd.conf
Step 2: Create a new entry at the bottom of the file:
local_root=/var/www/html
To apply read, write, and delete permissions to the folders underneath so that you can manage them using an FTP client:
find /var/www/html -type d -exec chmod 777 {} \;
In case you have ufw enabled, remember to allow ftp:
> sudo ufw allow ftp
It took me 2 days to realise that I had ufw enabled.
It will not work until you add your user to the www group with the following command:
sudo usermod -a -G www <USER>
This solves the permission problem.
Set the default path by adding this:
local_root=/var/www/html
Don't forget to update your iptables firewall, if you have one, to let the 20-21 and 1024-1048 port ranges in. Do this in /etc/sysconfig/iptables, adding lines like these:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 20:21 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 1024:1048 -j ACCEPT
And restart iptables with the command:
sudo service iptables restart
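To verify the rules are active after the restart (a quick check sketch):
sudo iptables -L INPUT -n | grep dpt
You should see ACCEPT entries for dpts:20:21 and dpts:1024:1048.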
I've simplified clone45's steps:
Open the ports as he mentioned, then run:
sudo su
sudo yum install vsftpd
echo -n "Public IP of your instance: " && read publicip
echo -e "anonymous_enable=NO\npasv_enable=YES\npasv_min_port=1024\npasv_max_port=1048\npasv_address=$publicip\nchroot_local_user=YES" >> /etc/vsftpd/vsftpd.conf
sudo /etc/init.d/vsftpd restart
I followed clone45's answer all the way to the end. A great article! Since I needed FTP access to install plug-ins on one of my WordPress sites, I changed the home directory to /var/www/mysitename. Then I went on to add my FTP user to the apache (or www) group like this:
sudo usermod -a -G apache myftpuser
After this I still saw this error on WP's plugin installation page: "Unable to locate WordPress Content directory (wp-content)". I searched and found this solution in a wordpress.org support thread: https://wordpress.org/support/topic/unable-to-locate-wordpress-content-directory-wp-content and added the following to the end of wp-config.php:
if(is_admin()) {
add_filter('filesystem_method', create_function('$a', 'return "direct";' ));
define( 'FS_CHMOD_DIR', 0751 );
}
After this my WP plugin was installed successfully.
Maybe worth mentioning in addition to clone45's answer:
Fixing Write Permissions for Chrooted FTP Users in vsftpd
The vsftpd version that comes with Ubuntu 12.04 Precise does not
permit chrooted local users to write by default. By default you will
have this in /etc/vsftpd.conf:
chroot_local_user=YES
write_enable=YES
In order to allow local users to write, you need to add the following parameter:
allow_writeable_chroot=YES
Note:
Issues with write permissions may show up as following FileZilla errors:
Error: GnuTLS error -15: An unexpected TLS packet was received.
Error: Could not connect to server
References:
Fixing Write Permissions for Chrooted FTP Users in vsftpd
VSFTPd stopped working after update
In case you are getting "530 password incorrect", one more step is needed: in the file /etc/shells, add the following line:
/bin/false
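A one-liner sketch for that (it appends the entry only if it is not already present):
grep -qx /bin/false /etc/shells || echo /bin/false | sudo tee -a /etc/shells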
FileZilla is a good FTP tool to set up with Amazon Cloud.
Download the FileZilla client from https://filezilla-project.org/
Click on File -> Site Manager -> New Site
Provide the host name or IP address of your Amazon cloud location (and the port, if any).
Protocol - SFTP (may change based on your requirement)
Login Type - Normal (so the system will not ask for the password each time)
Provide the user name and password.
Connect.
You only need to do these steps once; later it will upload content to the same IP address and same site.
